A lemon tree will provide fruit all through the winter and can make an attractive addition to a decorative garden. A Meyer lemon tree can be kept in a small pot and restricted to the size of a small shrub, or allowed to grow into a full-size lemon tree. Wait to begin pruning your tree until it has grown 3 to 4 feet tall, and hold off on spring pruning until most of the fruit has matured and ripened.
Pruning the Tree
Cut out any dead wood from the tree, as well as any branches that look diseased or damaged. If any branches are hanging inside the lemon tree, remove them. Choose three to five branches inside the tree to act as your scaffold branches, and prune around them. Choose the lowest scaffold branch so that it sits higher than 3 feet from the ground.
Cut off all the branches that are lower than the lowest scaffold branch. To remove each one cleanly, first make an undercut on the bottom of the branch about a quarter of the way through, then cut down from the top; the undercut keeps the branch from tearing bark off the trunk as it collapses downward.
Cut back all vertically growing branches that are not part of the tree's scaffold to about 4-8 inches. Thin out the remaining branches until the lemons receive substantial sunlight.
Train the tree to grow to a certain height by trimming any branches that grow higher than you wish. Doing this often will keep the tree trained to the height you cut it. Cut away any branches that are not part of the scaffolding so that they are 4-6 inches from the trunk.
Step away from the tree and look at the shape. If the shape is not to your liking, even it out with your shears. Doing so regularly will keep the tree growing to your liking.
Most of us know what the three "Rs" of education are: Reading, wRiting, and aRithmetic. But how many of us know what the three "Ds" of learning disorders are: Dyslexia (developmental reading disorder), Dysgraphia (developmental writing disorder), and Dyscalculia (developmental arithmetic disorder)? How many of us know how these specific learning disorders affect people? Notice, I didn't ask how many of us know how these learning disorders affect students. That's because if you have a learning disorder, it affects you all your life, in all areas of your life, not just when you are a student in school. You don't outgrow learning disorders; you learn to cope with them.
Developmental reading disorders are more prevalent than you might think. Most likely, you know someone who has a learning disorder. Experts tell us that up to 17 percent of the population have learning disorders. That’s a lot of people! It’s important for us to understand reading disorders because we may have family members, friends, co-workers etc. who have them. If we understand the specific nature of DRD and the challenges the individuals face who have DRD, we can be more compassionate and helpful.
In this post, I'm going to examine the first "D" of the three "Ds": developmental reading disorder, also known as dyslexia.
- Individuals with DRD have average or above-average intelligence.
- DRD is not connected with the ability to think or understand complex ideas.
- It is not caused by a vision problem.
- DRD is a function of the problems the brain has recognizing and processing symbols.
- Individuals with DRD may have difficulty rhyming and separating sounds when they are listening to someone speak.
- Rhyming and separating sounds are abilities crucial for learning to read.
- DRD may be found in combination with dysgraphia or dyscalculia since all use symbols to convey meaning.
- New research suggests brain scans can predict whether individuals will improve at reading.
- Children with DRD who overcome their reading difficulties bypass brain regions normally used for reading.
Individuals with DRD may have trouble with:
- learning to recognize words;
- determining the meaning of simple sentences.
Before a diagnosis of DRD can be made, the following should be conducted to rule out other causes:
- complete medical, developmental, social, school performance, and family history
- psychoeducational testing
- psychological testing
Treatments can consist of special education services such as:
- Reading specialist help
- Individualized tutoring
- Individualized Education Plan specific to the student
- Psychological counseling to help with self-esteem issues
- Positive reinforcement
Students with reading problems can use software applications like Premier Software to read text to them. I have my students input text by typing or scanning it into a word processing program, and then the software reads the text to them. My students take delight in listening to their text in a variety of male and female voices with different accents. I also encourage my students to listen to the novels and plays in their courses. In "the good old days" I would have these books and plays on tape for my students; now I can get most of them as audiobooks online from places like Audible or in regular bookstores, and store them on MP3 players or discs to lend to students. I've even seen an audio-only bookstore here in town. It's getting much easier to access audiobooks. Although I enjoy reading books and do not have DRD, I also like listening to them. I'm always delighted when I get gift certificates for audiobooks.
- Reading problems can cause behaviour problems or self-esteem problems in school as a reaction to teasing by other students;
- Remediation can help students become better readers, but they will always face reading challenges, even as adults;
- Reading problems can lead to problems in certain careers and occupations;
- Reading problems tend to run in families so families should try to recognize the signs early and seek help as early as preschool;
- Early intervention can give the best results.
I encourage my students who have reading problems or DRD not to define themselves by what they can't do or have difficulty doing. Everyone is challenged in some way. The point is to discover your strengths and use those to help you achieve your best. Find someone to help you with your weaknesses, and in turn use your strengths to help someone with theirs. I have my students determine their multiple intelligences so they are aware of their strengths. We share the information in class, and I encourage them to help one another. As adults we do this, so why shouldn't we teach our students to do it? I think working together and using the various strengths of team members to accomplish a goal is a life skill.
Enjoyed reading this post? Subscribe to Teachers at Risk.
The precise control of the rotational temperature of molecular ions opens up new possibilities for laboratory-based astrochemistry
Chemical reactions taking place in outer space can now be more easily studied on Earth. An international team of researchers from the University of Aarhus in Denmark and the Max Planck Institute for Nuclear Physics in Heidelberg has discovered an efficient and versatile way of braking the rotation of molecular ions.
Ions in a gaseous crystal: An alternating field between rod-shaped electrodes confines magnesium and magnesium hydride ions (red spheres) in a trap. A laser beam is used to cool the particles until they solidify to a crystal in which the distances between the ions are much greater than in a mineral crystal. A German-Danish team of researchers is able to slow down the rotation of the molecular ions with a highly tenuous, cold helium gas (spheres to the left and right of the ion crystal).
© J. R. Crespo/O. O. Versolato/MPI for Nuclear Physics
Cooling down an ion crystal: A cloud of magnesium ions (blue spheres) and magnesium hydride ions (tied blue and green spheres) is confined between the four cylindrical electrodes of a Paul trap. A laser, depicted in this image as a bright transparent strip in the centre, cools the ions so that they solidify into a Coulomb crystal. When helium atoms (purple), which flow into the trap, collide with magnesium hydride ions, the rotation of the latter slows down: the rotational temperature drops.
© Alexander Gingell/Aarhus University
The spinning speed of these ions is related to a rotational temperature. Using an extremely tenuous, cooled gas, the researchers have lowered this temperature to about -265 °C. From this record-low value, the researchers could vary the temperature up to -210 °C in a controlled manner. Exact control of the rotation of molecules is not only of importance for studying astrochemical processes, but could also be exploited to shed more light on the quantum mechanical aspects of photosynthesis or to use molecular ions for quantum information technology.
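The Celsius figures quoted here can be cross-checked against the kelvin values used later in the article (about 7.5 K at the low end and roughly 60 K at the high end). This is just a unit-conversion sketch, not anything from the experiment itself:

```python
# Sanity check of the temperatures quoted in the text: the kelvin and
# Celsius scales differ only by a fixed offset of 273.15.

def kelvin_to_celsius(t_k: float) -> float:
    """Convert a temperature from kelvin to degrees Celsius."""
    return t_k - 273.15

print(kelvin_to_celsius(7.5))   # record-low rotational temperature, about -265 degrees C
print(kelvin_to_celsius(60.0))  # upper end of the range, roughly the -210 degrees C quoted
```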
Cold does not equal cold for physicists. This is because in physics, there is a different temperature associated with each type of motion that a particle can have. How fast molecules move through space determines the translational temperature, which comes closest to our everyday notion of temperature. However, there is also a temperature for the internal vibrations of a molecule, as well as for the rotational motion around their own axes. Similar to a stationary car with its engine running, the internal rotation (the engine, in this case) does not translate into motion before the clutch is released. In the case of molecules, the many microscopic collisions between the particles which constitute gases, fluids, and solids couple the various forms of motion with each other.
The different temperatures thus approach each other over time. Physicists then say that a thermal equilibrium has been established. However, how fast this equilibrium is reached depends on the collision rate, as well as on any external influences working against this equilibration. For example, the infrared radiation emanating from the contraction of an interstellar gas cloud can cause the rotation of molecules to quicken, even without changing the speed at which the molecules are travelling. These kinds of processes take a very long time in the emptiness of space, as there are very few collisions there.
The cooling method for the rotational temperature is quick and versatile
Time hardly matters on cosmic scales, but in laboratory experiments it is crucial. Physicists can nowadays cool the translational motion of molecules relatively quickly to almost absolute zero at -273.15 °C. However, it takes several minutes or hours for the rotation of non-colliding particles to cool to a similar level, making some experiments almost impossible. This may be about to change.
“We have managed to cool down the rotation of molecular ions in milliseconds, and down to lower temperatures than previously possible,” says José R. Crespo López-Urrutia, Group Leader at the Max Planck Institute for Nuclear Physics. The researchers from the Max Planck Institute in Heidelberg and the group led by Michael Drewsen at Aarhus University froze molecular rotational motion at 7.5 K (or -265.65 °C). And not only that, as Oscar Versolato from the Max Planck Institute in Heidelberg, who played an important role in the experiments, explains: “With our methods we can choose and set a rotational temperature between about seven and 60 Kelvin, and are able to accurately measure this temperature in our experiments.” Unlike other methods, this cooling principle is very versatile, being applicable to many different molecular ions.
In their experiments, the team used a cloud of magnesium ions and magnesium hydride ions using methods pioneered in Aarhus. This ensemble was “confined” in an ion trap known as CryPTEx, which was developed by researchers at the Max Planck Institute for Nuclear Physics (see Background). The trap consists of four rod-shaped electrodes that are arranged in parallel, in pairs aligned one above the other and having opposite electrical polarities. A high-frequency alternating voltage is applied to the electrodes to confine the ions in the centre close to the longitudinal axis of the trap. The trap is cooled to a few degrees above absolute zero, and there is an excellent vacuum so that adverse collisions are very rare.
Collisions with cold helium atoms slow down the rotation of the molecular ions
In the trap, the physicists cooled the magnesium ions using laser beams which, to put it simply, slow down the ions with their photon pressure. The magnesium hydride ions in turn cool through their interaction with the magnesium ions. This allowed the researchers to bring the translational temperature of the cloud close to minus 273 degrees Celsius, until several hundred particles solidified to form a regular crystal. In such crystals, the distances between the particles are very large, in contrast to the situation in crystals familiar from minerals. Because the cooling laser causes the particles to emit light, they can be seen at their fixed positions under an optical microscope.
To apply a brake to the rotation of the molecular ions, and thus to reduce their rotational temperature, the team injected an extremely tenuous, cold helium gas into the trap. In the ion crystal, the helium atoms, flying at a leisurely speed, collide with the magnesium hydride ions rotating about their own axes trillions of times per second. Through these collisions, the helium atoms gradually slow down the molecular ions. "This process is similar to the tides," explains José Crespo: "The rotating ion polarizing the neutral helium atom is a little bit like the moon producing the tidal bulges." A dipole is thus induced in the helium atom, which tugs at the rotating molecular ion such that it rotates a little slower.
The helium atoms in the experiment mediate between the various temperatures as they transfer translational kinetic energy to the molecular ions in some collisions and remove rotational energy in others. This effect is also exploited by the team to heat the rotational motion of the molecular ions through the amplification of the regular micro-motion of trapped particles.
Crystal size and shape control the heating of molecular ions
The physicists increase the micro-motion velocity of the molecular ions by varying the shape and size of the ion crystal in the trap: they knead the crystal as it were by means of the alternating voltage which is applied to the trap electrodes. The alternating field that the electrodes produce is equal to zero only along the trap axis. The further the molecular ions are located away from this axis, the more they feel the oscillating force of the field and the more violent is their micro-motion. Part of the kinetic energy of the swirling molecular ions is absorbed by the helium atoms in collisions, and these atoms in turn transfer it to the rotational motion of the ions, thus raising their rotational temperature.
For the Danish-German collaboration, the ability to control the rotation of the molecular ions not only enables the manipulation of the micro-motion, and thus the rotational temperature, but also the quantum-mechanical measurement of this temperature. The scientists do this by exploiting the fact that the rotational motion of the molecules is quantised. Put simply: the quantum states of a molecule correspond to certain speeds of its rotation.
At very cold temperatures the molecules occupy only very few quantum states. The researchers remove the molecules in one quantum state from the crystal by means of laser pulses whose energy is matched to that particular state. From the size of the remaining crystal they determine how many ions were lost, in other words how many ions occupied that particular quantum state. By scanning a few quantum states in this way, they determine the rotational temperature of the molecular ions.
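The state counting described above rests on Boltzmann statistics over the quantised rotational levels. The sketch below illustrates that idea; it is not the authors' analysis code. It models the ion as a rigid rotor and uses an assumed, placeholder rotational constant, since the true value for magnesium hydride ions is not given in the text:

```python
import math

K_B = 0.695  # Boltzmann constant in wavenumbers (cm^-1) per kelvin
B = 6.4      # assumed rotational constant in cm^-1 (illustrative placeholder)

def level_populations(temp_k, j_max=30):
    """Fractional Boltzmann populations of rigid-rotor levels J = 0..j_max.

    Level J has energy E_J = B*J*(J+1) and degeneracy (2J+1).
    """
    weights = [(2 * j + 1) * math.exp(-B * j * (j + 1) / (K_B * temp_k))
               for j in range(j_max + 1)]
    z = sum(weights)  # truncated rotational partition function
    return [w / z for w in weights]

# At 7.5 K nearly all ions sit in the lowest one or two rotational levels,
# so probing just a few states pins down the temperature; at 60 K the
# population spreads over many more levels.
cold = level_populations(7.5)
warm = level_populations(60.0)
print("levels above 1% population at 7.5 K:", sum(p > 0.01 for p in cold))
print("levels above 1% population at 60 K: ", sum(p > 0.01 for p in warm))
```

With these placeholder numbers, only two levels carry appreciable population at 7.5 K, while several do at 60 K, which is why a scan over a handful of states suffices to fix the temperature.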
Accurate control of quantum states is a prerequisite for many experiments
“Being able to control the rotation of the molecular ions and thus the quantum state so accurately is important for many experiments,” says José Crespo. Scientists can therefore recreate in the laboratory chemical reactions that take place in space if they can bring the reactants into the same quantum state in which they drift through interstellar space. Only in this way can one quantitatively understand how molecules are formed in space, and ultimately explain how interstellar clouds, the hotbeds of stars and planets, evolve both physically and chemically.
This speed control knob for rotating molecules could also contribute to a better understanding of the quantum physics of photosynthesis. In photosynthesis, plants use the chlorophyll in their leaves to collect sunlight, whose energy is ultimately used to form sugars and other molecules. It is not yet entirely clear how the energy required for this is quantum mechanically transferred within the chlorophyll molecules. To understand this, the researchers must once again very accurately control and measure the quantum states and the rotation of the molecules involved. The findings thus obtained could serve as the basis for imitating or optimising the photosynthesis at some time in the future in order to supply us with energy.
Last but not least, this control is a prerequisite for quantum simulations as well as for many concepts of universal quantum computations. In quantum simulations physicists mimic a quantum mechanical system that is difficult, or even impossible, to examine directly with another quantum system that is well-known and controllable. In universal quantum computers which physicists are trying to develop, the aim is to process information extremely quickly using the quantum states of particles. Molecules are possible candidates for this, their chances now growing as molecular rotation can be quantum mechanically controlled.
“Our method for the cooling of the rotation of molecules opens up new possibilities in a variety of different fields,” says José Crespo. His team, too, will now use the new method to investigate open questions about the quantum mechanical world.
CryPTEx – a trap for cold ions
CryPTEx, the Cryogenic Paul Trap Experiment, is a cryogenically cooled trap setup developed and built by the team of José R. Crespo López-Urrutia at the Max Planck Institute for Nuclear Physics (MPIK) in Heidelberg based on a trap design by collaborator Michael Drewsen of Aarhus University (AU) in order to investigate highly charged ions (HCI). Production, trapping and spectroscopy with HCI are the fields of expertise of the Max Planck group, which uses various ultrahigh vacuum cryogenic settings for their investigation. These specific conditions required for HCI studies are also very beneficial for the study of molecular ions. The Heidelberg team then moved CryPTEx to Aarhus and commissioned the apparatus there together with the local team. Trapping and manipulation of molecular ions is the specialty of the Aarhus group, which has pioneered many of the laser-based techniques now used in the field. Drewsen saw the novel opportunities for cooling molecular ions in the cryogenic setting, including the application of an ultra-tenuous helium buffer gas. Thus, CryPTEx stayed in Aarhus for one year, where the young scientists from both groups carried out long series of experiments and tested new ideas. During those experiments, ion crystallisation and buffer gas cooling could be achieved simultaneously over a wide range of effective temperatures, down to the lowest ever recorded for a molecular ion.
Dr. José R. Crespo López-Urrutia | Max-Planck-Institute
An international team of scientists has pushed the limits of radio astronomy to detect a faint signal emitted by hydrogen gas in a galaxy more than five billion light years away—almost double the previous record.
The University of Western Australia’s Zadko Telescope and the Parkes Radio Telescope have joined forces in a new mission involving an international team of radio astronomers to hunt for mystery radio bursts in the universe.
Researchers at the Planning and Transport Research Centre (PATREC) at The University of Western Australia have created a biologically inspired computer model that can autonomously design urban residential layouts without human assistance.
A proposal co-sponsored by France and Italy to plan the Southern Hemisphere's first full-scale gravity wave observatory (GWO) near Gingin (80km north of Perth) will be discussed at a three-day international workshop "Physics for the Future" hosted by The University of Western Australia on September 27-29.
Western Australian school students and teachers will meet black hole discoverer Emeritus Professor Roy Kerr when he visits The University of Western Australia and the Gravity Discovery Centre in Gingin next week.
Space junk is becoming such a major problem that if it continues to accumulate at present rates, it will be impossible to launch anything into space in 100 years' time, according to researchers at The University of Western Australia.
A French research team will pay a special visit to the Zadko Telescope in mid-April.
Two members of TAROT (Télescope à Action Rapide pour les Objets Transitoires, or Fast Action Telescope for Transient Objects), one of the Zadko project's collaborating teams, will work closely with the Zadko research team to perform robotic testing of the telescope.
Dr Myrtille Laas-Bourez recently took up her position with the International Centre for Radio Astronomy Research and the School of Physics as the chief scientist in charge of the Zadko robotic control system, CCD camera and technical operations of the Zadko Telescope.
Many of Australia's top scientists and some of the next generation of scientists will help launch Western Australia's biggest telescope at The University of Western Australia next Wednesday, April 1, 2009.
A team of UWA astrophysicists has captured one hour of valuable video footage of the aftermath of a massive gamma ray explosion 11 billion years ago – just a few billion years after the Big Bang. In the January 2009 edition of ScienceNetwork WA, Carmelo Amalfi, discusses how this ancient light was detected for the first time on Earth by a one-metre robotic telescope installed just last year at the Gingin gravitational wave observatory, 70 km north of Perth.
Galileo Galilei, who recorded the first astronomical observations with a telescope 400 years ago, would be impressed. Just in time for the International Year of Astronomy, astronomers at The University of Western Australia have seen a massive gamma ray burst that happened 11 billion years ago - long before our own planet had even been formed.
Starting & Operating
Starting an afterschool program can be an arduous task, particularly in areas where funding and support are scarce. The Administration for Children and Families created a list of considerations for building strong afterschool programs that serve the needs of school-age youth and their families. These considerations include:
Estimate, Measure, and Assess Supply and Demand: New programs are more likely to be successful if they meet an identified need in their community. Program managers should speak to local school officials, parents, or child care resources and referral agencies to determine where there is a need for a particular type of afterschool program.
Develop a Vision: Being able to articulate outcomes is key to attracting families and supporters. For example, some afterschool programs aim to raise academic scores, while others try to prevent youth violence or to promote healthy youth development.
Find Funding and Develop Partnerships: Most programs will likely need some start-up funding to get off the ground. Managers need to learn about federal, state, or local funds as well as look for private and in-kind donations to support afterschool programs.
Meet State Regulations: States have minimum licensing requirements that apply to programs serving children, including afterschool programs. These requirements typically vary for types of providers, and often include separate requirements for school-age care settings. Program managers should contact their state's licensing agency to find out about the requirements.
Plan High-Quality Activities: There is a growing body of information on curricula and activities for afterschool programs and providers. Program managers should familiarize themselves with different types of activities and identify local training opportunities to gain the know-how and resources to serve school-age kids.
Other important considerations are planning resource and personnel needs, including staffing, transportation, location, and hours of service. Program managers should decide on these issues ahead of time in order to properly assess costs and plan funding endeavors (McElvain, 2005).
For more information about starting a program, contact your state Lead Agency for Child Care (PDF, 8 pages).
McElvain, Carol et al. (2005). “Beyond the Bell: Third Edition” Apt Associates. Retrieved from http://www.beyondthebell.org/StartupGuide.pdf
The Child Care Bureau, (2005). “Starting an Afterschool Program: A Resource Guide”.
A Guide to the Kitty Anderson Civil War Diary, 1861
Kitty Anderson was the daughter of Colonel Charles Anderson (1814-1895), attorney, officer in the Union Army, and Ohio Senator and Governor. Charles Anderson was born in Louisville, Kentucky on June 1, 1814. He attended Miami University and in 1835 began practicing law and farming in Dayton, Ohio. He married Eliza J. Brown in September 1835. Anderson was elected to the Ohio Senate in 1844, serving only one term. In 1848, the family moved to Cincinnati, then back to Dayton in 1855 or 1856. In 1859, the family moved to a farm Anderson had purchased near San Antonio, Texas. Anderson was a vocal Union supporter, and as the Civil War broke out, he feared for his family’s safety. As the family was attempting to travel to Brownsville, Anderson was arrested and imprisoned in San Antonio. He soon escaped to Mexico, and the family returned to Dayton. Anderson was commissioned as a colonel in the Ninety Third Ohio Volunteer Infantry in 1862, and he was severely wounded at the Battle of Stones River. He was elected lieutenant governor of Ohio in 1863, serving under Governor John Brough. When Brough died in office on August 29, 1865, Anderson became governor, serving until January 8, 1866. In 1870, Anderson returned to Kentucky. He died in Kuttawa, Kentucky on September 2, 1895.
Note: Creator’s Sketch prepared from collection material and from information contained in the Ohio History Central online encyclopedia, produced by the Ohio Historical Society.
The Kitty Anderson Civil War Diary chronicles events occurring between September 29, 1861 and November 30, 1861, including the arrest of Col. Anderson, his escape to Mexico, and the family's reunion. Kitty Anderson recorded her original diary in 1861; she copied the original diary directly to the diary in this collection in 1871. The collection also includes three cartes de visite contained in the diary, portraying Col. Anderson, his wife, and Kitty Anderson.
Kitty Anderson Civil War Diary, 1861, Dolph Briscoe Center for American History, The University of Texas at Austin.
Start School Later for Better High School Students
Starting school just thirty minutes later at a private high school in Rhode Island left students in better moods, more alert, and less depressed. They were also more likely to attend class. The study, published in the July issue of Archives of Pediatrics & Adolescent Medicine, bolsters evidence that teens have special sleep needs.
About 200 students in grades 9 through 12 at St. George’s School in Newport filled out on-line questionnaires on their sleep habits both before and after the school changed its start time from 8 am to 8:30 am on January 6, 2009. The proportion of students getting at least eight hours of sleep a night jumped from 16.4% to 54.7%. Those getting less than seven hours decreased by almost 80%.
Students reported less daytime sleepiness, improved mood and depression symptoms, and increased interest and motivation to participate in academic and athletic activities. Absences during first period declined and fewer students visited the health center with complaints of fatigue.
Head administrator Eric Peterson also noticed that the teachers were “less frantic” at the start of the day and that everyone at the school ate a healthier breakfast as a result of improved alertness.
The experiment was so successful among all students and faculty members that the school never went back to starting school at 8 am, according to lead author Dr. Judith Owens, pediatric sleep researcher at Hasbro Children’s Hospital.
"Mornings are so much more pleasant at my house I can't even begin to tell you," said Owens, whose daughter participated in the experiment. "Many of the faculty members said the same thing: that it improved the quality of their lives as well as the perception that students were just better rested and more ready to start the day."
Sleep medicine specialists have long known that the circadian rhythm of adolescents differs from that of either children or adults, says Dr. Heidi V. Connolly, chief of the division of pediatric sleep medicine at the University of Rochester Medical Center in New York. A shift of as much as two hours in sleep-wake cycles occurs during puberty, she adds.
Chenab River Railway Bridge
Chenab Railway Bridge
Katra, Jammu-Kashmir, India
1,056 feet high / 322 meters high
1,532 foot span / 467 meter span
There is probably no other natural barrier on earth that has been more formidable to railway engineers than the Himalayan mountain range that stretches across northern India. This became all too obvious when the Indian railway decided to build a line connecting the states of Jammu and Kashmir in the Himalayan foothills of northwestern India. When construction began in earnest in 2002, the engineers did not anticipate the extensive delays that would follow from poor geology, access problems, tunnel excavation difficulties and labor disputes. When the 213 mile (343 kilometer) line finally opens in 2019 (or later), it will be the most expensive stretch of India’s 40,000 mile (64,374 km) railway network.
Of the many large barriers the railway crosses, the most daunting is the wide gorge of the Chenab River. With its headwaters high up in the Himalayan mountain range, the river carved a deep gash that left its elevation more than 1,000 feet (305 meters) below the level of the rail line. The engineers decided the only bridge type suitable for the location would be a massive steel arch - the highest ever built for a railway at 1,056 feet (322 meters) from deck to water. Only an arch is capable of handling the weight of a 300 ton locomotive along with a thousand tons of passenger cars. With a length of 1,532 feet (467 meters), the main span will rank among the world’s 10 longest arches. Although its height will also surpass all of China’s current arch bridges, there are several Chinese railway lines planned that will contain railway bridges that will surpass Chenab in height.
Construction will be done by building the arch outward from both sides of the canyon using the stayed cantilever method. This technique was also used for the similar design of the New River Gorge bridge in West Virginia, U.S.A. The uneven sides of the gorge will result in one side of the arch terminating into the foundation 40 feet (12 meters) higher than the other side.
In September of 2008 it was announced that the Chenab Railway Bridge was canceled despite the completion of the approach viaducts in 2007. Difficult geological conditions on the steep slopes supporting the arch foundations were cited as the reason, as well as the development of a lower, more direct route through tunnels. In 2013 this decision was reversed and the original route is back on track with the bridge being constructed as originally planned.
Whenever the Chenab Railway Bridge is finally completed, it will be more than just another bridge but a prestigious symbol of how far India and its railway engineers have come since the country’s first mile of railway track became operational more than 150 years ago.
Chenab and Anjikhad bridges are located less than 10 miles (16 kms) north of the busy tourist town of Katra. Despite its small population, Katra is loaded with hotels and restaurants due to its proximity to the Vaishno Devi, the second most visited religious shrine in all of India after the Tirumala Venkateswara Temple. Located a mile above sea level, the large complex of white buildings steps down the side of the holy mountain of Vaishno Devi. The Hindu shrine is located about 8 miles (13 kms) from Katra and is visited by millions of people a year. There is an airport in the much larger city of Jammu, located 30 miles (48 kms) south of the Chenab bridge.
The Kashmir valley has always been one of the most isolated regions in India. When the rail line is finally finished, it will finally open up the area to the rest of India and the outside world. For a more extensive history of the railway and its construction visit http://en.wikipedia.org/wiki/Kashmir_railway.
Chenab River canyon with the completed approach spans visible on the right. Image by WSP.
The two construction highline towers that are located more than 700 meters apart.
Chenab Railway Bridge wind tunnel test. Image by WSP.
The approach spans were completed several years before construction began on the main arch.
Chenab Railway Bridge north side staging area and construction site.
Chenab Railway Bridge south side staging area and construction site.
Chenab Railway Bridge topo map.
A wide satellite view shows the Chenab River near the top and the Anjikhad River across the bottom.
Chenab Railway Bridge location map.
Map of the Kashmir Railway route. The Chenab Bridge will be located between the stations of Reasi Road and Katra.
Image by Prashant Chaudhary
The American Academy of Pediatrics recommends that all students get at least 20 minutes to eat lunch, but many public elementary schools give kids just 20 minutes to enter, eat and exit the chaos of the cafeteria. Students often receive less time to get a nutritious meal in their bellies than state governments provide for adult hourly wage-earners. For example, in Colorado, the law requires employers to provide an uninterrupted 30-minute lunch period.
Not so for many kids, including those with sensory challenges and autism spectrum disorder (ASD). For children with ASD, sensory overload in the lunchroom may impair their ability to focus on eating a nutritious lunch. New smells, lights and movement bombard their senses, in addition to unpredictable noises from the kitchen, lunch trays, cash registers and more. If the child is receiving feeding treatment, he may be in early stages of becoming an adventurous eater and may find eating in new stimulating environments especially challenging. He might need more support than some kids to deal with the sensory assault.
I use these four practical tips to help kids with sensory processing challenges focus on entering, eating and exiting the school cafeteria in a short amount of time:
- Practice, practice, practice: This article offers tips for practicing the cafeteria routine at home with younger kids during the summer months, but it’s not too late to start now.
- Create a cue card: It serves multiple purposes! Sew a vinyl pocket on the inside flap of your child’s lunchbox, as described in this tutorial.
- Add a cue card that uses pictures or words to help them stay focused. It might be as simple as “Remember to drink your milk!” or as detailed as the card pictured at right, which I used for an 8-year-old. He needed rules that he and I typed together on his computer to help him remember to eat or he would freeze and even put his head under the table. Over time, as he became more comfortable, he was able to listen to his own body’s cues, filter out external stimuli and eat on his own without the cue card. His vinyl pocket eventually became a spot for a lunch love-note from his parents: “Have a great day! Love you!”
- For smaller kids with ASD, being able to lift the flap of a lunchbox and unobtrusively block out any visual stimuli gives them a chance to regroup before lowering the flap and interacting with friends again.
- The vinyl pocket also serves as a reminder to staff on how to interact with your child. Use the tutorial to sew a pocket on the outer flap too, if needed. Place reminders for the staff in these outer pockets, such as:
- “Please let my child eat what he wants. He is learning to tune in to his own hunger signals.”
- “Please gently remind my son to drink his milk – it’s often all the food he gets at lunch right now.”
- If you are concerned that well-meaning school staff easily turn into food police who may feel the need to comment on the limited selection of foods in your child’s lunchbox, try this: “My daughter is learning to eat new foods. The foods you see may not appear “healthy” but they are a part of her journey to becoming a more adventurous eater. Thank you for not commenting on her choices today.”
- Send no more than five foods to school and all in one, easy-open container. I’ve started counting and kids bring an average of seven different baggies or containers of various foods, with the parent’s hope that “they’ll eat at least one of these!” But it’s overwhelming and most kids don’t unpack everything in their lunchbox. Try a bento box. My favorites are the Yumbox or EasyLunchboxes. Both offer quick-and-easy-open lids (especially important if a child has fine-motor challenges) and the child’s entire lunch goes in the partitioned container. Pack it with “grab and gab” food like bite-size sandwiches, fruit, veggies, etc., to create a smorgasbord of nutrition that quickly fills bellies while kids sit and chat. As a speech-language pathologist, I want to support my kids in pragmatics and other social language. Providing an easy-open, easy-to-eat meal gives them time to try to talk to friends and a chance to practice social skills.
- EAT UP, not clean up: When the lunchroom staff gives the five-minute warning that lunch is almost over, I suggest that they announce it this way: “Five more minutes! That means EAT UP, not clean up” to the kids. When the kids hear only “Five more minutes!” they panic and immediately begin to close their lunchboxes and line up to leave. One other strategy: Ask parents to have “clean-up races” at home with their kids, using the child’s packed lunchbox at a meal. How many seconds does it really take to close the lid, pack up and perhaps even recycle? Thirty seconds at the most – which leaves an extra 4-½ minutes devoted to eating. When time is of the essence, those minutes count!
What tips do you find help kids eat their lunch, even in the chaos of the school cafeteria? I hope you’ll share them in the comments below!
Melanie Potock, MA, CCC-SLP, treats children, birth to teens, who have difficulty eating. She is the co-author of “Raising a Healthy, Happy Eater: A Parent’s Handbook—A Stage by Stage Guide to Setting Your Child on the Path to Adventurous Eating” (Oct. 2015), the author of “Happy Mealtimes with Happy Kids,” and the producer of the award-winning kids’ CD “Dancing in the Kitchen: Songs That Celebrate the Joy of Food!” Melanie@mymunchbug.com
Exercise Your Mindfulness Muscle Through Meditation
There are many articles in popular news reports, blogs and magazines, as well as in scientific journals, indicating that practicing mindfulness and meditation is good for you. Meditation has been linked to:
- stress reduction
- improvement in attention and focus
- reductions in anxiety, depression, ADHD
- reductions in violence in prisons
- changes in brain structure and processes
- improvements in physical symptoms
- pain relief
- and much more..
The concept of mindfulness emerged from Buddhist practices that focus on detachment from self-focus and desire. The process of mindfulness meditation enables you to develop distance from your thoughts and emotional reactions so you can observe your thinking process. This disentangles you from emotional immersion in and attachment to the “chatter” of your mind. Instead of experiencing your thoughts as facts that have the power to cause you distress, you experience them as “just thoughts”. You notice a thought and then move on. You don’t grab onto the thought and ride it to its potentially gruesome end. This technique allows you the potential to serenely sidestep the sturm and drang that your mind is capable of producing.
Mindfulness meditation is most often conducted through attention to the breath. We sit quietly and notice our breathing as it goes in and out. You can pay attention to it as it goes into and out of the nose, or as it raises and lowers your abdomen, or as it moves in and out of the chest (and there are other variations as well). That’s all it is really. Although this seems very simple, and in fact it is, many of us experience it as being very difficult. Often when I ask a patient (or a friend) if they meditate, they tell me, “Oh, I’ve tried that, I can’t do it”. They invariably tell me that when they try to meditate, they can’t stop having thoughts and “can’t clear” their minds. They are too full of thoughts!! So people become frustrated because they can’t stop thinking. This is a common frustration, but it is based on a common misconception: that we are supposed to be clearing our minds. For me, the object of mindfulness meditation is not to clear my mind of thoughts (although that may occur at times), but to notice my thoughts and then to come back to my breathing. The active ingredient here is not a clear mind, but it is the return to the breath. So it is really an exercise: notice that you’re thinking, then return to the breath…..notice that you are thinking and return to the breath…..notice that you are thinking (or feeling), then return to the breath…..The point is to build, as you do at the gym, a mindfulness muscle that becomes stronger and more able, through the exercise of noticing that you are thinking and then gently returning to the breath. Thinking is actually crucial to the practice of mindfulness! The muscle that we are strengthening is the redirection muscle, or perhaps it should be called the detachment muscle. We are strengthening our ability to redirect or detach ourselves from the distractions of our frantic minds. We are strengthening our ability to create a reflective space (i.e. noticing our thoughts) rather than reactively responding to our thoughts. This ultimately helps us to develop an ability to be more thoughtful and reflective, and to be less impulsive, emotional and reactive with the experiences that arise in our minds and in our lives.
The spirit with which you practice mindfulness is also crucial. One of the reasons people give up on meditation is not only because they are mystified by the number of interfering thoughts they have, but also because they are frustrated with how “incompetent” they perceive themselves to be in combating or avoiding those thoughts. They are critical and judgmental with themselves and so meditation becomes a truly unpleasant experience. It is important therefore, to just keep coming back to the breath. There is no reason to judge yourself for thinking or letting your mind wander, because that’s what the mind does. Your goal isn’t to keep the mind from wandering. Your goal is to notice (at some point) that your mind is wandering, and just bring it back to the breath. And when you do judge and criticize yourself, then notice that, and again come back to the breath. Meditation is a practice, an exercise, a process that is ongoing. As they say, in the 12 Step programs: “practice, not perfection”, and that concept is apt here. We are practicing mindfulness and exercising our mindfulness muscle through meditation.
I have some books, articles, resources and blog post recommendations that have helped to develop my thinking, and that you may find helpful:
Most of the time, medicine is helpful. It relieves aches and pains, and fights infections. However, if certain drugs interact, it can result in dangerous side effects.
What if you could predict an unwanted interaction, before it happens? Flowers Hospital is doing just that, with a new electronic medication system.
"It's a very big safety initiative for us in terms of making sure our patients receive that safe quality of care,” said Dan Cumbie, Chief Nursing Officer at Flowers Hospital.
Each patient's wristband has a barcode. By scanning the barcode, nurses can see the patient's chart stat. Doctors, meanwhile, prescribe meds through a fingerprint-sensitive system. Once the medication is dispensed, the nurse visits the patient’s room and scans their wristband.
“That does a number of things. Number one, it checks to make sure it’s the right medication, the right dose, and the right route, being given to the right patient,” explained Cumbie.
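The verification step described in the quote above can be sketched as a simple lookup. This is my own illustration of the general "right patient, right medication, right dose, right route" check; the barcode format, field names, and sample data are hypothetical, not Flowers Hospital's actual schema.

```python
# Hypothetical active orders keyed by patient wristband barcode.
orders = {
    "PT-1001": {"drug": "amoxicillin", "dose_mg": 500, "route": "oral"},
}

def verify_administration(patient_barcode, drug, dose_mg, route):
    """Return a list of mismatches; an empty list means the check passed."""
    order = orders.get(patient_barcode)
    if order is None:
        return ["no active order for this patient"]  # wrong patient
    problems = []
    if order["drug"] != drug:
        problems.append(f"wrong medication: ordered {order['drug']}")
    if order["dose_mg"] != dose_mg:
        problems.append(f"wrong dose: ordered {order['dose_mg']} mg")
    if order["route"] != route:
        problems.append(f"wrong route: ordered {order['route']}")
    return problems

print(verify_administration("PT-1001", "amoxicillin", 500, "oral"))  # []
print(verify_administration("PT-1001", "amoxicillin", 250, "oral"))  # dose flagged
```

A real system would add interaction screening on top of this per-dose check, which is the "red flags" piece described next.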
It will also diagnose red flags.
"The type of interaction, the severity of interaction, and also provides information to help us, clinically, on how to handle those interactions whenever we call the physician back and let them know,” explained Lance Hagler, Clinical Coordinator and Assistant Director Pharmacist at Flowers Hospital.
The program will save patients from drug duplication, overdoses, and it keeps track of their records more efficiently.
Hagler said, “Patients are taking so many medications now, that it's hard for everyone to keep up.”
“We want to make sure that they understand what their side effects are, and also, why they're taking the medication and how important it is that they take it as it's prescribed by the physician,” said Cumbie.
Flowers Hospital has not done away with paper completely, but it is one step closer to curing medical errors.
The construction of a working brain emulation would require, aside from brain scanning equipment and computer hardware to test and run emulations on, highly intelligent and skilled scientists and engineers to develop and improve the emulation software. How many such researchers? A billion dollar project might employ thousands, of widely varying quality and expertise, who would acquire additional expertise over the course of a successful project that results in a working prototype. Now, as Robin says:
They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find. While a few key insights would allow large gains, most gains would come from many small improvements.
Some project would start selling bots when their bot cost fell substantially below the (speedup-adjusted) wages of a profession with humans available to scan. Even if this risked more leaks, the vast revenue would likely be irresistible.
To make further improvements they would need skilled workers up-to-speed on relevant fields and the specific workings of the project’s design. But the project above can now run an emulation at a cost substantially less than the wages it can bring in. In other words, it is now cheaper for the project to run an instance of one of its brain emulation engineers than it is to hire outside staff or collaborate with competitors. This is especially so because an emulation can be run at high speeds to catch up on areas it does not know well, faster than humans could be hired and brought up to speed, and then duplicated many times. The limiting resource for further advances is no longer the supply of expert humans, but simply computing hardware on which to run emulations.
In this situation the dynamics of software improvement are interesting. Suppose that we define the following:
The stock of knowledge, s, is the number of standardized researcher-years that have been expended on improving emulation design
The hardware base, h, is the quantity of computing hardware available to the project in generic units
The efficiency level, e, is the effective number of emulated researchers that can be run using one generic unit of hardware
The first derivative of s will be equal to he; e will be a function of s; and h will be treated as fixed in the short run. In order for growth to proceed with a steady doubling, e must be a very specific function of s, and a different function is needed for each possible value of h. Reduce h much below that level, and the self-improvement will slow to a crawl. Increase h by an order of magnitude over it, and you get an immediate explosion of improvement in software, the likely aim of a leader in emulation development.
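These dynamics can be made concrete with a toy discrete-time simulation (my own sketch, not from the post; the functional form e(s) = e0 * s**alpha and every parameter value are assumptions chosen purely for illustration). With alpha = 1, ds/dt = h * e0 * s gives steady exponential growth, and the growth rate itself scales with h, so a tenfold hardware grant multiplies the exponent rather than the output:

```python
def simulate(h, alpha, e0=0.001, s0=1.0, steps=100):
    """Stock of knowledge s after `steps` unit time steps of ds/dt = h * e(s)."""
    s = s0
    for _ in range(steps):
        e = e0 * s ** alpha  # efficiency: emulated researchers per hardware unit
        s += h * e           # knowledge grows with (hardware) x (efficiency)
    return s

# With alpha = 1 the model doubles steadily, and the growth *rate* scales with h:
slow = simulate(h=10, alpha=1.0)   # roughly 2.7x the starting stock
fast = simulate(h=100, alpha=1.0)  # roughly 13,800x: 10x the hardware yields
                                   # an explosion, not a 10x gain
print(slow, fast)
```

In this toy model, pushing alpha well below 1 flattens growth to a crawl regardless of h, mirroring the claim that the outcome hinges on the hardware base relative to the shape of e(s).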
How will this hardware capacity be obtained? If the project is backed by a national government, it can simply be given a large fraction of the computing capacity of the nation’s server farms. Since the cost of running an emulation is less than high-end human wages, this would enable many millions of copies to run at realtime speeds immediately. Since mere thousands of employees (many of lower quality) at the project had been able to make significant progress previously, even with diminishing returns, this massive increase in the effective size, intelligence, and expertise of the work force (now vastly exceeding the world AI and neuroscience communities in numbers, average IQ, and knowledge) should be able to deliver multiplicative improvements in efficiency and capabilities. That capabilities multiplier will be applied to the project’s workforce, now the equivalent of tens or hundreds of millions of Einsteins and von Neumanns, which can then make further improvements.
What if the project is not openly backed by a major state such as Japan, the U.S., or China? If its possession of a low cost emulation method becomes known, governments will use national security laws to expropriate the technology, and can then implement the plan above. But if, absurdly, the firm could proceed unmolested, then it could likely acquire the needed hardware by selling services. Robin suggests that:
This revenue might help this group pull ahead, but this product will not be accepted in the marketplace overnight. It may take months or years to gain regulatory approval, to see how to sell it right, and then for people to accept bots into their worlds, and to reorganize those worlds to accommodate bots.
But there are many domains where sales can be made directly to consumers across national borders, without emulations ever transferring their data to vulnerable locations. For instance, sped-up emulations could create music, computer games, books, and other art of extraordinary quality and sell it online through a website (held by some pre-existing company purchased by the project or the project’s backers) with no mention of the source of the IP. Revenues from these sales would pay for the cost of emulation labor, and the residual could be turned to self-improvement, which would slash labor costs. As costs fell, any direct-to-consumer engagement could profitably fund further research, e.g. phone sex lines using VoIP would allow emulations to remotely earn funds with extreme safety from the theft of their software.
Large amounts of computational power could also be obtained by direct dealings with a handful of individuals. A project could secretly investigate, contact, and negotiate with a few dozen of the most plausible billionaires and CEOs with the ability to provide some server farm time. Contact could be anonymous, with proof of AI success demonstrated using speedups, e.g. producing complex original text on a subject immediately after a request using an emulation with a thousandfold speedup. Such an individual could be promised the Moon, blackmailed, threatened, or convinced of the desirability of the project’s aims.
Cooperatives are in a unique place among business enterprises because they mix the enterprise with democratic principles of control and do not solely seek to maximize profit, but to maximize the benefits to the members, the owners who are the employees in a worker’s co-op.
Cooperatives are enterprises that are democratically owned and controlled by the people who benefit from them and are operated collaboratively for the purpose of providing services to these beneficiaries or members.
The International Cooperative Alliance defines a cooperative as “an autonomous association of persons united voluntarily to meet their common economic, social and cultural needs and aspirations through a jointly-owned and democratically-controlled enterprise.” A co-op is an enterprise formed by a group of people to meet their own self- defined goals. These goals may be economic, social, cultural, or as is commonly the case, some combination.
In a cooperative, only participants who have met the requirements for membership are allowed to be owners. All cooperatives operate on the principle of “one member, one vote”, so control is allocated evenly among the users of the co-op without regard to how much money each has invested. Cooperatives operate for the benefit of members, and those benefits are distributed in proportion to each member’s transactions with the cooperative.
From “Cooperative Equity and Ownership: An Introduction,” (PDF) Margaret Lund
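The two allocation rules described above (equal votes regardless of capital, and benefits distributed in proportion to each member's transactions) can be sketched in a few lines. The member names and figures below are hypothetical, invented only for illustration:

```python
# Hypothetical worker co-op: equal votes, surplus shared by patronage.
members = {
    "Ana":  {"invested": 5000, "transactions": 20000},
    "Ben":  {"invested":  500, "transactions": 30000},
    "Cruz": {"invested": 2000, "transactions": 10000},
}

# One member, one vote: capital invested plays no role in control.
votes = {name: 1 for name in members}

# Annual surplus is refunded in proportion to transactions with the co-op.
surplus = 6000
total_tx = sum(m["transactions"] for m in members.values())
patronage = {name: surplus * m["transactions"] / total_tx
             for name, m in members.items()}

print(votes)      # every member holds exactly one vote
print(patronage)  # Ana: 2000.0, Ben: 3000.0, Cruz: 1000.0
```

Note the contrast with an investor-owned firm, where Ana's larger capital stake would buy her both more control and a larger share of profits.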
Co-ops come in a wide variety of forms, from worker coops to consumer coops to investor coops. Coops may also use a mix of preferred, common, and non-voting stock, or may utilize subsidiaries or trusts or other methods to incorporate investor capital into the enterprise. For a complex and thorough view of new forms of agricultural coops see: “Understanding New Cooperative Models,” (PDF) Chaddad and Cook
Notable worker co-ops are:
Notable co-op organizations are:
Friday, March 28, 2014
As Dean Kevin Johnson blogged earlier, today is Cesar Chavez day in California. It is wonderful that students in California (and other states) learn about the advocacy work that Cesar Chavez did to help create better working conditions for farm workers and other immigrant workers.
To me, this day is also a call to remember the other unsung heroes of the farm labor movement. There were other advocates who worked alongside Cesar Chavez. Some of them were Filipino farm workers known as "Delano Manongs." One of these manongs was Larry Itliong, a labor organizer who led a group of 1,500 Filipino workers to strike, alongside Cesar Chavez and other Mexican American workers, against the grape growers of Delano, California.
Although I have yet to see the new Cesar Chavez movie, my 5th grader saw it today with her classmates and she explained that the movie showed the collaboration between the Delano Manongs and Cesar Chavez and other farm workers. A new documentary, "Delano Manongs," provides a more in-depth account of the advocacy work of these Filipino farm workers in improving the working and living conditions of immigrant workers.
Moreover, through a bill (AB 123) that was sponsored by Assembly Member Rob Bonta and signed by Governor Jerry Brown in October 2013, students in California will learn about the contributions of Filipino farm workers to the California labor movement.
So on this Cesar Chavez Day, let us honor not only Cesar Chavez but also Larry Itliong and the Mexican American and Filipino American workers (and others) who worked alongside to help improve the lives of farm workers in California and the United States.
... continued from Part I ...
Copyright © 1975, 1998 by Creation Research Society. All rights reserved.
A DECADE OF CREATIONIST RESEARCH
by DUANE T. GISH, Ph.D.
Creation Research Society Quarterly 12(1):34-46 June, 1975
New Guinea Communities and the Migration Dispersion Model
The origin of the peoples of New Guinea is a subject of dispute among anthropologists. Regardless of their origin, New Guineans in the past have tended to isolate themselves in small groups which have become diversified both linguistically and genetically. R. Daniel Shaw compiled data on the ABO, MNS and Rh blood groups for natives of New Guinea in 37 areas spread over the entire island in an attempt to discover any relationships that might aid in correlating these genetic data (31) and might provide some basis for postulating how these diverse groups arose.
Although the data are insufficient to validate any theory, Shaw maintained that his data supported a Migration-Dispersion model for the origin of these New Guinea population groups. According to this model, as individuals migrate in small numbers from a common gene pool, the new group becomes more distinct than the source group. This is so because new generations come from only a limited gene pool and are isolated from the normalizing effect of interbreeding within a large gene pool where all genetic factors are available. Genetic traits peculiar to the group are thus rapidly and strongly expressed because of a high degree of inbreeding.
It is postulated that "Papua-Melanesians" migrated to New Guinea in relatively large numbers. After settling on the coasts of what was probably an uninhabited island, population growth forced these people to migrate up river valleys and into the highlands. These groups became reproductively isolated from one another due to geographic, linguistic and cultural barriers. This gave rise to populations that were genetically diverse from one another, since each migratory group had carried with it only a fraction of the total gene pool.
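The founder-effect dynamics described in the Migration-Dispersion model can be illustrated with a toy simulation. The following is a generic Wright-Fisher-style sketch, not Shaw's method; the starting allele frequency, founder count, population size, and generation count are all hypothetical values chosen for illustration.

```python
import random

def found_colony(source_freq, founders, generations, pop_size, rng):
    """Sample a small founder group from a large source pool, then let
    the allele frequency drift in the isolated colony."""
    # Founder effect: the colony begins with only the gene copies the
    # migrants happen to carry, sampled from the source pool.
    copies = 2 * founders
    freq = sum(rng.random() < source_freq for _ in range(copies)) / copies
    for _ in range(generations):
        # Each generation resamples 2N gene copies from the current
        # frequency, so small populations wander rapidly (genetic drift).
        copies = 2 * pop_size
        freq = sum(rng.random() < freq for _ in range(copies)) / copies
    return freq

rng = random.Random(1)
colonies = [found_colony(0.5, founders=10, generations=50, pop_size=50, rng=rng)
            for _ in range(8)]
print([round(f, 2) for f in colonies])
```

Because each colony draws from a limited gene pool and is then isolated from the normalizing effect of the larger population, the eight simulated colonies typically end up with widely divergent allele frequencies even though all descend from the same 50:50 source pool.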
While evolutionists generally propose that the origin of races required gradual processes over a vast length of time, creationists postulate that a process similar to the one above could have caused the origin of races in a short period of time. The rapid dispersion that took place following the confusion of tongues at Babel (32) would have resulted in the isolation of relatively small groups. Furthermore, the manner in which God bestowed various languages among this previously monolingual human population may have been so directed as to isolate genetically similar individuals in the same language group.
Thus, those individuals having a higher proportion of genes for Negroid features, or for Caucasian features, etc., may have been given a common language. Once the race itself was established through isolation and inbreeding, further migrations and other isolating mechanisms, such as those described above, could account for the diversity within each major racial group.
Pine Cone Spirals and the Fibonacci Series
A curious, but seldom observed, pattern runs through much of nature. (33,34) The reproduction of male bees, the number of spiral floret formations visible in many sunflowers, spiraled scales on pine cones and pineapples, the arrangement of leaves on twigs, and many other structures fit the Fibonacci series. This series, developed by the Italian mathematician Leonardo of Pisa, also known as Fibonacci (1170-1230), is 0, 1, 1, 2, 3, 5, 8, 13, 21, . . . , with each number the sum of the two previous numbers. Harry Wiant's study of the cones of the major southern pines confirmed that, almost without fail, the number of spirals around the cones at a selected point, to the right and left, were adjacent numbers in the Fibonacci series. (34)
Some exhibited counts of 5 and 8, others of 3 and 5. Preliminary studies indicated that approximately 50% of the cones give the maximum count to the right and 50% show the maximum to the left. Wiant suggested that these patterns in nature, in both the plant and animal world, rather than reflecting a random evolutionary process, are indicative of the design of a Creator-God.
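The adjacency property Wiant checked is easy to state in code. A minimal sketch (the function names are mine, not Wiant's):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, ..."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # each term is the sum of the previous two
    return seq[:n]

def adjacent_in_fibonacci(a, b, limit=100):
    """True if a and b occur as consecutive terms of the series, in either
    order, as Wiant reported for the left- and right-hand spiral counts."""
    seq = fibonacci(limit)
    pairs = set(zip(seq, seq[1:]))
    return (a, b) in pairs or (b, a) in pairs

print(fibonacci(9))                   # [0, 1, 1, 2, 3, 5, 8, 13, 21]
print(adjacent_in_fibonacci(5, 8))    # True: a common pine-cone count
print(adjacent_in_fibonacci(3, 5))    # True
print(adjacent_in_fibonacci(4, 7))    # False: neither is a Fibonacci number
```

The counts of 5 and 8, or 3 and 5, reported for the southern pine cones both pass this adjacency test.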
Stability of Bacterial Populations
Basic to the orthodox evolutionary model is the belief that the population of an organism is constantly undergoing change due to mutations and pressures brought on by changes in the environment. Jerry Moore isolated a pure culture of _Proteus mirabilis_, a bacterial species belonging to the family Enterobacteriaceae of the order Eubacteriales, from a clinical source and studied it to determine its stability or variability over a period of time under markedly different conditions. (35)
The organism was serially transferred onto 10 randomly-selected laboratory media and the cultures were held at temperatures ranging from 20 to 37° C. for a period of three months. The conditions of culture and incubation were thus quite varied, yet remained favorable enough at times for hundreds of bacterial generations to occur. After 62 serial transfers, 30 biochemical and antibiotic sensitivity characteristics had not changed from those initially observed, except for a minimal and variable response to Penicillin G. The variable response to the latter may have been due to cell wall damage from exposure of the bacteria to noxious components in the culture media rather than to exposure to Penicillin G.
Moore's experiment, although admittedly limited in scope and duration, does support a natural biologic stability. In his paper, Moore reviewed some examples in the scientific literature of tremendous biologic stability, including a study which indicated that a bacterium had retained its rigid biological characterization during the 150 years it has been subject to investigation.
As mentioned earlier in this article, fundamental to evolutionary thinking is the concept that new varieties within each species are constantly arising via mutations or other genetic variations. The genetic variants that arise by these processes, due to differences in viability, fertility, etc., contribute, via reproduction, differentially to the gene pool of subsequent generations, some leaving more offspring than others.
Those that reproduce a larger proportion of offspring which, in turn live to reproduce in larger numbers, are said to be the most fit. They are said to have been selected by nature, and the evolutionary process is thus a process of mutation with natural selection.
Another concept that is fundamental to evolutionists is the belief that these minor changes, or micromutations, accumulate in such a way that one basic kind of an organism can change into a basically different kind of an organism, and simple organisms will change or evolve into more complex organisms.
Creationists recognize that all organisms have an ability to vary, but they insist that all empirical evidence indicates that this ability is restricted within relatively narrow limits, and that there is no evidence that one kind of an organism has ever arisen from a basically different kind of an organism. They further believe that this ability to produce normal variants (distinguished from pathological variants) was built into each kind by the Creator to enable each kind to survive under a great variety of conditions, and thus to be perpetuated even though conditions may change. Creationists are interpreting biological data according to this concept rather than within evolutionary concepts.
Galapagos Island Finches
Darwin and other evolutionists have supposed that the varieties of finches now living in the Galapagos Islands, a group of islands lying 600 miles and more west of South America, have arisen from migrants from South America. The original migrants, it is believed, were more or less uniform, but mutation with natural selection has given rise over a long period of time to finches that now inhabit the various islands and which possess differences (mainly in size and shape of the bill) in response to variations in the type of food supply found on the several islands.
Creationists interpret these data in much the same way, with some important exceptions. They point out, first of all, that the variation that has apparently occurred among these finches is very limited, for these finches are not only still birds, but they are still finches. Neither the molecule-to-man idea of evolution, nor the idea that basically different kinds of birds, such as ducks, hummingbirds, and vultures, have arisen from a common ancestor is supported by such evidence.
Secondly, creationists believe that the genetic potential, or gene pool, carried to the Galapagos Islands by the migrant finches from South America was sufficient to permit the variation that has occurred. This variability did not arise via mutations, but the potential was already present in the original migrants, which diverged into various forms as a result of the chance arrangement of their original variability potential (the fact that this variability potential existed was not by chance!).
Finally, as the study of these finches by Walter Lammerts (36) showed, the actual divergence that has occurred among these finches is considerably more limited than represented in much of evolutionary literature. Dr. Lammerts studied the large collection of Galapagos Island finches (sometimes called "Darwin's finches") at the California Academy of Science. He particularly noted: 1) the length of each bird from tip of bill to end of tail, 2) the height from belly to top of back, 3) total length of bill, and 4) width of the ventral side of the lower mandible of the bill.
These finches have been classified into four genera, Geospiza, Camarhynchus, Cactospiza, and Certhidea. Those studied by Lammerts bore 17 different species labels. While Lammerts held that the Certhidea, or Warbler finches, are distinctive from the other genera, he stated that the four species within this genus are hardly more than color variations, and should be placed in a single group with species rank rather than genus rank. Lammerts further observed that if all the species labels were removed from the remainder of the Galapagos Island finches and they were arranged according to body and bill size, complete intergradation would be found. The same is true of bill length and width and plumage coloration.
Lammerts noted that the range in variation among these finches, although they are classified into several genera and many species, is exactly comparable to the variation found within a single species of song sparrow, Melospiza melodia. He further pointed out that these finch "genera" are in no way comparable in distinction to the genera Rosa (roses), Frageria (strawberries), and Pyrus (pears), members of the family Rosaceae.
Lammerts considered that it would be much more realistic to classify these finches into a single species. He also emphatically rejected the idea that the variations in size of bill are "adaptive divergences" resulting from natural selection. Present feeding habits, Lammerts emphasized, are the result of the particular types of bills which happened to occur among these birds, rather than the bills developing slowly as an adaptation to differences in the types of food available.
Crowding and Reproductive Rates in Planaria
E. N. Smith has reported on his study of the effect of crowding on asexual reproduction of the planaria Dugesia dorotocephala. (37) As Smith pointed out, there are two possible mechanisms for regulating population densities. Individuals within a population might reproduce maximally near their physiological limit, with the population density being regulated by negative outside forces (predation, disease, starvation, etc.). Those individuals which are better able to compete against these outside forces and reproduce more offspring are said to be more fit and thus to be selected. Alternately, the individuals within a population might possess some internal regulating force which in some way regulates population density and maintains a form of density homeostasis.
Evolutionists generally prefer the former view. Natural selection is said to favor the individuals that can leave the most reproducing offspring. On the other hand, if the alternate view is correct, there would be no real competition between populations and no selection. The postulated cause of the evolutionary process would fail.
The freshwater planaria, Dugesia dorotocephala, reproduce asexually by fissioning. Smith maintained the planaria in identical containers, and conditions in each experiment were the same in each container, except the population density was maintained at different levels. Smith found that crowding clearly reduced the fissioning rate of the planaria. This reduction did not appear to be due to slime, oxygen depletion or carbon dioxide build-up, but appeared to be due to some water-soluble inhibitor produced by the planaria.
The planaria thus appeared to have a built-in density-dependent reproduction regulatory mechanism. Smith postulated that these creatures (and other animals) regulate their own numbers without the necessity of outside forces such as predation, starvation, and disease. He pointed out that built-in density dependent reproduction rates were mandatory after creation and before the fall, and that it is quite conceivable that living organisms had a mechanism for regulating their numbers without intervention of external conditions such as predation, starvation and disease.
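The built-in regulatory mechanism Smith postulated amounts to a per-capita reproduction rate that falls as density rises. A minimal logistic-style sketch of such density-dependent regulation; all parameter values here are purely illustrative, not taken from Smith's data:

```python
def simulate(initial, days, max_rate, capacity):
    """Density-dependent fissioning: the per-capita reproduction rate falls
    linearly with crowding and reaches zero at `capacity`, so the population
    regulates itself without predation, starvation, or disease.
    Parameter values are illustrative, not Smith's measurements."""
    n = float(initial)
    for _ in range(days):
        rate = max_rate * max(0.0, 1.0 - n / capacity)  # crowding inhibits fission
        n += n * rate
    return n

sparse = simulate(initial=5, days=60, max_rate=0.05, capacity=200)
crowded = simulate(initial=150, days=60, max_rate=0.05, capacity=200)
print(round(sparse), round(crowded))
```

In both runs growth slows as density rises, and neither population overshoots the ceiling: an internal, density-dependent brake of this kind is the alternative to regulation by negative outside forces.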
Plant Succession Studies
Walter Lammerts and George Howe used plant succession studies to observe the effect of natural selection under widely divergent conditions. (38) Repeated field analyses were made of variation in five plant species populations including the California poppy, lupine, thistle sage, owl's clover, and a yellow pansy, representing five different plant families. Observations were made over a period of five growing seasons at staked localities in the vicinities of Newhall and Corralitos, California.
Despite great variation in annual precipitation during the study, no gradual shifts or evolutionary trends were evident. The natural selection observed actually restricted the amount of variation, bringing populations back to a typical or normal form during years of moisture stress. Lammerts and Howe concluded that these studies indicated no evidence for natural selection of the type required by evolution theory.
The origin of the great range of variation found in many species of plants was also discussed. Dr. Lammerts concluded that plant variations were supernaturally derived from the originally small populations of plants of the various kinds which survived the Flood. The alternative possibility exists, however, that a sufficiently diverse gene pool within each plant family survived the Flood to give rise to the many plant varieties existing today. The experiments by Howe discussed in the next article have shed some light on this question.
Seed Germination and Plant Survival Following Submersion in Salt and Fresh Water
George Howe undertook a study of the effect of prolonged submersion of seeds of flowering plants in sea and fresh water as an aid in understanding how plants were able to survive the Flood. (39) Seeds from the fruits of five different species and families of flowering plants were tested for germination after soaking in sea water, fresh tap water, and an equal mixture of sea and tap water.
Soaking was continued for a maximum of 140 days, which corresponds roughly to the 150 days during which water prevailed upon the earth during the Flood. At intervals of 4, 8, 12, 16, and 20 weeks after initiation of soaking, seeds of each plant species were removed from the various treatments and placed under favorable germination conditions.
Ability to survive the soaking varied among the plant species tested, but even after a soaking period of 140 days in each of the solutions mentioned above, seeds from three out of the five species tested germinated and grew.
The first suggestion that Howe made in answer to the question of plant survival during the Flood was that many plants did not survive! He pointed out that much destruction of plant life would be expected during a prolonged global flood and that extinction of many species would thus be a predictive consequence of such a flood. Paleobotanical studies have revealed that numerous kinds of plants are found as fossils but which are not found living today.
Howe reviewed several other mechanisms for plant survival during the Flood in addition to the resistance of seeds to soaking. Vegetation, including trees, has been known to be torn away by storms and carried out to sea still embedded in soil masses; survival through prolonged periods of such transport would be possible.
Plant material has been known to have been transported while embedded in icebergs. Seeds that were contained in the carcasses of dead birds floating in sea water have been known to germinate and grow. No doubt many seeds would have been carried on the ark, as well.
From his data and those of others, Howe concluded that a variety of mechanisms were available to account for the survival of plants during the Flood.
Flora and Fauna of the Galapagos Islands
John Klotz visited the Galapagos Islands, made famous by Darwin, and has published an extremely interesting review of the plants and animals which now inhabit these islands, particular attention being given to finches, tortoises, cacti, and iguanas. (40)
About a half dozen of these islands measure 10 to 20 miles across, and one, Albemarle, is 80 miles long. Mountains on these islands rise 2,000 to 3,000 feet above sea level, the highest point being 4,000 feet on Albemarle. Generally the islands are arid and the landscape harsh. Inland and at higher altitudes, there is humid forest with rich black soil and tall trees covered with ferns, orchids, lichens, and mosses. In the very highest areas there is open country with grass, ferns, mosses, and occasional thickets.
Floral and faunal types are relatively few in number. The fauna include only six passerine forms of birds and one species of cuckoo; two types of land mammals (a bat and a rat); and five types of land reptiles, which include a giant tortoise, a lizard, a gecko, a snake, a land iguana, and a marine iguana. There are no amphibians. Domesticated animals have been introduced by settlers.
Klotz devoted a large section of his paper to the finches. He stated that there seems to be no reason to question their origin from a common ancestor. As Klotz noted, evolutionists have generally assumed the origin of all the finch species from a single gravid female, a single pair, or at most a very small number reaching the islands together. Klotz discussed the suggestion of Lammerts (1966), mentioned earlier in this paper, that migration of finches to the Galapagos Islands might have included many pairs, although he did not seem to favor that view.
Klotz, in contrast to Lammerts, maintained that most of the Galapagos Island finch species are actual species rather than mere varieties. There seems to be good evidence on each side, although Lammerts presented some especially convincing evidence. Klotz believes there is no reason to doubt that new species arise or that new species of finches actually did arise on the Galapagos Islands.
Klotz emphasized that origin of species is comparatively only a minor problem for evolutionists. Finches are still finches and there is no evidence of the changes in magnitude required for macroevolution, that is, increase in complexity with origin of one basic kind from another. He thus asserted that the evidence presented by the fauna and flora of the Galapagos Islands did not constitute any real support for amoeba-to-man evolution.
Molecular Approaches to Taxonomy
Taxonomy is the science of classification of plants and animals. It is obvious that there are recognizable groups of organisms in the present world which have many similar characteristics. Such groups have always existed as evidenced both by the fossil record and the Genesis reference to "kinds." The father of taxonomy, Carolus Linnaeus, was a strong believer in creation, and believed, as do modern creationists, that similarities among organisms exist not because of their origin from a common ancestor but because God based His creation on a complex of plans with an underlying thread of unity.
Wayne Frair's approach to taxonomic studies avoids evolutionary presuppositions, his assumption being that the world of life is to be viewed as having risen from certain stem organisms which constitute the original "kinds" mentioned in Genesis. He views the problem of grouping organisms within the kinds and of establishing relationships among the kinds to be the proper function of taxonomists.
Frair's interests as a biologist have included serology and herpetology. He combined elements of both in his taxonomic studies, utilizing antibodies to the serum of turtles as an aid in establishing the taxonomic relationship of these turtles. (41) He injected the blood sera of the turtles into rabbits or chickens in order to establish antibodies to the serum proteins. The antibody-containing serum, or antiserum, was obtained from the rabbits or chickens and mixed with serial dilutions of the serum from the various turtles. The sera from closely related turtles would be expected to give a strong cross-reaction, while sera from distantly related turtles would cross-react weakly or not at all (a cross-reaction is said to be obtained if antiserum generated by injection of serum of species A also reacts, or gives a precipitate, with serum from species B).
Frair's studies did not support the widely held view that snapping turtles belong to a separate family related to the Kinosternidae, but rather should be placed within the Emydid family group. Such a switch is probably minor enough to pose no problem for the evolutionary biologists. Creationists maintain, of course, that taxonomic classification should be established without reference to a supposed evolutionary origin or phylogeny, but should be based strictly on degree of similarity.
Thermodynamics and the Creation-Evolution Problem
Many papers concerned with the relationship of the laws of thermodynamics to the creation-evolution problem have been published in the CRS Quarterly. Emmett Williams, in his most recent paper on the subject, presented an excellent review of that literature. (42) To review those papers here, or even to review in detail Dr. Williams' own outstanding series (43-46), would exceed the scope of this paper; to omit any mention of this work, however, even though it did not involve the collection of new and original data as such, would be a serious omission. I will, therefore, briefly review Williams' series of papers.
Those who hold to the general evolution model postulate that the present universe and all that it contains began in some primordial disordered state. Evolutionary forces have been at work throughout the billions of years since that state existed, it is believed, and have acted in such a way that the highly structured universe and a vast array of incredibly complex organisms have arisen here on the earth. Thus, there has occurred, according to this thinking, at least in the observable part of our universe and particularly on the earth, an immense increase in order and complexity. This supposedly has taken place solely according to mechanistic, naturalistic processes which can be attributed to properties inherent in matter.
If the above were true, then matter obviously must have possessed an inherent ability for organization into higher and higher levels of order and complexity. Scientists should have been able to recognize this universal inherent property of matter and to construct natural laws which describe it. As a matter of fact, scientists have not been able to recognize any such property of matter.
However, scientists have recognized just the opposite tendency in matter. The more probable state of matter is always the more random state. Every change in nature that takes place spontaneously always results in a loss of order. Natural processes always occur in such a way that the complex tend to become less complex, ordered states tend to become disordered. Therefore, this universe is constantly becoming more disordered.
This tendency is so universal and so unfailing it can be expressed as a law - the Second Law of Thermodynamics. The operation of the natural forces which has resulted in man's description of these forces in the form of the Second Law of Thermodynamics has a number of consequences, and thus the Second Law may be defined in several ways. These consequences include the loss of usable energy, the loss of order, and the loss of information. The Second Law may thus be defined in several ways so as to emphasize these several consequences. In discussions of this Law and its relationship to the creation-evolution problem, the loss of order and information consequences are usually emphasized.
In Williams' first paper on this subject, (43) he discussed the operation of the Second Law from the viewpoint of classical thermodynamics (loss of usable energy) and the viewpoint of statistical mechanics (loss of order). Entropy is a thermodynamic quantity which can be defined, in a non-technical sense, as a measure of the randomness of a system - the greater the randomness or disorder within a system the greater the entropy.
An increase in order requires a decrease in entropy, and conversely. The Second Law of Thermodynamics is thus sometimes referred to as the law of increasing entropy. In his first paper, which was the more technical of the series, Williams discussed entropy and the solid state.
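The statistical notion of entropy that Williams works from can be stated compactly in the standard textbook forms (these are general thermodynamics relations, not quotations from Williams' papers):

```latex
% Boltzmann's statistical definition: entropy grows with the number W of
% microscopic arrangements (microstates) consistent with a given macrostate,
% where k is Boltzmann's constant.
S = k \ln W

% Statement of the Second Law: for any spontaneous process in an isolated
% system, the total entropy change is non-negative.
\Delta S \ge 0
```

The first relation makes precise the statement that entropy measures randomness: the more microscopic arrangements a state admits, the higher its entropy, and the second relation says isolated systems move toward such states.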
Following an excellent introduction, including a thorough definition of terms and of the Second Law in thermodynamic and statistical terms, Williams discussed the effect of entropy on the solid state. Contrary to what is commonly believed, crystalline solids are not structurally ordered. There are many imperfections in the lattice structures of such solids, and these imperfections are thermodynamically stable because the entropy of the solid is increased by their presence. Williams emphasized that the principle of increasing entropy is opposed to evolution and to certain aspects of ruin-reconstruction interpretations of Genesis 1.
A simplified explanation of the First and Second Laws of Thermodynamics was given in non-mathematical language in Williams' second paper. (44) That the total amount of energy in the universe is a constant is expressed in the First Law. Since matter and energy are interchangeable, and therefore equivalent, everything in the physical universe is a form of energy and neither increases nor decreases, in perfect agreement with the Biblical pronouncement of a finished creation. Williams explained that evolution could not have occurred unless both the First and Second Laws of Thermodynamics were violated many times. He shows that the three arguments which are usually offered by evolutionists to circumvent the laws of thermodynamics are invalidated by the evidence.
In his third paper (45) Williams asked the question, "Is the universe a thermodynamic system?" One would have to know the answer to that question before one could assert with authority that the laws of thermodynamics apply to the entire universe in addition to our readily observable portion of the universe, where these laws have been tested. Williams asserted that there is no way scientifically to determine the extent of the universe or its thermodynamic character at the present time.
He pointed out, however, that statements in Scripture support the fact that the laws of thermodynamics do apply to the entire universe. The applicability of the First Law is asserted in Genesis 2:1-3 and in II Peter 3:7, and the applicability of the Second Law is made plain in Psalms 102:25, 26, and Romans 8:20-22. Since the universe is subject to these laws of thermodynamics, and no matter or energy exchange can be observed, it is assumed that the universe is an isolated thermodynamic system.
But whether the universe is open, closed or isolated, it is definitely degenerating. No matter what type of a thermodynamic system is chosen, the entropy of the system always increases with the occurrence of an irreversible process. Williams therefore asserted that evolutionists, who demand a decrease in entropy, are in an indefensible position in the face of the Second Law of Thermodynamics.
In his fourth paper (46) Dr. Williams offered an extremely interesting and thorough consideration of the applicability of the laws of thermodynamics to living systems. There is a rather general impression, often stated by evolutionists, that living systems somehow circumvent the Second Law, since the development of a seed or fertilized egg into the adult organism seems to result in an increase in complexity.
As Williams pointed out, this increase in complexity is only apparent and not real. The fertilized egg is as complex, or more so, than any cell in the growing or adult organism. All of the information needed for the production of the adult is present in the egg. No new information is needed or added. As a matter of fact, almost from the moment of conception, loss of information and order via mutations, injuries, and disease begins. This loss of order, or the rate of increase in entropy, slows during development, but never ceases.
The rate of entropy increase accelerates during the aging process and finally results in death, whereupon the organism reaches its maximum entropy state - a pile of dust. If living things circumvented the Second Law of Thermodynamics, they would live forever.
As indicated early in this section, Williams' most recent paper (1973) on thermodynamics in the CRS Quarterly was a review of creationist literature on the relationship of the laws of thermodynamics to the subject of creation and evolution. Publications by Henry M. Morris, R. E. D. Clark, D. Penny, T. G. Barnes, George Mulfinger, Walter Lammerts, I. McDowell, Bolton Davidheiser, G. C. Lockwood, and A. E. Wilder-Smith were cited in this respect. Dr. Williams concluded his 1973 paper with a discussion of evolution in the light of probability considerations, showing that evolution, on the basis of these probability considerations alone, can be shown to be impossible.
A RESEARCH CHALLENGE
In 1970, Larry Butler, then Chairman of the Research Committee of the Creation Research Society, issued a research challenge to creationists in the form of a list of proposed research projects. (47) These included:
(a) experimental demonstration that coal can be formed rapidly under catastrophic conditions (This has actually been demonstrated since then by a University of Utah scientist - see reference 17.);
(b) experimental formation of fossils under a variety of conditions in order to demonstrate that fossilization can take place relatively rapidly;
(c) experimental determination of optimum conditions for rapid growth of coral reefs;
(d) investigation of caves, mine shafts, and tunnels of recent origin (100-200 years) to determine growth rates of stalactites and stalagmites;
(e) anthropological measurements of variations in thickness, shape, etc., of contemporary human skulls.
Other suggested research included:
(a) consideration of the thermodynamic effects of the Flood;
(b) surveys of geological formations from high altitude (40,000 feet) and interpretations of the broad features revealed within the context of Flood geology;
(c) continuation of Howe's investigation of the effect of soaking in sea water on the viability of seeds;
(d) a reinvestigation of alleged examples of species formation;
(e) further research to verify the claim that radioactive decay of uranium and thorium has actually produced only a minute fraction of the helium that should have been produced in 4.5 billion years.
Further projects listed were:
(a) research to determine the true origin of cultivated plants;
(b) carbon dating of samples of organic material that is supposed to be millions of years old and which should thus be devoid of radiocarbon (C-14);
(c) taxonomic studies in an attempt to determine the limits of the "kinds" described in Genesis;
(d) a formulation of a list of "living fossils," that is, a list of plants and animals once believed to have been extinct for millions of years but now known to be living;
(e) finally, an investigation of settling rates to see if differential settling by water action, as proposed by Whitcomb and Morris, (48) can account for the way fossils are distributed in the geological formations.
The list of proposals by Dr. Butler is certainly not exhaustive, of course. For instance, there is the need for: (a) Dr. Barnes to continue his fascinating study of the magnetic field of the earth, (b) a continued search for remains of the ark on Mount Ararat, (c) further investigations of alleged overthrusts, (d) research into the processes and procedures used in radiometric dating, etc.
Butler nevertheless posed a real challenge to creation scientists; and he gave some idea of the important need for creationist research and the possible direction of such research. As is evident from this review, creationists have not been idle during the past decade, and readers can expect that creation scientists will have gained significant insight into many of the problems posed by Dr. Butler before the end of the present decade.
CRSQ = Creation Research Society Quarterly
(1) Creation Research Society is a non-profit organization incorporated in the State of Michigan.
(2) Slusher, H. S. 1966. Supposed overthrust in Franklin Mountains, El Paso, Texas, CRSQ 3(1):59-60.
(3) Lammerts, W. E. 1966. Overthrust faults of Glacier National Park, CRSQ 3(1):61-62.
(4) Burdick, C. L. 1969. The Lewis overthrust, CRSQ 6(2):96-106.
(5) Burdick, C. L. and H. S. Slusher. 1969. The Empire Mountains a thrust fault?, CRSQ 6(1):49-54.
(6) Lammerts, W. E. 1972. The Glarus overthrust, CRSQ 8(4):251-255.
(7) Rusch, W. H., Sr. 1971. Human footprints in rocks, CRSQ 7(4):201-213.
(8) Films for Christ, Route 2, Eden Road, Elmwood, Illinois 61249.
(9) Meister, W. J., Sr. 1968. Discovery of trilobite fossils in shod footprints of human in "Trilobite Beds" - a Cambrian formation, Antelope Springs, Utah, CRSQ 5(3):97-102.
(10) Burdick, C. L. 1973. Discovery of human skeletons in Cretaceous formation, CRSQ 10(2):109-110.
(11) Cousins, F. W. 1966. Fossil man. Evolution Protest Movement. 110 Havant Road, Stoke, Hayling Island, Hants, England; and 1557 Arrow Road, Victoria, British Columbia, Canada. Pp. 47-61.
(12) Burdick, C. L. 1966. Microflora of the Grand Canyon, CRSQ 3(1):38-50.
(13) Burdick, C. L. 1972. Progress report on Grand Canyon palynology, CRSQ 9(1):25-30.
(14) Rusch, W. H., Sr. 1968. The revelation of palynology, CRSQ 5(3):103-105.
(15) Burdick, C. L. 1967. Ararat - the mother of mountains, CRSQ 4(1):5-12.
(16) Coffin, H. G. 1969. Research on the classic Joggins petrified trees, CRSQ 6(1):35-44.
(17) Gish, D. T. 1972. Acts and Facts, 1(4):1-4. (Institute for Creation Research). 1973. Creation: Acts, Facts, Impacts (Creation-Life Publishers, San Diego), pp. 15-19.
(18) Coffin, H. G. 1974. (in) Challenge to Education II-B. The Bible-Science Association, Caldwell, Idaho, pp. 36-41.
(19) Northrup, B. E. 1969. The Sisquoc diatomite fossil beds, CRSQ 6(3):129-135.
(20) Peters, W. G. 1971. The cyclical black shales, CRSQ 7(4):193-200.
(21) Nevins, S. E. 1972. Is the Capitan limestone a fossil reef?, CRSQ 8(4):231-248.
(22) Nevins, S. E. 1974. Post-Flood strata of the John Day Country, Northeastern Oregon, CRSQ 10(4):191-204.
(23) Barnes, T. G. 1971. Decay of the earth's magnetic moment and the geochronological implications, CRSQ 8(1):24-29.
(24) Barnes, T. G. 1972. Young age vs. geologic age for the earth's magnetic field, CRSQ 9(1): 47-50.
(25) Barnes, T. G. 1973. Electromagnetics of the earth's field and evaluation of electric conductivity, current, and joule heating in the earth's core, CRSQ 9(4):222-230.
(26) Barnes, T. G. 1973. The origin and destiny of the Earth's magnetic field. The Institute for Creation Research, San Diego.
(27) Lammerts, W. E. 1965. Planned induction of commercially desirable variation in roses by neutron radiation, CRSQ 2(1):39-43.
(28) Lammerts, W. E. 1967. Mutations reveal the glory of God's handiwork, CRSQ 4(1):35-41.
(29) Lammerts, W. E. 1969. Does the science of genetic and molecular biology really give evidence for evolution?, CRSQ 6(1):5-12.
(30) Tinkle, W. J. 1971. Pleiotropy: extra cotyledons in the tomato, CRSQ 8(3):183-185. (See also a relevant article in this issue.)
(31) Shaw, R. D. 1972. Why genetic variation between New Guinea communities (Migration-dispersion model applied), CRSQ 9(3):175-180.
(32) Genesis 11: 1-9.
(33) Time (April 4, 1969), pp. 48 and 50.
(34) Wiant, H. V. 1973. Relation of southern pine cone spirals to the Fibonacci series, CRSQ 9(4):218-219.
(35) Moore, J. P. 1974. A demonstration of marked species stability in Enterobacteriaceae, CRSQ 10(4):187-190.
(36) Lammerts W. E. 1966. The Galapagos Island finches, CRSQ 3(1):73-79.
(37) Smith, E. N. 1973. Crowding and asexual reproduction of the planaria, Dugesia dorotocephala, CRSQ 10(1):3-10.
(38) Lammerts, W. E. and G. F. Howe. 1974. Plant succession studies in relation to micro-evolution, CRSQ 10(4):208-228.
(39) Howe, G. F. 1968. Seed germination, sea water, and plant survival in the great Flood, CRSQ 5(3):105-112.
(40) Klotz, J. W. 1972. Flora and fauna of the Galapagos Islands, CRSQ 9(1):14-22.
(41) Frair, W. 1967. Some molecular approaches to taxonomy, CRSQ 4(1):18-22.
(42) Williams, E. L. 1973. Thermodynamics: a tool for creationists (Review of recent literature), CRSQ 10(1):38-44.
(43) Williams, E. L. 1966. Entropy and the solid state, CRSQ 3(3):18-24.
(44) Williams, E. L. 1969. A simplified explanation of the laws of thermodynamics, CRSQ 5(4): 138-147.
(45) Williams, E. L. 1970. Is the universe a thermodynamic system?, CRSQ 7(1):46-50.
(46) Williams, E. L. 1971. Resistance of living organisms to the second law of thermodynamics: Irreversible processes, open systems, creation, and evolution, CRSQ 8(2):117-126.
(47) Butler, L. G. 1970. A research challenge, CRSQ 7(2):88-89.
(48) Whitcomb, J. C. and H. M. Morris. 1964. The Genesis Flood. Presbyterian and Reformed Publishing Co., Philadelphia.
Behemoth is the male counterpart to LEVIATHAN, one of the fallen angels and a demon of the deep. Like Leviathan, Behemoth is associated with RAHAB and the sea, and is personified variously as a whale, crocodile and hippopotamus. He is also associated with the ANGEL OF DEATH. Behemoth is sometimes described as overweight and stupid, which is why he is said to encourage gluttony and the pleasures that satisfy hunger. He shapeshifts into various animal forms, and is often depicted as an elephant with a huge stomach.
The Book of Enoch, an apocryphal work, says that Behemoth and Leviathan were separated by God at the time of creation. Leviathan was sent to the sea and Behemoth to an immeasurable desert named Dendain. In the Bible, Job 40:15-24 describes Behemoth as a mighty beast, "the first of the works of God" (40:19). Rabbinic lore holds that on the Day of Judgement he will slay, and be slain by, Leviathan. His fate is to produce meat for the Messiah's feast, and his flesh will be distributed to the faithful. Another rabbinic legend says that God destroyed Leviathan on the day he created both monsters, but placed Behemoth, in the form of a giant ox, on enchanted mountains to fatten him up. There he eats the grass of one thousand mountains each day; the grass grows back each night. Behemoth is doomed to remain there alone until the end of time, because God realized that such a monster could not be loosed upon the world.
"In Christian lore, Behemoth is considered one of the prime representations of evil. The demonologist Johann Weyer, who catalogued the ranks of hell, did not include Behemoth in his list, but did include him in another work, Praestigiorum Daemonum, in which he suggested that Behemoth represents Satan himself. Other demonologists of medieval times did include Behemoth in their rankings."
~ The Book of Demons, Victoria Hyatt and Joseph W. Charles.
NGLAUBER is a system which models the scientific discovery of qualitative empirical laws. As such, it falls into the category of scientific discovery systems. However, NGLAUBER can also be viewed as a conceptual clustering system, since it forms classes of objects and characterizes these classes. NGLAUBER differs from existing scientific discovery and conceptual clustering systems in a number of ways. 1. It uses an incremental method to group objects into classes. 2. These classes are formed based on the relationships between objects rather than just the attributes of objects. 3. The system describes the relationships between classes rather than simply describing the classes. 4. Most importantly, NGLAUBER proposes experiments by predicting future data. The experiments help the system guide itself through the search for regularities in the data.
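The first two properties — incremental grouping, driven by relationships rather than attributes — can be sketched in a few lines of code. This is an illustrative sketch only, not NGLAUBER's actual implementation; the function name, the class structure, and the chemistry-flavored toy data are all invented for the example:

```python
# Sketch: incremental, relationship-based conceptual clustering.
# Each object is described by a set of relationship tuples; objects are
# processed one at a time and joined to the class whose shared relations
# overlap theirs the most, otherwise a new class is started.
def incremental_cluster(objects, min_overlap=1):
    classes = []  # each class: {"members": [...], "relations": set of shared tuples}
    for name, rels in objects.items():
        best, best_score = None, 0
        for cls in classes:
            score = len(cls["relations"] & rels)  # overlap in relationships
            if score > best_score:
                best, best_score = cls, score
        if best is not None and best_score >= min_overlap:
            best["members"].append(name)
            best["relations"] &= rels  # class description = relations all members share
        else:
            classes.append({"members": [name], "relations": set(rels)})
    return classes

# Toy data (hypothetical): acids are grouped because they share reactions.
data = {
    "HCl":  {("reacts_with", "NaOH"), ("reacts_with", "KOH")},
    "HNO3": {("reacts_with", "NaOH"), ("reacts_with", "KOH")},
    "NaOH": {("tastes", "bitter")},
}
for cls in incremental_cluster(data):
    print(sorted(cls["members"]), sorted(cls["relations"]))
```

Note that the class description here is the set of relations shared by all members, which echoes point 3: classes end up characterized by the relationships that hold between groups of objects, not by per-object attributes.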
If you thought spiders weren’t scary enough on their own, maybe it’s time to meet the social spiders – eight-legged creepy crawlers who actually live in colonies and raise their young together. Scientists studying these spiders with a pillowcase, a vibrator and computer paper have discovered that their personalities line up with their division of labor.
The findings, described in the Proceedings of the National Academy of Sciences, show that the division of labor in social colonies can extend far beyond the "caste" systems of insects such as ants or bees, and may apply to other social animals too.
Anelosimus studiosus spiders can be found from North to South America, spinning communal tangle webs on which several dozen spiders may live. The females appear to be one of two types: aggressive or docile. Previous studies have shown that the aggressive females are more likely to attack intruders and defend the web, and that colonies with a mix of the two personality types do better in the long run than colonies with all-aggressive or all-docile spiders.
Division of labor tends to be a winning strategy; it worked for Henry Ford’s assembly lines as they pumped out Model T cars, and it works for well-known colony insects like ants, which have workers of different shapes and sizes for different tasks. Having a specific task to specialize in seems to create the most efficient, productive colony overall.
But here’s the thing: Unless individuals are actually shaped or sized slightly differently (as in the case of leaf-cutter ants), are they actually any better at their particular job? After all, simply having a job doesn’t mean you’re particularly good at it – even if it’s a job that you prefer.
To test the theory, scientists at the University of Pittsburgh and the College of Coastal Georgia nabbed spider colonies in eastern Tennessee by placing a pillowcase over each one, then took them back to the lab. They formed new spider colonies, each consisting of two aggressive and two docile females, daubed a bit of paint on them for easy identification, and then watched to see how they did. They exposed the colonies to a range of stimuli -- for example, they held a handheld erotic vibrator to the webs to see if the spiders would react to some fake "prey" (a fluttering piece of computer paper).
Sure enough, the dominant females were about three times as likely to hunt for prey, defend the nest against intruders and build or repair the web. The docile females, on the other hand, were roughly three times as likely to care for the young.
But does this mean the aggressive and docile spiders were good at their chosen jobs? They picked out single spiders and subjected them to individual challenges, as if they were on a reality TV show.
When the researchers tested the aggressive and docile spiders on an individual basis, they found that the docile spiders were less likely to pounce on easily available prey and that the webs they constructed were sub-par, compared with their aggressive peers. The aggressive females, on the other hand, weren’t so great at brood-rearing – they tended to attack and kill the babies while sharing food with them.
Given that these spiders are all shaped and sized the same, personality has something to do with division of labor in these creatures. And if personality plays a role in spider communities, it could play a role in any social species (including humans).
“Animal personality could be a powerful organizing force for an impressive diversity of animal societies,” the study authors wrote.
Follow me @aminawrite for more news from the world of creepy crawlers.
ATSDR - Safeguarding Communities from Harmful Chemicals: A Five-Part Webinar Series
APHA and the Agency for Toxic Substances and Disease Registry of the Centers for Disease Control and Prevention are proud to co-sponsor a five-part webinar series highlighting the vital work of the ATSDR. The series explores the Agency's role as an integral partner in: determining chemical threats; supporting communities with their environmental health concerns; protecting children and vulnerable populations; and supporting the specific needs of Native Tribes.
Part I – Introducing ATSDR
Introducing ATSDR provides a broad overview of the invaluable contributions ATSDR has made over the past years – from who they are to how they work to protect our communities from harmful chemical exposures.
Part II – ATSDR: Supporting Communities with Tools and Resources
The second webinar, Supporting Communities with Tools and Resources, shares information about resources and support provided by ATSDR to communities that are concerned about chemicals in their environment.
Part III – Informing Decision-Making through Health Assessment
The third webinar, Informing Decision-Making through Health Assessment, describes the health assessment process the ATSDR uses to determine whether chemicals in the environment pose a risk to the health of communities.
Part IV — Advancing Environmental Medicine Practice
The fourth webinar, Advancing Environmental Medicine Practice, explores how ATSDR is integrating environmental health with medicine, as well as the Agency's programs and activities on children's environmental health and reproductive health.
Part V — Working with Tribal Communities
Our fifth and final webinar, Working with Tribal Communities, shares insights regarding environmental health concerns of Native Tribal communities and how ATSDR effectively supports tribal governments in addressing these concerns.
The same natural forces that trigger cold-weather breaks in Denver Water's more than 3,000 miles of underground mains can cause pipes to burst in your own household or business plumbing. Here are some tips for avoiding costly damage.
Before Cold Weather Hits
Know the location of your water shut-off valve and test it regularly
If a pipe breaks, you won't want to have to find it then or, worse, wait for someone to arrive at your place to find it for you. In most single-family homes, the shut-off valve is in the basement or the crawl space, on a wall facing the street.
Keep your meter pit and curb stop valve accessible
If you cannot operate your shut-off valve inside the building, you may need to have your plumber or Denver Water turn off the water at the curb valve near the street. Many valves cannot be operated because they have seized up over the years, or because they are inaccessible because the valve box is full of debris or out of line. Be sure your property has a curb valve, you know where it is, and the valve box is clear of debris, vertical and centered over the valve.
Turn off and drain automatic and manual sprinkler systems before first freeze.
You'll thank yourself in the spring. The alternate freezing and thawing of water in the system can create cracks and weak spots, triggering silent underground leaks or mini-geysers.
Turn off outdoor faucets and be sure to disconnect hoses from them.
Make sure the faucet and the outside portion of the pipes are fully drained. A valve inside many houses will shut off the water's flow; then open and close the tap outside to release any water in the pipe. Disconnect the hose to ensure that freeze-proof faucets will drain and to avoid damage to the hose from freezing water.
Winterize unheated or vacant buildings.
Significant property damage and water loss can occur before burst pipes are discovered in vacant buildings. If your vacant building has a fire protection system, make sure there is no danger that the water servicing this system might freeze.
Insulate water pipes that may be vulnerable to the cold or have caused problems before.
Pipes close to exterior walls or in unheated basements can be wrapped with pieces of insulation. Don't overlook pipes near windows, which can quickly freeze. For particularly difficult pipes, consult a professional on how to select and apply heat tape. Pay special attention to indoor water meters. If the meter freezes, it can cause your basement to flood, and Denver Water will have to replace the meter (at your cost) before you will have water again. Caution: Improper use of heat tape can cause fires. Never put heat tape on the water meter to avoid damaging plastic components of the meter.
During a Deep Freeze (-5 Degrees and Below)
Keep cabinet doors leading to exposed pipes (such as access doors for sinks) open, so that household air can warm them.
The natural flow of warmer air will help combat many problems.
If you have an attached garage, keep its doors shut.
Occasionally, plumbing is routed through this unheated space, leaving it vulnerable to winter's worst.
Crack a faucet farthest from the place where your water enters the house.
A very slow drip will keep water molecules moving, reducing the chance that pipes will freeze. Place a bucket underneath the faucet so the water can be saved for other household uses.
Keep your thermostat set above 65 degrees when leaving your house or business for several days.
If You Think a Pipe Has Already Frozen
Don't wait for nature to take its course:
Thaw the pipe as soon as possible or call a plumber for help.
If you do it yourself, shut off the water or test the shut-off valve.
You don't want water suddenly gushing from the pipe when it thaws.
Remember: When thawing things, slower is better.
Pipes warmed too fast may break.
A hair dryer trained at the frozen area of the pipe is appropriate. A blow torch is not.
How do we come to know what we know? What is knowledge? What is truth? What is reality? These are important questions not only for epistemologists or philosophers who study knowledge, but also for those interested in science, language, values, educational psychology, and even for computer programmers developing artificial intelligence systems. Whether we see knowledge as absolute, separate from the knower and corresponding to a knowable, external reality, or whether we see it as part of the knower and relative to the individual's experiences with his environment, has far-reaching implications.
Wilson (1997) in his description of the evolution of world views notes that, in ancient times, people believed that only God could provide glimpses of the 'real' world. Mathematics and logic had an important role to play in making this knowledge manifest. During the Renaissance, the scientific method evolved as the perceived method of uncovering 'the truth'. The German philosopher Kant later denied this possibility of arriving at a precise grasp of absolute knowledge. Still, the modern view trusted in the ability of science to reveal 'the world'. Postmodernists, argues Wilson, preferred to reject "the idealized view of truth inherited from the ancients and replace it with a dynamic, changing truth bounded by time, space and perspective" (p.2 of online version).
Thus, in the history of epistemology, the trend has been to move from a static, passive view of knowledge towards a more adaptive and active view (Heylighen, 1993). Early theories emphasized knowledge as being the awareness of objects that exist independent of any subject. According to this objectivist view, objects have intrinsic meaning, and knowledge is a reflection of a correspondence to reality. In this tradition, knowledge should represent a real world that is thought of as existing, separate and independent of the knower; and this knowledge should be considered true only if it correctly reflects that independent world. Jonassen (1991) provides a summary of objectivism:
Knowledge is stable because the essential properties of objects are knowable and relatively unchanging. The important metaphysical assumption of objectivism is that the world is real, it is structured, and that structure can be modelled for the learner. Objectivism holds that the purpose of the mind is to "mirror" that reality and its structure through thought processes that are analyzable and decomposable. The meaning that is produced by these thought processes is external to the understander, and it is determined by the structure of the real world. (p.28)
In contrast, the constructivist view argues that knowledge and reality do not have an objective or absolute value or, at the least, that we have no way of knowing this reality. Von Glasersfeld (1995) indicates in relation to the concept of reality: "It is made up of the network of things and relationships that we rely on in our living, and on which, we believe, others rely on, too" (p.7). The knower interprets and constructs a reality based on his experiences and interactions with his environment. Rather than thinking of truth in terms of a match to reality, von Glasersfeld focuses instead on the notion of viability: "To the constructivist, concepts, models, theories, and so on are viable if they prove adequate in the contexts in which they were created" (p.7).
On an epistemological continuum, objectivism and constructivism would represent opposite extremes. Various types of constructivism have emerged. We can distinguish between radical, social, physical, evolutionary, postmodern constructivism, social constructionism, information-processing constructivism and cybernetic systems, to name but some types more commonly referred to (Steffe & Gale, 1995; Prawat, 1996; Heylighen, 1993). Ernest (1995) points out that "there are as many varieties of constructivism as there are researchers" (p.459). Psychologist Ernst von Glasersfeld, whose thinking has been profoundly influenced by the theories of Piaget, is typically associated with radical constructivism - radical "because it breaks with convention and develops a theory of knowledge in which knowledge does not reflect an objective, ontological reality but exclusively an ordering and organization of a world constituted by our experience" (von Glasersfeld, 1984, p.24). Von Glasersfeld defines radical constructivism according to its conception of knowledge: knowledge is not passively received, either through the senses or by way of communication, but actively constructed by the cognizing subject. Cognition is adaptive and allows one to organize the experiential world, not to discover an objective reality (von Glasersfeld, 1989).
In contrast to von Glasersfeld's position of radical constructivism, for many, social constructivism has emerged as a more palatable form of the philosophy. Heylighen (1993) explains that social constructivism "sees consensus between different subjects as the ultimate criterion to judge knowledge. 'Truth' or 'reality' will be accorded only to those constructions on which most people of a social group agree" (p.2). So, while the differences between objectivism and constructivism can be clearly delineated, such is not the case for the differences between the varying perspectives on constructivism. Derry (1992) points out that constructivism has been claimed by "various epistemological camps" that do not consider one another "theoretical comrades". There is considerable debate amongst philosophers, researchers and psychologists about which brand of constructivism is....what should we say? About which brand...is true? right? viable? corresponds to reality?
Constructivist epistemology is obviously difficult to label. Depending on who you are reading, you may get a somewhat different interpretation. Nonetheless, many writers, educators and researchers appear to have come to an agreement about how this constructivist epistemology should affect educational practice and learning. The following section of this site considers what constructivism means for learning. It is an important consideration if we take into account the large and increasing volume of literature and numerous discussions about this new theory of learning. For many, constructivism holds the promise of a remedy for an ailing school system and provides a robust, coherent and convincing alternative to existing paradigms. Can constructivism effectively translate into a learning theory from an epistemology, and from a learning theory to practice? Such is the question that this inquiry considers.
- Posted February 10, 2013 by
Santa Monica, California
Beating Death: Photons the Cure?
Scientists are racing to capture the first film of the mind leaving the body. Researchers on three continents are quietly exploring photon emissions as the key to unexplained phenomena - telepathy, out-of-body experiences, apparitions and life after death.
All these presuppose that the mind or consciousness can somehow leave the body at death in order to create observations spanning thousands of years.
Before the advent of electric light in hospitals, doctors and nurses would occasionally report a faint light hovering over the body just before death.
Now special high-speed cameras with photomultiplier chips are being recruited to capture this event.
If they succeed, it will answer many questions which refer to witnessing the mind somehow leaving the body and apparently existing in a variety of states. The earliest written reports come from Ancient Egypt, Greece and Rome. All report seeing ghosts, including dogs and horses.
It is the similarity of these stories, which parallel those of today, that causes researchers to wonder if it's real. They also note the religious pictures, common to each major religion and going back thousands of years, that depict halos of light surrounding figures.
How would this depiction have arisen to be so common for diverse cultures spanning thousands of years?
Photons rising from the body are a likely candidate to explain these events. Researchers note that (1) photons can store unlimited quantities of information, as MIT and IBM have recently shown; (2) photons can survive for billions of years, as starlight from distant stars demonstrates; and (3) photons respond to powerful quantum laws and can be shaped in the lab to conform to instant transmission of information.
Photons are now seen as a plausible answer to explain things which formerly seemed impossible.
The American researchers plan to photograph the mind as photons leaving the body, in 3D and IMAX, as a wonder of discovery, with William Shatner being considered as narrator.
Perhaps once again Mr. Shatner can be recruited to lead us all out into yet another new frontier. Read all the info and published research on:
Posts with the tag: "mosquito control"
Mosquito control is key in preventing Dengue and Zika, two mosquito-borne illnesses. Hawaii is currently experiencing a Dengue outbreak on the Big Island with 259 confirmed cases; additionally, it is likely that Zika will become a problem in Hawaii, since we have the mosquito vectors, Aedes aegypti and A. albopictus. Both of these mosquitoes are black and white striped and can vector Zika and Dengue.
Older children and adults tend to have worse cases than young children, and sometimes serious problems can develop, which can include enlargement of the liver and failure of the circulatory system. In the worst cases, the symptoms may progress to excessive bleeding, shock, and death.
Mosquitoes have been a problem all over Hawaii since these biting insects were introduced from bilge water carried in on whaling ships in the early 19th century. They can be found on all the islands, but the wetter the island, the more mosquitoes there are. Let's take a look at these nasty little buggers, and see if there is something we can do about them driving us crazy.
Umsdos maps Linux files directly to Ms-DOS files. This is a one-for-one translation. File content is not manipulated at all; Umsdos only works on names. For special files (links and devices, for example), it introduces special management.
For each directory, there is a file named --linux-.---.
Umsdos can be thought of as a general-purpose superset of the Ms-DOS file system of Linux. In fact, this capability or flexibility yields much confusion about Umsdos. Here is why. Try to mount a newly formatted DOS floppy like this.
mount -t umsdos /dev/fd0 /mnt
And do this,
ls / >/mnt/LONGFILENAME ls -l /mnt
You will get the following result
-rwxr-xr-x 1 root root 302 Apr 14 23:25 longfile
So far, it seems that the Umsdos file system does not do much more (in fact nothing at all) than the normal Ms-DOS file system of Linux.
Pretty unimpressive so far. Here is the trick. Unless promoted, a DOS directory will be managed by Umsdos the same way the Ms-DOS file system manages it. Umsdos uses a special file, --linux-.---, in each subdirectory to achieve the translation between its extended capabilities (long names, ownership, etc.) and the limitations of the DOS file system. This file is invisible to Umsdos users, but visible when you boot DOS. To avoid uselessly cluttering the DOS partition with those --linux-.--- files, the file is now optional. If absent, Umsdos behaves like Ms-DOS.
When a directory is promoted, any subsequent operation will be done with the full semantics normally available to Unix and Linux users, and all subdirectories created afterward will be silently promoted.
This feature allows you to logically organize your DOS partition into DOS stuff and Linux stuff. It is important to understand that those --linux-.--- files do take some space (generally 2k per directory). DOS generally uses large clusters (as big as 16k for a 500 MB partition), so avoiding --linux-.--- files everywhere can save your day.
A directory can be promoted at any time using /sbin/umssync. Promoting a directory creates the --linux-.--- file from the current content of the directory. /sbin/umssync maintains an existing --linux-.---: it does not create it from scratch every time. It simply adds missing entries (files created during a DOS session) and removes from --linux-.--- the files which no longer exist in the DOS directory. The utility gets its name from that: it puts --linux-.--- in sync with the underlying DOS directory.
/sbin/umssync at boot time
It is a good idea to place a call to /sbin/umssync at the end of your /etc/rc.d/rc.S if it is not already there. The following command is adequate for most systems:
/sbin/umssync -r99 -c -i+ /
The -c option prevents umssync from promoting directories; it will only update existing --linux-.--- files.
This command is useful if you access Linux directories during a DOS session. Linux has no efficient way to tell that a directory has been modified by DOS, so Umsdos can't do a umssync operation as needed. Unless you use umssync on a directory where files have been added or removed by DOS, you will notice some problems.
Post Classical Era
"Worlds Together, Worlds Apart"
Books and Print Resources:
Link to our catalog.
R 200.3 Encyclopedia
Encyclopedia of World Religions
R 909 Global
Global History: The Spread of Religions and Empires
R 909 March
Timeframe AD 600-800: The March of Islam
R 909.07 Middle
The Middle Ages
R 909.098 Western
Western Civilization: Vol. 1: From the Origins of Civilization to the Age of Absolutism
R 911 Harper
The Harper Atlas of World History
R 960.03 New
The New Encyclopedia of Africa
R 970.004 Gale
The Gale Encyclopedia of Native American Tribes
Provides access to articles, primary sources, and images, maps
and charts for the study of world history.
Provides access to documents and multimedia resources relating
to the origin and development of world cultures.
Provides access to in-depth, original profiles of significant
historical and cultural people.
Illustrates the interactions that took place in the Indian Ocean and its impact
on the development of civilizations throughout time.
Provides information about the Mongolian Empire, 1000-1500.
Information about the Byzantine Empire
Navigates through 3000 years of World History with links to important
persons, events and maps of world historical importance.
Provides access to links and internet sources relating to medieval studies.
Provides a list of links to support the study of ancient cultures.
Average wage earner needs definition, too
In a recent letter (Depends on your definition of "rich," May 11), the writer claims to be "an average American paying about a 30 percent tax rate" and would like not to be among those considered "rich" in any new tax law proposed by President Obama.
If we need to define "rich," then we probably need to define "average" as well. U.S. Government statistics provide both the median and average household incomes. The median household income in 2011 was $50,100. The average household income was $69,821. (The grossly inflated incomes of those at the very top pushes the average well above the median.)
The tax owed by those with the average household income is easily calculated using Form 1040 and the Tax Rate Schedules for 2011. For a single person taking one exemption of $3,700 and a standard deduction of $5,800, the adjusted gross income (AGI) becomes a taxable income of $60,321. That person is in the 25 percent tax bracket and would pay $11,205 in income tax, or 16.0 percent of AGI.
For married persons filing jointly, taking two exemptions and two standard deductions, the average household income of $69,821 becomes a taxable income of $50,821. That couple is in the 15 percent tax bracket, and their tax would be $6,773, or 9.7 percent of AGI.
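The single-filer arithmetic above can be checked with a short script; the bracket thresholds below are the 2011 IRS rate schedule for single filers.

```python
# Check the single-filer figure using the 2011 single rate schedule
# (10% to $8,500, 15% to $34,500, 25% to $83,600, 28% to $174,400).
def tax_2011_single(taxable):
    brackets = [(8_500, 0.10), (34_500, 0.15), (83_600, 0.25), (174_400, 0.28)]
    tax, lower = 0.0, 0
    for cap, rate in brackets:
        if taxable <= cap:
            return tax + (taxable - lower) * rate
        tax += (cap - lower) * rate
        lower = cap
    raise ValueError("income above the brackets modeled here")

agi = 69_821                   # 2011 average household income
taxable = agi - 3_700 - 5_800  # one exemption + standard deduction
print(round(tax_2011_single(taxable)))                 # 11205
print(round(100 * tax_2011_single(taxable) / agi, 1))  # 16.0
```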
For married persons filing jointly to fall into a tax bracket higher than 28 percent, their taxable income would have to be greater than $212,300. A single person's taxable income would have to be more than $174,400. Such incomes might not make them "rich" but neither do they make them "average."
Even though such incomes are far above the average, they still fall well below President Obama's original suggestion of $250,000 as the income level below which nobody's tax rate would be increased.
Donald G. Westlake
In This Issue...
Cellular Landscaping: Predicting How, and How Fast, Cells Will Change
A research team at the National Institute of Standards and Technology (NIST) has developed a model* for making quantifiable predictions of how a group of cells will react and change in response to a given environment or stimulus—and how quickly. The NIST model, in principle, makes it possible to assign reliable numbers to the complex evolution of a population of cells, a critical capability for efficient biomanufacturing as well as for the safety of stem cell-based therapies, among other applications.
The behavior and fate of cells are only partially determined by their DNA. A living cell reacts to both its internal and external environment—the concentration of a particular protein inside itself or the chemistry of its surroundings, for example—and those reactions are inherently probabilistic. You can't predict the future of any given cell with certainty.
This inherent uncertainty has consequences, according to NIST biochemist Anne Plant. "In the stem cell area in particular, there's a real safety and effectiveness issue because it's very hard to get 100 percent terminal differentiation of stem cells in a culture," she says. This could be problematic, because a therapist wishing to produce, say, heart muscle cells for a patient, might not want to introduce the wild card of undifferentiated stem cells. "Or effectiveness may be dependent on a mixture of cells at different stages of differentiation. One of the things that is impossible to predict at the moment is: if you waited longer, would the number of differentiated versus nondifferentiated cells change? Or if you were to just separate out the differentiated cells, does that really remove all the nondifferentiated cells? Or could some of them revert back?" says Plant.
The NIST experiments did not use stem cells, but rather fibroblasts, a common model cell for experiments. The team also used a standard tracking technique, modifying a gene of interest—in this case, one that codes for a protein involved in building the extracellular support matrix in tissues—by adding a snippet that codes for a small fluorescent molecule. The more a given cell activates or expresses the gene, the brighter it glows under appropriate light. The team then monitored the cell culture under a microscope, taking an image every 15 minutes for over 40 hours to record the fluctuations in cell behavior, the cells waxing and waning in the degree to which they express the fluorescent gene.
Custom software developed at NIST was used to analyze each image. Both time-lapse data from individual cells and time-independent data from the entire population of cells went into a statistical model. The resulting graph of peaks and valleys, called a landscape, says Plant, "mathematically describes the range of possible cell responses and how likely it is for cells to exhibit these responses." In addition, she says, the time analysis provides kinetic information: how much will a cell likely fluctuate between states, and how quickly?
The combination makes it possible to predict the time it will take for a given percentage of cells to change their characteristics. For biomanufacturing, it means a finer control over cell-based processes. If applied to stem cells, the technique could be useful in predicting how quickly the cells differentiate and the probability of having undifferentiated cells present at any point in time.
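As a loose illustration of the landscape idea (this is not NIST's actual model, just the standard trick of turning an observed distribution into a potential): if p(x) is the empirical distribution of a per-cell readout such as fluorescence intensity, then U(x) = -ln p(x) is a landscape whose valleys mark the likely cell states. The data below are synthetic, standing in for a population with two subpopulations.

```python
import numpy as np

# Hypothetical data: per-cell fluorescence intensities for two
# subpopulations (e.g., low- and high-expressing cells).
rng = np.random.default_rng(0)
low = rng.normal(50.0, 5.0, size=5_000)
high = rng.normal(120.0, 10.0, size=5_000)
intensities = np.concatenate([low, high])

# Empirical density -> landscape U(x) = -ln p(x).
density, edges = np.histogram(intensities, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
valid = density > 0
landscape = -np.log(density[valid])   # valleys = likely cell states

# The deepest valley sits near the most populated, tightest state.
deepest = centers[valid][np.argmin(landscape)]
print(f"deepest valley near intensity {deepest:.0f}")
```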
* D.R. Sisan, M. Halter, J.B. Hubbard and A.L. Plant. Predicting rates of cell state change caused by stochastic fluctuations using a data-driven landscape model. PNAS 2012 ; published ahead of print October 30, 2012, doi:10.1073/pnas.1207544109.
Media Contact: Michael Baum, firstname.lastname@example.org, 301-975-2763
NIST, UMD Celebrate 25 Years of Research Partnership at IBBR
Officials and researchers from the University of Maryland (UMD) and the National Institute of Standards and Technology (NIST) gathered on Oct. 25, 2012, to celebrate the 25th anniversary of the two institutions' ongoing collaboration to advance bioscience and biotechnology through their combined expertise in the biological and quantitative sciences, medicine and engineering.
The anniversary program at the Institute for Bioscience and Biotechnology Research (IBBR) near Gaithersburg, Md., included a scientific seminar and a ceremony recognizing two of the collaboration's guiding principals, Rita Colwell, Distinguished University Professor at the University of Maryland at College Park (UMCP), and Willie E. May, the NIST Associate Director for Laboratory Programs.
"IBBR is a model for government and public-private collaboration," said Under Secretary of Commerce for Standards and Technology and NIST Director Patrick Gallagher. "Just 25 years after its founding, actually a rather short time in research years, it is an internationally recognized organization."
The partnership between the two research institutions began in the late 1980s with the creation of the Center for Advanced Research in Biotechnology (CARB). CARB was a joint effort of UMD, NIST and the Montgomery County, Md., government, which leased land and financed the construction of the initial CARB facility. Since then, the partnership has grown to encompass additional UMD research centers, including the Center for Biosystems Research (CBR) in College Park, Md., and the University of Maryland-Baltimore's Center for Biomolecular Therapeutics.*
In 2010, CARB and CBR were formally merged into the IBBR, where recently, NIST and the University of Maryland have established a Partnership for the Advancement of Complex Therapeutics with the mission of accelerating the development of measurement science, technologies and standards in the area of complex therapeutics and the diagnostics that support their clinical utility. The initial focus will be on protein biologic drugs and vaccines.
IBBR researchers, drawn from UMD and NIST, work in the areas of structural biology, biophysics, genomics and proteomics, nanobiotechnology, pathobiology, and computational biology. The institution established an early reputation for determining the molecular structure of proteins, one of the core problems in biotechnology. IBBR research has helped to better understand basic protein interactions involved in autoimmune disorders and the mechanisms and possible counter actions for antibiotic resistance, and developed ways to improve the stability of proteins for biotechnology applications. One protein engineered by IBBR researchers has been licensed and applied to tasks as varied as improving stain-removal properties of laundry detergents and purifying other proteins for analysis.
For other examples of IBBR research, see "'Kissing' RNA and HIV-1: Unraveling the Details" (Jan. 30, 2004) at http://www.nist.gov/public_affairs/techbeat/tb2004_0130.htm#kissing, "Long-Sought Protein Structure May Help Reveal How 'Gene Switch' Works" (Feb. 6, 2009) at www.nist.gov/public_affairs/releases/tuberculosis.cfm, and "Fish Flu: Genetics Approach May Lead to Treatment" (Nov. 8, 2011) at www.nist.gov/public_affairs/tech-beat/tb20111108.cfm#fishflu.
More information on the IBBR is available at http://www.ibbr.umd.edu/.
* See the 2007 announcement,"NIST, UMBI to Expand Cooperation in Bioresearch" at www.nist.gov/public_affairs/tech-beat/tb20070816.cfm#umbi.
Media Contact: Michael Baum, email@example.com, 301-975-2763
Princeton/NIST Collaboration Puts Wheels on the Quantum Bus
In yet another step toward the realization of a practical quantum computer, scientists working at Princeton and the Joint Quantum Institute (JQI) have shown how a major hurdle in transferring information from one quantum bit, or qubit, to another might be overcome.* Their so-called "quantum bus" provides the link that would enable quantum processors to perform complex computations.
The JQI is a collaborative institute of the National Institute of Standards and Technology (NIST) and the University of Maryland College Park.
Qubits are unlike classical bits because they can be not only 1 or 0 but also both simultaneously. This property of qubits, called superposition, helps give quantum computers a tremendous advantage over conventional computers when doing certain types of calculations. But these quantum states are fragile and short-lived, which makes designing ways for them to perform basic functions, such as getting qubits to talk to one another—or "coupling"—difficult.
"In order to couple qubits, we need to be able to move information about one to the other," says NIST physicist Jacob Taylor. "There are a few ways that this can be done and they usually involve moving around the particles themselves, which is very difficult to do quickly without destabilizing their spins—which are carrying the information—or transferring information about the spins to light. While this is easier than moving the particles themselves, the interaction between light and matter is generally very weak."
Taylor says you can think of their solution sort of like playing doubles tennis.
"Whether or not a team will be able to return a serve depends entirely on how well they play together," says Taylor. "If they are complementing each other, with one playing the front half of the court and the other playing the back half, they will be able to return the serve to the other set of players. If they are both trying to play in the front court or the back court they won't be able to return the serve and the ball will go past them. Similarly, if the spins of the electrons are complementary, their field will affect the field of the photon as it goes past, and the photon will carry the information about the electrons' spin to the other qubit. When the spins are not coupled, they will not affect the photon and no information will go to the other qubit."
The Princeton/JQI team's quantum bus is a hybrid system that marries two known quantum technologies—spin-orbit qubits and circuit quantum electrodynamics—with some tweaks. The spin-orbit qubits are a pair of indium-arsenide quantum dots that have been engineered to enable strong coupling between the spins of the electrons trapped inside the dot and the electrons' positions within the dot. This in turn allows the magnetic field of the qubit, comprising spins, to couple with the field of microwave photons traveling through a connected superconducting cavity.
The structure makes it possible for information about the qubits' spin to be transferred to the microwave cavity, which, with some additional tweaks could be transferred to another qubit.
The experiment, which was the culmination of five years of effort, took place at Princeton University. NIST/JQI provided assistance with the quantum theory.
* K.D. Petersson, L.W. McFaul, M.D. Schroer, M. Jung, J.M. Taylor, A.A. Houck and J.R. Petta. Circuit quantum electrodynamics with a spin qubit. Nature 490, 380–383 (18 October 2012) doi:10.1038/nature11559
Media Contact: Mark Esser, firstname.lastname@example.org, 301-975-8735
NIST Provides Draft Guidelines to Secure Mobile Devices
The National Institute of Standards and Technology (NIST) has published draft guidelines that outline the baseline security technologies mobile devices should include to protect the information they handle. Smart phones, tablets and other mobile devices, whether personal or "organization-issued," are increasingly used in business and government. NIST's goal in issuing the new guidelines is to accelerate industry efforts to implement these technologies for more cyber-secure mobile devices.
Securing these tools, especially employee-owned products, is becoming increasingly important for companies and government agencies with the growing popularity—and capability—of the devices. Many organizations allow employees to use their own smart phones and tablets, even though their use increases cybersecurity risks to the organization's networks, data and resources.
Guidelines on Hardware-Rooted Security in Mobile Devices defines the fundamental security components and capabilities needed to enable more secure use of products.
"Many current mobile devices lack a firm foundation from which to build security and trust," explains NIST lead for Hardware-Rooted Security Andrew Regenscheid, one of the publication's authors. "These guidelines are intended to help designers of next-generation mobile phones and tablets improve security through the use of highly trustworthy components, called roots of trust, that perform vital security functions." On laptop and desktop systems, these roots of trust are often implemented in a separate security computer chip that cannot be tampered with, but the power and space constraints in mobile devices could lead manufacturers to pursue other approaches such as leveraging security features built into the processors these products use, he says.
The NIST guidelines are centered on three security capabilities to address known mobile device security challenges. They are device integrity, isolation and protected storage. A tablet or phone supporting device integrity can provide information about its configuration, health and operating status that can be verified by the organization whose information is being accessed. Isolation capabilities are intended to keep personal and organization data components and processes separate. That way, personal applications should not be able to interfere with the organization's secure operations on the device. Protected storage keeps data safe using cryptography and restricting access to information.
To attain the security capabilities, the guidelines recommend that every mobile device implement three security components. These are foundational security elements that can be used by the device's operating system and its applications. They are:
The authors of Guidelines on Hardware-Rooted Security in Mobile Devices, Special Publication 800-164 (Draft) request comments to improve the draft. The publication may be downloaded from http://csrc.nist.gov/publications/PubsDrafts.html#SP-800-164. Please submit comments by December 14, 2012, to email@example.com.
Media Contact: Evelyn Brown, firstname.lastname@example.org, 301-975-5661
New NIST Web Resource Hosts Federal Research Technology Transfer Plans
A new website has been launched by the National Institute of Standards and Technology (NIST) to serve as a central resource for technology transfer plans developed by agencies with federal research laboratories.
The plans were developed in response to an Oct. 28, 2011, Presidential Memorandum that directed agencies doing research and development to foster innovation by increasing the rate of technology transfer to private-sector organizations so that research results could be adapted for use in the marketplace.
The plans from 13 federal agencies include agency-defined goals and metrics to measure progress and evaluate the success of new efforts that encourage technology transfer activities. This effort supports the policy of using innovation as a tool to increase economic growth, create jobs, and enhance global competitiveness of U.S. industries.
As part of its own effort to accelerate technology transfer, NIST plans to revise its definition of technology transfer to more accurately report and evaluate a broad range of technical activities. This will lead to expanded metrics tracking the use of Standard Reference Materials and Data, patents and licenses, and collaborations. New metrics will cover software downloads, postdoctoral and guest researchers, and start-up companies, among others.
NIST develops basic science foundations for many technologies with long horizons for eventual commercialization, while other NIST activities benefit the economy through facilitation of consensus standards for trade. As one mechanism for tracking technology transfer activities, the agency plans to expand a database on staff participation in private-sector consensus standards committees. The expanded Standards Committee Participation Database will go beyond statutory requirements for data collection regarding staff participation in committees and on standards developed through their efforts.
In addition to NIST, the DOC technology transfer report includes plans from the:
The agency reports for “Accelerating Technology Transfer and Commercialization of Federal Research in Support of High-Growth Businesses” are available at www.nist.gov/tpo/publications/agency-responses-presidential-memo.cfm.
Media Contact: Jennifer Huergo, email@example.com, 301-975-6343
Deborah Jin of JILA Selected for 2013 Women in Science Award
Deborah Jin, a physicist at the National Institute of Standards and Technology (NIST) who works at JILA, has been selected as the North American recipient for the 2013 For Women in Science Awards.
JILA is a joint institute of NIST and the University of Colorado Boulder.
The award is given annually by the L’Oréal Foundation and UNESCO as part of an international program recognizing women in science and supporting scientific vocations. Five women scientists are recognized each year, one for each of five regions of the world. Since the program was created in 1998, it has honored 77 outstanding women scientists from around the world.
“These five outstanding women scientists have given the world a better understanding of how nature works,” UNESCO Director-General Irina Bokova said in a news release. “Their pioneering research and discoveries have changed the way we think in various areas of the physical sciences and opened new frontiers in science and technology. Such key developments have the potential to transform our society. Their work, their dedication, serves as an inspiration to us all.”
Jin was cited “for having been the first to cool down molecules so much that she can observe chemical reactions in slow motion, which may help further understanding of molecular processes which are important for medicine or new energy sources.”
“The award is definitely an honor,” Jin says. “Part of that comes from the fact it’s just not a local thing, it’s a worldwide program. It will be fun to meet the award winners from other areas, people that I otherwise might not meet, and hear their perspectives.”
The awards will be officially presented in Paris on March 28th, 2013. Each For Women in Science winner receives $100,000.
Jin is a NIST/JILA Fellow and is a world leader in advancing understanding of quantum mechanics, the seemingly curious rules that govern the behavior of atoms and smaller particles. Jin was cited for her work chilling ultracold molecules enough to observe chemical reactions, which may help create practical tools for “designer chemistry” and other applications such as precision measurement.*
Jin is a member of the National Academy of Sciences and winner of numerous previous awards, including the 2008 Benjamin Franklin Medal in Physics and a 2003 John D. and Catherine T. MacArthur Fellowship, commonly called a “genius grant.”
Information about the awards program can be found at www.forwomeninscience.com. This is the second time a NIST-affiliated scientist has won a Women in Science award.**
* See NIST’s 2010 news story, “Seeing the Quantum in Chemistry: JILA Scientists Control Chemical Reactions of Ultracold Molecules,” at www.nist.gov/pml/div689/ultracold_021110.cfm.
** Johanna Levelt Sengers, a scientist emeritus at NIST, was selected as the North American recipient for the 2003 Women in Science Awards. In her 40 years at NIST Levelt Sengers made internationally recognized contributions, both theoretical and experimental, to the fields of thermodynamics and critical phenomena of fluids.
Media Contact: Laura Ost, firstname.lastname@example.org, 303-497-4880
English Language Arts
The Tennessee state standards in English language arts (ELA) outline the reading, writing, language, speaking, listening, and research skills students will need to succeed in college and the workforce. With a renewed emphasis on the close reading of complex texts, especially literary nonfiction, our ELA standards require all students to graduate ready to read and respond to the academic and technical texts they will encounter throughout their adult lives.
Standards and Shifts
Standards define what students should understand and be able to do in their study of English language arts and reading.
Resources for teachers to engage students in higher levels of thinking and reasoning called for by the Tennessee state standards for English language arts.
- Model units and tasks
- Resources to support teachers in:
  - choosing complex texts,
  - writing and implementing unit plans aligned with the new standards and close reading tasks, and
  - writing text-dependent questions.
- Assessment tasks that can be used for formal testing, incorporated into instruction, and/or to help students prepare for assessments
- Scoring resources for the assessment tasks
- Additional assessment resources
- Information about testing
Response to Instruction and Intervention Framework
This three-tiered framework helps educators differentiate instruction as students need extra help. Tennessee schools are moving to this framework over the next several years.
Educational standards describe what students should know and be able to do in each subject and in each grade. The Tennessee state standards define what students need to learn at each grade level. They provide a chance to improve access to quality content standards for students with disabilities and English Learners.
Links to websites, videos, and blogs for more support in understanding Tennessee ELA standards.
The corporate responsibility to respect includes:
a. Respecting children’s rights in relationship to the environment
i. When planning and implementing environmental and resource-use strategies, ensure that business operations do not adversely affect children’s rights, including through damage to the environment or reducing access to natural resources.
ii. Ensure the rights of children, their families and communities are addressed in contingency plans and remediation for environmental and health damage from business operations, including accidents.
b. Respecting children’s rights as an integral part of human rights considerations when acquiring or using land for business operations
i. Where possible, avoid or minimize displacement of communities affected by land acquisition or land use for business purposes. Engage in meaningful, informed consultation with potentially affected communities to ensure that any adverse impact on children's rights is identified and addressed and that communities participate actively in and contribute to decision-making on matters that affect them directly. Seeking the free, prior and informed consent of indigenous peoples is specifically required for any project that affects their communities, and it is a desirable goal for any community impacted by a company's use or acquisition of land.
ii. Respect children’s rights – especially their right to education, protection, health, adequate food and adequate standard of living and participation – when planning and carrying out resettlement and providing for compensation.
The corporate commitment to support includes:
c. Supporting children’s rights in relationship to the environment where future generations will live and grow
Take measures to progressively reduce the emission of greenhouse gases from company operations and promote resource use that is sustainable. Recognize that these actions and other initiatives to better the environment will impact future generations. Identify opportunities to prevent and mitigate disaster risk and support communities in finding ways to adapt to the consequences of climate change.
|
<urn:uuid:a947d6be-4982-46fe-8b4b-67fffeb3f33a>
|
CC-MAIN-2016-26
|
http://www.unicef.org/csr/199.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00082-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.933068
| 372
| 3.1875
| 3
|
Overloading Electrical Circuits
Electricity has enriched our lives. Despite the many benefits,
electricity can also bring danger -- the most common being house
fires. It is estimated that over 40,000 residential fires are caused
by electrical systems every year in the United States. Causes
include arc faults, short circuits, or overloading of electrical
circuits. This article discusses overloading electrical circuits.
First, we must understand some
basics about typical home electrical systems. The electrical service
enters the house and connects to a main electrical panel. From the
main electrical panel, wires run in different directions throughout
the house to power lights, outlets, ceiling fans, air conditioners,
and various other direct-wired electrical appliances. These
wire-runs are called branch circuits.
In home construction today,
the typical branch circuit consists of three wires -- the hot,
neutral and ground wires. When a light or electrical appliance is
turned on, electricity begins to flow in the hot and neutral wires
of the branch circuit to which that light or electrical appliance is connected.
When electricity flows through
a wire, the wire heats up because of its resistance to the flow of
electrical current. Both the size of the wire (resistance increases
as the wire diameter gets smaller) and how many electrical devices
on the circuit are drawing electricity (more devices increase the
electrical current) affect the amount of heat generated in the wire.
To keep the wire from getting too hot and starting a fire, the
designer of the branch circuit wiring does two things:
- Attempts to size the wire large enough to handle the estimated electrical load on that circuit.
- Attempts to contain the amount of electrical load on the branch circuit by limiting the number of potential electrical appliances that can be running at the same time on that circuit (i.e., places only so many outlets on one branch circuit, or puts larger pieces of electrical equipment on circuits dedicated to that equipment).

While electrical codes help with the design assumptions, how the homeowner
will use the outlets in the house is just a guess. The homeowner can
plug in and run too many appliances on the same circuit at one time
and overload the circuit.
This is why electrical fuses and circuit breakers are used in the
main electrical panel. Their function is to sense the overloading of
circuits (and short circuits) and shut off power to that branch
circuit before the wires get too hot and start a fire.
However, circuit breakers can malfunction and fail to trip.
Homeowners can try to "fix" a fuse that blows repeatedly by placing a larger fuse
in the electrical panel that allows more electrical current to flow
in the branch circuit than what it was designed for. Homeowners can
also use plug adaptors and extension cords to plug in too many
electrical appliances into one electrical outlet.
What Can the Homeowner Do?
Most home circuits are
designed as 15-amp branch circuits, which at 120 volts can supply about 1,800 watts. A hair dryer can draw 1400 watts, an iron 1000 watts, a portable heater 1200 watts, a vacuum cleaner 600 watts, a deep fat fryer 1300 watts, and a portable fan considerably less. Running just two of the larger appliances at once on one circuit can exceed its capacity.
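To make the risk concrete, here is a hypothetical check (not from the article) that sums the wattages quoted above against a 15-amp, 120-volt circuit, which can supply 15 × 120 = 1,800 watts:

```python
BREAKER_AMPS = 15
SUPPLY_VOLTS = 120
CAPACITY_WATTS = BREAKER_AMPS * SUPPLY_VOLTS  # 1,800 W

# Wattages quoted in the article.
APPLIANCES = {
    "hair dryer": 1400,
    "iron": 1000,
    "portable heater": 1200,
    "vacuum cleaner": 600,
    "deep fat fryer": 1300,
}

def overloaded(names):
    """True if running these appliances together exceeds the circuit capacity."""
    return sum(APPLIANCES[n] for n in names) > CAPACITY_WATTS

# A hair dryer alone fits; a hair dryer plus a portable heater
# draws 2,600 W and overloads the 15 A circuit.
assert not overloaded(["hair dryer"])
assert overloaded(["hair dryer", "portable heater"])
```

This is exactly the arithmetic the circuit breaker enforces for you: when the combined draw pushes the current past 15 amps, the breaker trips before the wiring overheats.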
There are no hard-and-fast
rules as to how often a home electrical system should be inspected.
Here are the recommendations from the NESF:
If your last inspection was:
- 40 or more years ago,
inspection is overdue.
- 10-40 years ago,
inspection is advisable, especially if substantial electrical
loads (high-wattage appliances, lights, and wall outlets or
extension cords) have been added.
- Less than 10 years ago,
inspection may not be needed, unless problems are noticed.
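The NESF guidance above reduces to a simple decision rule; a sketch (the thresholds are the ones listed, the function name is ours):

```python
def inspection_advice(years_since_last):
    """Map years since the last electrical inspection to the NESF guidance."""
    if years_since_last >= 40:
        return "inspection is overdue"
    if years_since_last >= 10:
        return "inspection is advisable, especially if loads have been added"
    return "inspection may not be needed, unless problems are noticed"

assert inspection_advice(50) == "inspection is overdue"
assert inspection_advice(20).startswith("inspection is advisable")
assert inspection_advice(5).startswith("inspection may not be needed")
```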
It may be difficult to
determine when the last electrical inspection was made. Look on the
inside of the door to the electrical panel. The electrician
performing the last inspection may have written the date there.
As a homeowner, be aware of your electrical system. Look and listen
for problems. If you hear buzzing or crackling coming from outlets
or light switches, don't ignore it. If appliance or extension cords
are hot to the touch, you have potential problems. Contact a
qualified electrical professional to assess the problem and make the necessary repairs.
Source: http://2020inspectiongroup.com/overloading_electrical_circuits.htm
Certainly there is a correlation between muscle mass and strength, but there is more to the story. Two reasons why muscle mass and strength may not be perfectly correlated are:
- Muscle fiber density
- Muscle utilization
Your muscles are composed of four different types of fibers (slow-twitch, and three forms of fast-twitch). These fibers have different profiles in terms of force applied and recovery time. Slow-twitch fibers, for example, recover quickly but apply less force, as there are fewer muscle fibers per bundle compared with the fast-twitch fibers.
Extra water in the form of glycogen can also cause muscles to take up more volume with the same amount of actual muscle. This glycogen can be a ready source of energy for the muscles, but isn't going to increase their maximum theoretical force for a single heavy lift (per Olympic competition) where endurance through a long set isn't at issue.
The average person is able to utilize 20-30% of their total theoretical muscle strength when trying their hardest. (Ref. Tsatsouline, Power To The People) Top lifters use perhaps 50% of their theoretical strength. Olympic and powerlifting-style training focuses on training the neural pathways to utilize a greater percentage of the available muscle mass. Since each muscle fiber contracts either fully or not at all (the all-or-nothing principle), this training is focused on convincing a greater proportion of fiber bundles to contract during a lift.
Can a buff guy be weak?
Well, it depends on your definition of buff. A cut guy can be weak (compared to a strength athlete), because muscle definition is more about having low body fat covering the muscle than it is about having large muscles.
A bodybuilder with decent volume won't be able to lift as much as a comparable powerlifter because he/she doesn't train for strength per se. It seems worth noting that Olympic/power lifters also want to minimize their size (except for the heavyweights) because it affects their weight class in competition, so there is an added incentive to train for neural utilization over additional muscle mass.
Source: http://fitness.stackexchange.com/questions/2017/why-is-muscle-size-not-proportional-to-strength/2018
The System Management Controller is an internal subsystem in Intel Macs that is responsible for power management of the computer. It controls backlighting, hard disk spin down, sleep and wake, some charging aspects, trackpad control, and some input/output as it relates to the computer sleeping. If your computer is acting up, you may want to reset the SMC. But you may also want to modify the behavior of your Mac when it comes to fan speed. Enter smcFanControl.
When you run smcFanControl, it will place a small indicator in your menu bar showing you the current processor temperature and fan speed. This is a great way to understand the correlation between temperature and fan speed. Each fan has a minimum RPM below which it cannot be set, but you can use smcFanControl to raise that minimum. You may want to do this if you'd like to keep your Mac's CPU cooler, or if you get into situations where the fans are on full blast. By running the fans at a higher RPM, you can help prevent this situation.
You can set individual minimum RPM values for all of the fans in your system. On a MacBook Pro, you should see two fans, with their minimum RPM set at 2000 RPM. You can change the menubar display so it displays both temperature and fan speed, just temperature, just fan speed, or a simple icon. If you display the temperature, you can choose to display in degrees C or F. You can also choose to check for updates on startup, and autostart smcFanControl on login. Finally, you can load different profiles depending on if your power source is battery, AC power, or if you're charging.
So, make sure your Mac stays cool, and check out smcFanControl today! Have any other gadgets that take the heat off your Mac? Send an email to John and he'll give it a try.
Source: http://www.macobserver.com/tmo/article/want_to_keep_your_mac_cool_check_out_smcfancontrol
The Holocaust was the systematic destruction by the Nazis of Jewish culture and society and, in die Endlösung (the Final Solution), of Jewish lives, 1933-1945. Before Hitler and the Nazis came to power in 1933, they demonized the Jews as subhuman and the cause of all Germany's troubles. Once in power, the Nazis first removed the Jews from power and prestige; from 1938-41 they imposed severe restrictions on Jews in Germany (many of whom fled). The killing started in 1941, as the SS under Heinrich Himmler systematically rounded up all the Jews the Nazis could find, and killed about 6 million in extermination camps. Millions of non-Jews were killed in separate Nazi operations.
The Holocaust was known but downplayed between 1945 and 1960; since 1960 it has become one of the central memories and horrors of World War II, shaping policies by and towards Israel and profoundly shaping modern conceptions of guilt and evil.
- 1 Origins
- 2 The people involved
- 3 Deaths
- 4 Survivors
- 5 Punishment
- 6 Etymology
- 7 Books and film
- 8 See also
- 9 External links
- 10 Bibliography
- 11 International Justice
Hatred of the Jews has a long history all over Europe, but the modern forms of antisemitism emerged in the 19th century as a new spirit of nationalism allowed some Germans to sharply differentiate themselves from the Jews, a cultural subgroup that was well integrated into German society. By the end of the century, antisemitic politicians and organized movements had emerged. Antisemitism was muted during World War I, but exploded in 1919 as the defeated war veterans looked about for someone to blame for the national disgrace.
Stage 1: to 1933
The Nazi party emerged among veterans in the early 1920s, with a violent attack on Jews as Adolf Hitler's central theme. After years of struggling to push their ideology into the masses through propaganda and violence, the Nazi Party in Germany came to power in January 1933, with Hitler as Chancellor, with his ideology firmly entrenched within the party. In his book Mein Kampf, Hitler expounded on the idea that the Aryan race, of which he was a part, was the so-called “master race”, and had a moral right and duty to subjugate the world; in the way stood the untermenschen (“sub-humans”), which, according to the Nazis, were meant to serve the master race as slaves. At the very bottom were the Jews, who were depicted as an evil race bent on world domination.
The Nazis violently hated all the Jews and everything they stood for. They worked relentlessly toward the goal of removing all possible Jewish influences. Starting in the 1920s (when they were a small party) with a violently anti-Semitic rhetoric that blamed Jews for all the problems of Germany and the modern world, the Nazis defined Jews as a permanent “race” that would never change and could never be improved. The Nazis also strongly disliked Christianity as being too Jewish. Their goal was to return to a pre-Christian, all-Aryan (imaginary) world.
The Nazi goal was first to remove all Jewish influence, then deport all Jews from Europe.
Stage 2: 1933-38
The second stage of Nazi policy concerning Jews, from 1933 to 1938, when Hitler was dictator in a peacetime Germany, involved the removal of Jews from all public office. The Nazis encouraged the Jews to leave and half of the Jewish population in Germany did so (including famous scientist Albert Einstein and the teenaged Henry Kissinger). The Nazis opened Dachau and other “concentration camps” to punish thousands of their political enemies—including many Jews. About 1,000 Jews were murdered in concentration camps inside Germany before 1939; these were distinct from the killing camps that were opened in 1942 in Poland.
Jewish businesses began feeling the effects of a boycott that began on April 1, 1933, followed by the dismissal of Jewish civil service workers, judges, and university professors a week later. On May 10, some ten days after laws were enacted which prevented Jewish children from attending public schools except by quota, thousands of university students and professors stormed bookstores and libraries to remove books they deemed "un-Germanic" and opposed to Nazi teachings, throwing them into public bonfires. The Nuremberg Laws were enacted in 1935, which revoked German citizenship from Jews as well as declaring marriage between a Jew and a German illegal. By 1938, the political and economic foundations of German Jewry had been destroyed.
Stage 3: 1938-41
Stage three, from 1938 to 1941, involved increasingly severe and humiliating restrictions for Jews. "Kristallnacht" in November 1938 was a systematic violent attack on all synagogues. World public opinion grew hostile to the Nazis, who responded by supporting pro-Nazi, anti-Semitic political movements in France and other countries, including the "German-American Bund." After invading Poland in 1939, the Nazis forced two million Jews into a few ghettos with below-starvation food allotments. Before the war, plans were made to start deporting Jews from Germany. The war brought millions of Jews in the east and the occupied west under Nazi control, and deportation became impossible. That left extermination as the Nazi plan.
On November 7, 1938, Herschel Grynszpan, a German Jew living in Paris and upset over his family's forced deportation to Poland, shot Ernst vom Rath, the third secretary to the German ambassador in France, who died two days later. His assassination touched off a wave of riots on November 9, seemingly at the behest of the Nazi minister of propaganda, Joseph Goebbels, but this was expanded and organized better with the issuing of orders by the head of the S.S., Reinhard Heydrich, later that evening, who specified that S.S. and S.A. units in various cities would march out with sledgehammers against Jewish homes, businesses, and synagogues - but in civilian clothes only (to symbolize the "righteous" anger of the German people). Businesses could not be looted, as the property inside was deemed property of the state; Jewish property near German shops and homes could not be burned, only smashed; and many Jewish males, particularly the wealthy, were subject to arrest. Over 35,000 men were arrested that night, and according to figures released by Heydrich the total number of arrests exceeded 100,000; 815 Jewish businesses were destroyed, 191 synagogues were destroyed or demolished, and over 2,000 people were dead. By the end of the week local jails as well as the new Buchenwald and Dachau concentration camps were quickly filled. The Jews were declared responsible for the damages done to their property and ordered to pay the staggering sum of one billion Reichsmarks.
The sidewalks were littered with shards of the expensive storefront glass that was preferred for shops in Germany and neighboring Austria, which was called Kristallglas for its high quality. The amount of glass left behind gave the incident its name: Kristallnacht ("Crystal Night"), or the Night of Broken Glass.
World War II: 1939-41
When World War II began in the fall of 1939, Jews in Germany were completely marginalized. They could not own property, use parks, associate with Germans, enter a library or museum, work in any professional field or engage in business, nor could their children attend public schools. Public transportation was forbidden to them in 1941, and the wearing of the yellow Star of David badge on their clothing became mandatory. They were also, prior to September 1, 1939, forced to migrate from countries and territories which had come under Hitler's wing (the Rhineland, Austria, the Sudetenland), with many being deported to Poland.
By September 21, 1939, Poland was now the "General Government" protectorate under former lawyer Hans Frank, and on that day Heydrich ordered the establishment of Judenrates (Jewish councils), which comprised 24 men (political leaders and rabbis) whose personal responsibility was to carry out, to the letter, all German orders. This would include supplying people for work details, usually mundane tasks like digging ditches to amuse their Nazi overlords. Later, they were required to supply thousands of people a day for the "work" camps of Treblinka, Sobibor, Belzec, and a new one under construction near the town of Oswiecim, which the Germans called Auschwitz.
Nearly 30% of the total population of Warsaw was crammed into just over 2% of the city's total land, a density of 200,000 people per square mile. Disease and malnutrition would take their toll, but for the German overlords this was a minor inconvenience. The ghetto was a temporary place to hold all of Europe's Jews until a final solution was determined, and when the Nazis attacked their ally, the Soviet Union, in June 1941, the killing began in earnest.
The Germans did not generally commit atrocities against Allied soldiers in the West in 1940, with one major exception. About 40,000 black African combat troops in the French army became targets of Nazi wrath. Elite German units, acting on their own in accordance with longstanding racial hatred of Africans, shot about 1500 to 3000 black soldiers in French uniforms after they had surrendered.
Stage 4: 1941-42
Stage 4 began when the Germans invaded the Soviet Union in June 1941. All Russian Jews were assumed to be Communist agents, and large-scale killing of political enemies began in Poland. Special units of the SS, the Security Police, and the Security Service (Einsatzgruppen der Sicherheitspolizei und des SD, Einsatz- and Sonderkommandos) not only massacred large numbers of Jews, but routinely included handicapped persons in open-air mass shootings. Seven of the "Einsatzgruppen" rounded up and shot many Polish Catholic priests, intellectuals, and political leaders. Another five units (with 3,000 men) followed the Red Army and executed Communist commissars and partisans, and about 600,000 Russian Jews.
Alongside the German Army were special mobile units whose job it was to locate and kill Jews, Gypsies, Soviet commissars, and others deemed unfit in the areas controlled by the army. These Einsatzgruppen ("special units") were also aided by local populations who felt the Germans had relieved them of Soviet occupation as well as sharing a hatred for Jews and other minorities. Making no difference between young or old, male or female, the Einsatzgruppen killed 70,000 Jews at Ponary, near Vilnius, Lithuania; 33,771 Jews were machine-gunned in a ravine known as Babi Yar near Kiev, Ukraine, on September 28-29, 1941; 9,000 Jews were killed at the Ninth Fort at Kaunas, Lithuania, on October 28, of which half of the dead were children. On November 30 in the Rumbula Forest outside of Riga, Latvia, between 25,000-28,000 were killed.
By mid-1941, the Ukrainian SSR had the largest population of Jews in Europe. The addition of the eastern provinces of Poland in late 1939 as well as the seizure of sections of Romanian territory in June 1940 led to some 2.7 million Jews living within the borders of the newly enlarged republic. About 85% lived in cities. By 1944, 1.6 million of these Jews had died at the hands of the Germans and their allies and auxiliaries. Unlike the majority of the Holocaust's later victims who died in the industrialized mass murder of the death camps, the overwhelming bulk of Ukraine's Jews died in mass shootings during the initial stages of the war.
The killings were done in first and second waves, with the bodies buried in mass graves. When the Soviets threatened and carried out counter-offensives to reclaim lost territory, special units made up of concentration camp inmates (Sonderkommandos) would return to the sites, dig up the bodies, and burn them in mass pyres, destroying the evidence of their crimes. The number of individual persons killed by the Einsatzgruppen has been estimated at a bare minimum of one million.
Stage 5: 1942-1944
Stage 5 began with the Wannsee Conference in January 1942, when top Nazis decided on a "Final Solution": to round up and secretly execute all the Jews of Europe. Killing centers were opened in Poland, and thousands of trainloads of Jews were transported there. Jews were gassed immediately upon arrival. Over three million Jews (and numbers of gypsies and other hated groups) were murdered, mostly in 1942–1943.
On January 20, 1942, at a villa in the Berlin suburb of Wannsee, a conference was convened by Heydrich to implement methods and ideas for a "final solution to the Jewish question" (die Endlösung der Judenfrage). At the conference were fifteen men, among them Heydrich's head of Jewish affairs, Adolf Eichmann, who would be instrumental in providing the logistical plans for removing the Jews to the camps. The men represented government agencies such as the Gestapo, the Race and Resettlement Office, and the S.S., as well as a representative from the General Government in Poland. As Heydrich himself explained near the beginning of the conference, ideas were in play on relocating Jews:
- “Another possible solution of the [Jewish] problem has now taken the place of emigration—i.e., evacuation of the Jews to the east…Such activities are, however, to be considered as provisional actions, but practical experience is already being collected which is of greatest importance in relation to the future final solution of the Jewish problem.”
The minutes of the meeting were kept, but had been edited by Heydrich; the language contains euphemisms in place of what was really said. "Evacuation of Jews to the east" and "resettlement" meant relocation to the concentration and extermination camps in Poland; "special handling" meant the killing of Jews, either through slave labor in which the Jew was worked to death, or immediately on arrival. The final solution was put into practice within a few months of the conference, as the bullets of the machine guns and the exhaust of carbon monoxide were replaced by the more efficient killing methods installed in the first gas chambers.
Stage 6: 1944-45
Stage 6 arrived when the Soviet armies overran the Polish camps in 1944–45 and liberated the survivors.
But the killing continued unabated, even to the last week of the war. As territory was regained by Soviet forces, the death camps were evacuated of survivors and destroyed as much as possible in a futile attempt to hide the evidence. The survivors were moved west into Germany, usually in hellish death marches, and interned in concentration camps where death still awaited them; such killing by the S.S. took priority over military matters at times.
In all, six million Jews were murdered; most of the 300,000 survivors emigrated to the United States or Israel.
The people involved
Analytically, the people involved in the Holocaust can be divided into the following groups:
Millions were victimized by the Nazi regime during the Holocaust. The Jews were always the principal targets; Anne Frank in the Netherlands was the most famous victim. However, the Nazis also systematically hunted down and murdered the Roma people (“Gypsies”). They also targeted special enemies, including Communist activists, Jehovah’s Witnesses, homosexuals, and people with disabilities. The last group was the target of euthanasia programs carried out in German hospitals in 1939–1941. Some of these programs were stopped when German Christian leaders mobilized public opinion against them.
Under the guidance of an all-powerful führer (Hitler), the Nazis believed fervently in force, violence, and terror as their best weapons. The most fanatical Nazis joined the SS, which carried out most of the executions. The Final Solution was directed by Heinrich Himmler, commander of the SS and Minister of the Interior. His top aide was Reinhard Heydrich, head of the Gestapo and, after 1939, of all the secret police agencies grouped into the RSHA; he was assassinated by Czech commandos with British help in 1942. Adolf Eichmann was the senior SS bureaucrat in charge of handling deportation and transportation. However, regular German army and police units also systematically killed large numbers of civilians and POWs on the Eastern Front.
- To Gruppenführer Heydrich:
- Supplementing the task assigned to you by the decree of January 24, 1939, to solve the Jewish problem by means of emigration and evacuation in the best possible way according to present conditions, I hereby charge you to carry out preparations as regards organizational, financial, and material matters for a total solution (Gesamtlösung) of the Jewish question in all the territories of Europe under German occupation.
- Where the competency of other central organizations touches on this matter, these organizations are to collaborate.
- I charge you further to submit to me as soon as possible a general plan of the administrative material and financial measures necessary for carrying out the desired final solution (Endlösung) of the Jewish question. (Order from Hermann Göring to Reinhard Heydrich, July 31, 1941)
The death camps
In the early years of Nazi Germany concentration camps were built with the expressed purpose of housing political prisoners; this was quickly expanded to Jews and other people the Nazis considered undesirable. But by 1942 new camps were built in eastern Poland as death camps; where the Einsatzgruppen had once traveled to their victims, the victims were now rounded up by units of the Army and Waffen S.S. and forced to travel to their own destruction. The victims were packed tightly into cattle cars - so tight, in fact, that many would die standing up - and transported by rail to the new extermination camps of Chelmno, Treblinka, Sobibor, Majdanek, and Belzec. The camps were essentially factories which specialized in death, making the process from arrival to counting to shower to disposal coldly efficient.
As they arrived, the victims were divided in two: those fit for work (usually young to middle-aged men, or those who possessed a special skill needed in the camp), and the remainder, who were sent for "delousing" in the showers. Deceived to the end, the victims entered "showers" that were actually sealed rooms into which a chemical known as "Zyklon B" was dropped through a hole in the ceiling. The cyanide-based vapors would kill everyone in the room within minutes; within thirty, the room was emptied by the Sonderkommandos, cleaned, and ready for another group of victims.
The "Operation Reinhardt" camps (Chelmno, Sobibor, Majdanek, and Belzec) used a different execution method; the gas chambers were pumped full of carbon monoxide generated by gasoline engines from captured Soviet tanks. At these camps all arrivals were gassed, as the camps were pure extermination facilities with no attached work camps.
Of the death camps, the one at Oswiecim, Poland - Auschwitz - was perhaps the most notorious. Auschwitz was three camps: a prisoner-of-war camp (Auschwitz I), a slave-labour camp (Auschwitz III–Buna-Monowitz) and the extermination camp (Auschwitz II–Birkenau). The arrivals would disembark the trains at Auschwitz II, where the old, handicapped, infirm, sick, and pregnant women would face a German doctor (among them the notorious Joseph Mengele) in the selektion, where a flick of a thumb could mean the difference between slave labor in the nearby factory run by I.G. Farben (which took advantage of the forced labor by investing some 700 million Reichsmarks in the project) and immediate death. Those selected for labor would be worked to death by a combination of hard labor and inadequate food and medical care; a second selektion of their numbers, if they had survived, would mean a trip to the gas chambers.
In recent years much controversy has arisen over when President Franklin D. Roosevelt learned what about the Nazis, and what he did or did not do. Switzerland was neutral and accepted some refugees, but it also made large profits by trading and banking with Germany. The Swiss were forced in the 1990s to make reparation payments.
In rounding up Jews the Nazis sometimes had the enthusiastic cooperation of pro-Nazi governments (as in France and Slovakia). A few countries, including Italy and Hungary, tried to stall the Nazis, but the Germans took power directly and seized the Jews. Only Bulgaria and Denmark were largely successful in protecting their Jews.
Resistance took many forms, from individual acts to hundreds of examples of organized, armed resistance. The most famous episode was the month-long uprising of 60,000 remaining Jews in the Warsaw Ghetto in April 1943. At the Sobibor death camp, an uprising in October 1943 allowed 600 prisoners to escape.
Jews stood virtually alone against the Nazi war machine and those who collaborated with them, receiving no aid or assistance from outside, as well as having no access to arms with which to defend themselves. Further, the Nazis took great care to prevent their victims from knowing their true plans right up to the moment of their deaths; at Babi Yar many had believed they were being transported to a “family work camp” right up to the point of standing before their own mass grave. There was also the fear of reprisals against large numbers of Jews within the ghettos, which also prevented resistance. But word of the unbelievable atrocities of the death camps filtered into places like Warsaw, and as the trains were leaving packed with Jews many saw that resistance was preferable to the death that awaited them.
Nine months after the Warsaw deportations had commenced, and after confirmation that their destination was the Treblinka extermination camp, 24-year-old Mordecai Anielewicz and his Jewish Resistance began the Warsaw Ghetto Uprising on April 19, 1943, which lasted just over a month.
Dietrich Bonhoeffer was a Christian pastor and theologian who was opposed to the goings on in Germany. He was involved in the German Resistance and took part in a plan to assassinate Hitler. This led to his capture and eventual execution, under Hitler's order, at Flossenbürg concentration camp on April 9, 1945.
Jews fought alongside partisans elsewhere in France, the Balkans, and Soviet Russia during the last three years of the war. Uprisings also occurred in two of the death camps, Treblinka and Sobibor; the latter was closed as a result and the site razed to hide the evidence.
In a unique case of resistance, the Jews of Denmark were almost entirely saved by the good will of their neighbors. The Danish government had arranged a system by which they maintained control of the government except for foreign affairs. This allowed the Jews of Denmark to live unmolested for several years. When the Nazis did move to deport the Jews in 1943 the Danish Government resigned in protest. The Danish people began a process of evacuating all of the Jews, en masse, to Sweden. The universities closed so students could assist the evacuation, congregations were urged to help, and the fishing fleets helped to evacuate the Jews by sea. In the end only 500 Danish Jews were captured and placed in Theresienstadt where they remained until the end of the war thanks in part to the continued attentions of the Danish people.
Rescuers hid potential victims as best they could; the tragic story of Anne Frank is the most famous. The pope helped protect some Italian Jews; it is still debated whether he could have done much more. The most famous rescuer was Oskar Schindler; the movie "Schindler's List" tells the true story of how he saved 1,100 Jews from the Nazis by setting up factories that produced defective munitions.
In territory occupied by the Germans the situation was bleak for Jews. Their allies were few and resources were meager. Despite this, many put their lives on the line to provide aid and comfort, hiding Jews or moving them through a network of underground units to safety. In Poland it was punishable by death to aid Jews, yet a "council for the aid of Jews" known as the Zegota rescued about 5,000 men, women, and children, providing hiding places and forged identity papers. A similar number was hidden by French Huguenots in the little town of Le Chambon-sur-Lignon.
Although criticized by many for his silence about the Nazi persecution of the Jews, Pope Pius XII hid several hundred Jews inside the Vatican, away from Mussolini and the German occupiers, and quietly worked behind the scenes to do what he could. The Vatican estimates it was able to save upwards of 150,000 Jews during this horrible time. For those who say the Vatican should have done more to save Jews, it should be noted that it was not even able to stop the killing of Polish Catholics, of whom more than a million lost their lives, so how could it stop the killing of Jews?
Swedish diplomat Raoul Wallenberg, in an attempt to save the last remaining Jews in Hungary, arrived in Budapest on July 9, 1944, and working with neutral diplomats and the Vatican, secured the release of several thousand; his efforts at the rescue of Jews would total well over 100,000 by war's end, including Tom Lantos, a survivor who became a powerful member of the U.S. Congress.
A Nazi businessman who took advantage of the slave labor conditions to make a personal profit, Oskar Schindler, would use that profit to bribe camp guards and Nazi officials at the Plaszow camp to ensure that the workers he had grown to love and admire would survive the end of the war; among the individuals he played cat and mouse with for their lives was the camp's commandant, Amon Goeth, a sadistic man who shot Jews for target practice from his villa and tortured a captured escapee by shooting the prisoners around him. These men and women, who hid Jews out of a sense of common humanity, would not be forgotten: the state of Israel would recognize them with honorary citizenship several years later.
The Allies liberated the concentration camps in 1945 — but the question remains as to whether they could have bombed the camps or otherwise stopped the Final Solution.
The survivors of the Final Solution were very quiet about their experiences until about 1961, when Adolf Eichmann was captured in South America by Israel, tried in Jerusalem, and executed. Since then the Holocaust has become recognized as the most horrible episode of the twentieth century, and it has been analyzed in many books, courses, museums, and movies. The most important museums are Yad Vashem in Jerusalem, the Holocaust Museum in Washington, and the Museum of Tolerance in Los Angeles.
Jews were not the only victims of Nazi persecution. Members of unions, members of the Social Democratic Party, and political dissidents were also sent to the camps; indeed they were among the first ones incarcerated immediately following Hitler's appointment as chancellor. Some 20,000 Jehovah's Witnesses also were rounded up and sent to the camps, primarily for refusing to register for the draft, swear allegiance to the state, or give the “Heil Hitler” greeting. Homosexuals were arrested, forced to wear a pink triangle on their prison garments, and sent to the camps. Gypsies as well were rounded up and imprisoned, and like the Jews were deliberately marked for killing.
The mentally retarded, the disabled, and the insane were selected for the T-4 Program, which was created in 1939. Dubbed “useless eaters” by S.S. general Ernst Kaltenbrunner, these people were murdered as part of a “euthanasia” campaign, usually by placing them in a special room where a vehicle’s engine provided the carbon monoxide gas that flowed in through a hose in a wall.
Following the outbreak of World War II in Poland, the Nazis killed members of the Polish intelligentsia in territories under their control: politicians, priests, and anyone else deemed part of the Polish leadership. The remainder of the population was treated as slaves to serve their new masters; many were forced to perform hard labor, while many of the children who happened to look Aryan were kidnapped and raised as Germans in German households.
The number of Jews put to death was staggering. Beginning in the summer of 1942, a bare minimum of 960,000 are believed to have been killed at Auschwitz during its three years in operation. At Treblinka, between 750,000 and 900,000 Jews were killed within 17 months, even though the staff and guards there numbered only 120. At Belzec, 600,000 Jews died within 10 months at the hands of a staff numbering 104. In the eighteen months of its operation, Sobibor killed 250,000.
[Table: Jewish death statistics during the Holocaust, by country: prior Jewish population, estimated number killed, percentage of total, and estimated number of survivors.]
More than nine million people were discovered by the Allies to have been displaced throughout the European Theater of the war; of these, six million were returned to their native lands. One million refused, citing either a fear of communist persecution or a fear of being discovered to have collaborated with the enemy. The remainder, more than three and a half million Jews, had nothing. For these survivors, life after the war meant searching for loved ones, as well as recovering from the severe effects of malnutrition and disease at the hands of the Nazis.
As to the future of finding homes for the surviving Jews, that was solved in part by both covert and well-publicized efforts to pressure Great Britain into relinquishing control of Palestine for the purpose of a Jewish homeland, as well as by the relaxing of American immigration laws in 1948, which allowed a large influx of Jewish refugees. So shocking was the Holocaust to the Jewish mindset that it gave survivors a determination to speed the creation of the State of Israel in May 1948, vowing that neither the Holocaust nor the earlier pogroms against the Jews would ever be repeated. Since 1948, Israel has fought four major wars against neighbors bent on eradicating it, and each time Israel has emerged victorious.
The Allies were just as shocked over the conditions which prevailed at the Nazi death camps, and set up military tribunals as a result. The most famous were the Nuremberg Trials, which took place in 1945–1946 near the site of the Nazi mass rallies. For the first time in history, an international tribunal tried the 22 major living Nazis for crimes against humanity; all but three would be found guilty, and twelve would be sentenced to death by hanging.
Hitler and Goebbels committed suicide as the Russians were capturing Berlin. Himmler was captured in 1945 and committed suicide before his war crimes trial began. The main war criminals were tried at the International War Crimes Tribunals at Nuremberg in 1945–1947, and at smaller trials throughout Europe. The Holocaust was mentioned at the trials, but the major allegation against the defendants was the systematic planning of an unjust war. Many Nazis fled justice, reaching Argentina or other distant locations. Adolf Eichmann, a chief architect of the Holocaust, was captured while hiding in Argentina under an assumed name, brought to Israel, and put on trial in 1961. He was found guilty, and suffered the first and only death penalty carried out in Israel's history. Other Nazis would eventually be brought to trial: Klaus Barbie, the “Butcher of Lyon”, was tried in France in 1987, and Maurice Papon was tried a decade later for collaborating with the Nazis. These trials brought an awareness of the Holocaust to new generations.
The word Holocaust comes from the Greek word holokaustos (holos: complete, and kaustos: a sacrificial or burnt offering to a god); the Hebrew words Sho'ah (catastrophe) and Hurban (destruction) were also used, and survivors have used both to refer to what seemed to be the complete and utter destruction of the Jewish people at the hands of the Nazis, specifically in the crematoria of the extermination camps built for that purpose. Many survivors have taken offense at the term Holocaust because of the meaning of the word, and Sho'ah has become the preferred term in some parts of the world.
Books and film
- Holocaust denial
- Holocaust Memorial Day
- History of Poland
- Bergen-Belsen, one of the main killing camps
- Heinrich Himmler, in charge of the SS
- SS, in charge of the killing
- Original U.S. Army record of the discovery of camps, recorded on orders of General Dwight D. Eisenhower
- H-HOLOCAUST, daily discussion group, edited by scholars; numerous book reviews and reports on current scholarship
- Holocaust Encyclopedia
- Yad VaShem, the World Center for Holocaust Research, Jerusalem, Israel
- United States Holocaust Memorial Museum in Washington, D.C.
- Babi Yar, poem by Yevgeni Yevtushenko
- Houston Holocaust Museum
- Florida Holocaust Museum
- Virginia Holocaust Museum
- Holocaust Awareness Museum & Educational Center of Philadelphia; America's First Holocaust Museum
- Searchable list of 2300 victims from Nuremberg
- Beth Shalom Holocaust Centre in Newark, England
- Montreal Holocaust Memorial Center Museum
- German Government's Memorial To Jews Murdered During Holocaust
- AMCHA: Israeli Association of Holocaust Survivors
- Hitler's first writing about Jewry, dated 16 September 1919.
- Raffael Scheck, Hitler's African Victims: The German Army Massacres of Black French Soldiers in 1940 (2006) online review.
- After the war, West Germany recognized its guilt and made large financial payments to Israel; Communist East Germany refused to do the same.
- For further information, see
- It is estimated that around 6 million Jews were killed during the Final Solution, along with as many as another 6 million non-Jews.
- The Far East War Crimes Trials were held from 1946 to 1948, and resulted in the conviction of 25 Japanese generals and high officials accused of crimes against peace. Over 2,000 local and regional trials convicted 4,000 Japanese officers accused of mistreating prisoners and civilians.
Surveys and victims
- Bloxham, Donald and Kushner, Tony. The Holocaust: Critical Historical Approaches. (2005). 238 pp.
- Brandon, Ray, and Wendy Lower, eds. The Shoah in Ukraine: History, Testimony, Memorialization. (2008). 378 pp. online review
- Dawidowicz, Lucy. The War Against the Jews, 1933–1945 (1986)
- Hitler's War Against the Jews: A Young Reader's Version of the War Against the Jews, 1933-1945, by Lucy S. Dawidowicz (1978) excerpt and text search
- Edelheit, Abraham et al. History Of The Holocaust: A Handbook and Dictionary (1995) 544pp, a standard reference work online edition
- Friedlander, Saul. Nazi Germany and the Jews: 1933-1945 (2009) abridged version of standard 2 volume history:
- Friedlander, Saul. The Years of Extermination: Nazi Germany and the Jews, 1939-1945 (2007), the standard scholarly history excerpt and text search
- Friedlander, Saul. The Years of Persecution:1933-1939 (1998)
- Friedman, Saul S. A History of the Holocaust. (2004). 494 pp.
- Gilbert, Martin. Never Again: The History of the Holocaust (2000) excerpt and text search
- Gilbert, Martin. The Holocaust: A History of the Jews of Europe During the Second World War (1987) excerpt and text search
- Gilbert, Martin. The Routledge Atlas of the Holocaust (2002)excerpt and text search
- Gutman, Israel. ed. Encyclopedia of the Holocaust (4 vol 1990), a standard reference work
- Laqueur, Walter, ed. The Holocaust Encyclopedia (2001).
- Landau, Ronnie. The Nazi Holocaust (2002)
- Marrus, Michael A. The Holocaust in History (1989)
- Niewyk, Donald, and Francis Nicosia. The Columbia Guide to the Holocaust. (2000) online edition; online review
- Rosen, Philip. Dictionary of the Holocaust: Biography, Geography and Terminology. (1997)
- Rothkirchen, Livia. The Jews of Bohemia and Moravia: Facing the Holocaust. (2006). 447 pp.
- Spector, Shmuel ed., The Encyclopedia of Jewish Life: Before and During the Holocaust (2001).
- Yahil, Leni. The Holocaust: The Fate of European Jewry, (1990).
- Browning, Christopher. Nazi Policy, Jewish Workers, German Killers (2000)
- Burleigh, Michael. The Third Reich: A New History. 2000. 864 pp., stresses central role of antisemitism.
- Dawidowicz, Lucy. The War Against the Jews, 1933–1945 (1986)
- Friedlander, Henry. The Origins of Nazi Genocide: From Euthanasia to the Final Solution. (1995) 445 pp. online review;online edition
- Friedlander, Saul. Nazi Germany and the Jews: Volume 1: The Years of Persecution 1933-1939 (1998)), the standard scholarly history
- Friedlander, Saul. The Years of Extermination: Nazi Germany and the Jews, 1939-1945 (2007), the standard scholarly history excerpt and text search
- Gaunt, David; Levine, Paul A.; and Palosuo, Laura, eds. Collaboration and Resistance during the Holocaust: Belarus, Estonia, Latvia, Lithuania. (2004). 519 pp.
- Goldhagen, Daniel Joseph. Hitler’s Willing Executioners: Ordinary Germans and the Holocaust (1997).
- Graml, Hermann. Antisemitism in the Third Reich (1992).
- Gutman, Israel, ed. Encyclopedia of the Holocaust, 4 vol (1989)
- Johnson, Eric A. Nazi Terror: The Gestapo, Jews, and Ordinary Germans (2000). excerpt and text search
- Lewy, Guenter. The Nazi Persecution of the Gypsies (2001). excerpt and text search
- Levy, Richard, ed. Antisemitism: A Historical Encyclopedia of Prejudice and Persecution (2005)
- Lower, Wendy. Nazi Empire-Building and the Holocaust in Ukraine. (2005). 307 pp.
- Wachsmann, Nikolaus. "Looking into the Abyss: Historians and the Nazi Concentration Camps," European History Quarterly, 4 2006; vol. 36: pp. 247 - 278. fulltext in Sage; historiography
- Wistrich, Robert S. Hitler and the Holocaust. (2001). 295 pp.
- Bartov, Omer. Mirrors of Destruction: War, Genocide and Modern Identity. (2000). 310 pp. ISBN 978-0-19-507723-0. online review; excerpt and text search
- Bloxham, Donald. Genocide on Trial: War Crimes Trials and the Formation of Holocaust History and Memory. (2001) 292pp ISBN 978-0-19-820872-3. online review
- Carrier, Peter. Holocaust Monuments and National Memory Cultures in France and Germany since 1989: The Origins and Political Function of the Vél' d'Hiv' in Paris and the Holocaust Monument in Berlin. (2005) 267 pp.
- Douglas, Lawrence. The Memory of Judgment: Making Law and History in the Trials of the Holocaust (2000) excerpt and text search
- Greenspan, Henry. On Listening to Holocaust Survivors: Recounting and Life History. (1998) 220 pp. ISBN 978-0-275-95718-6. online review
- Haggith, Tony and Newman, Joanna, ed. Holocaust and the Moving Image: Representations in Film and Television. 2005. 317 pp.
- Mikhman, Dan. Holocaust Historiography: A Jewish Perspective: Conceptualizations, Terminology, Approaches, and Fundamental Issues (2003)
- Roseman, Mark. A Past in Hiding: Memory and Survival in Nazi Germany (2001). excerpt and text search
- Rosen, Philip. and Nina Apfelbaum. Bearing Witness: A Resource Guide to Literature, Poetry, Art, Music, and Videos by Holocaust Victims and Survivors (2001) excerpt and text search
- Stone, Dan, ed. The Historiography of the Holocaust. (2004). 573 pp
- Waxman, Zoe Vania. Writing the Holocaust: Identity, Testimony, Representation (2007) excerpt and text search
- Wiesel, Elie, and Robert Franciosi. Elie Wiesel: Conversations (2002) excerpt and text search
- Wiesel, Elie. Night (1999).
Reactions and memory in U.S.
- Abzug, Robert H. ed. America Views the Holocaust, 1933-1945: A Brief Documentary History (1999) excerpt and text search
- Lipstadt, Deborah E. Beyond Belief: The American Press and the Coming of the Holocaust, 1933–1945 (1993). excerpt and text search
- Newton, Verne W., ed. FDR and the Holocaust (1996). excerpt and text search
- Novick, Peter. The Holocaust in American Life (1999). excerpt and text search
- Wyman, David. The Abandonment of the Jews: America and the Holocaust, 1941–1945. (1984). excerpt and text search
- Aroneanu, Eugene and Thomas Whissen, eds. Inside the Concentration Camps: Eyewitness Accounts of Life in Hitler's Death Camps (1996) 176 pp, online edition
- Greene, Joshua M, and Shiva Kumar, eds. Witness: Voices from the Holocaust (2000). excerpt and text search
- Klemperer, Victor. I Will Bear Witness: A Diary of the Nazi Years 1942–1945 (2001). excerpt and text search vol 2
- Kremer, S. Lillian, ed. Holocaust Literature: An Encyclopedia of Writers and Their Work (2002).
- Siedlecki, Janusz Nel, et al. We Were in Auschwitz (2000).
- Szpilman, Wladyslaw. The Pianist (2000). excerpt and text search
- Pritchard, R. John, “War Crimes, International Criminal Law, and the Postwar Trials in Europe and Asia,” in World War II in Asia and the Pacific and the War's Aftermath, with General Themes, edited by Loyd E. Lee (1998).
- “Schindler’s List” (DVD and VHS) (1993).
- USC Shoah Foundation, largest video archive of testimonies of Holocaust survivors and witnesses,
As the beginning of the school year gets underway many are continuing to prepare for the Common Core. Schools across the nation are at various stages of implementation. Publishers and bookstores are trying to understand more about the standards and how it affects them. Here is a variety of information, from resources to recent articles and postings, to help you stay up to date on the standards. We will include an updated toolkit with each issue of our column, Cut to the Core. Here is the September toolkit:
Just the Basics: Understanding the Common Core Standards
Read the standards here: The Common Core Standards Initiative.
Select “In the States” to find out more about your state and the Common Core.
Click here for a list of frequently asked questions.
A basic article from August of 2013 from the Associated Press: What Are the Common Core State Standards?
A listing of resources provided and maintained by the New York State Education Department: EngageNY Common Core Toolkit
Resources for Publishers
A listing of documents to help publishers support teachers who need to educate their students and help them learn. Teachers need high-quality resources, not recycled and repackaged information with a “Common Core Aligned” label slapped on it.
Apps to Help You Understand and Implement the Common Core
Common Core Tracker: A grade book aligned with the Common Core for iPads and computers.
Resources for Librarians
Embracing The Common Core…What does it look like and what’s my role? This pamphlet, created by Paige Jaeger, the Coordinator of School Library Services at WSWHE BOCES, offers a clear explanation of the standards and the important role that the librarian plays in implementing them.
Common Core Writing...Let the Library Help You Poster and Postcards: Also developed by Paige Jaeger this is a wonderful advocacy tool to promote the role the librarian plays in helping to implement the standards within their school. Paige Jaeger is available for presentations, training and consulting.
Declaration for the Right to Libraries: Don't miss this opportunity to support libraries and engage your community. Print out your copy of the Declaration for the Right to Libraries and hold a signing ceremony in October (or whenever possible). School librarians may want to hold the event during Back to School Night. Barbara Stripling, the President of the American Library Association, is asking that libraries of all types join her campaign, America's Right to Libraries, by holding signing ceremonies where community members, organizations, and officials can visibly sign and stand up for their right to have vibrant school, public, academic, and special libraries in their community.
Common Core and School Librarians: An Interview with Joyce Karon: This article from School Library Monthly discusses what the standards are and what librarians should be doing.
School Librarians Valuable Behind Scenes as Schools Adopt Common Core Standards: This article from the New York State School Boards Association dispels myths and stereotypes that surround the role of the librarian and highlights the importance of librarians as the standards ask students to locate, evaluate and synthesize information. This is a good article to share with faculty and administrators.
Communicating Your Message via Web 2.0 Tools
Be visible! Whether you work in a school or public library communicating your role has never been more important. One way to do that is by bringing your message to the web. Listed below are tools to help you get started. Note that authors may find these tools useful as well.
Animoto. You can create an Animoto video slideshow showcasing your program and the Common Core. Here's a great example: School Board Presentation: Librarians Integral to Student Achievement
LiveBinders. Create paperless binders to curate information. Here's an example: School Librarians and the Common Core Standards: Resources
Scoop.it. Use this online curation tool to create beautiful "topics" pages to share relevant information with faculty and parents on the Common Core. Better than a simple pathfinder, this tool helps you organize key ideas and supporting articles into one accessible location. Here's an example: Common Core State Standards for School Leaders
Smore. Create an online newsletter providing information and resources on your library program, the Common Core, technology, etc. that can easily be emailed to teachers, parents, and administrators. Here's an example: Common Core Matters
American Association of School Librarians (AASL) Standards for the 21st-Century Learner Lesson Plan Database. Lesson plans here are aligned with common core crosswalk.
Informative Article from “Getting Smart” on the launch of the LearnZillion Online Lessons. In the fall of 2012 LearnZillion began posting the first of 2,000 online lessons to help students, teachers and districts adopt the Common Core Standards. The lessons come with high quality “screencasts” created by the nation’s top teachers that were recruited for the project.
327 Common Core Aligned Playlists From Mentor Mob and LearnZillion from Cool Tools for 21st Century Learners. In October of 2012 MentorMob and LearnZillion teamed up to provide 327 Common Core Aligned “Playlists.” MentorMob playlists offer an interactive format for students to follow content on a step by step basis, helping them remain engaged during the learning process.
Library Of Congress Unveils Massive Common Core Resource Center. Looking to incorporate primary sources into your lesson plans? The Library of Congress comes to the rescue. You can even search by grade level and standard.
Share My Lesson: Common Core Standards Information Center. Share My Lesson is a free site developed by the American Federation of Teachers and TES Connect, where educators can share resources and lessons. There is a Common Core Standards Information Center providing additional information such as the standards, the latest news, and even a discussion forum. Search for lessons aligned to the Common Core by grade level or standard and you will find lessons with videos, PowerPoint presentations, and word documents that you can use and save under your account.
Tools to Guide the Collection of Evidence of Shifts in Practice. From the EngageNY website - A document for teachers, coaches and instructional leaders to support the development of Common Core Standards aligned practice.
Resources for Communicating With Parents
Three-Minute Video Explaining the Common Core State Standards. This short, concise video offers an informative introduction to the standards, using plain language rather than “eduspeak” to clearly introduce the Common Core Standards to parents.
New York City Department of Education Common Core Tips for Families. The New York City Department of Education (DOE) offers general information and tips for parents, caregivers and families on the Common Core Standards on this clean site that offers the information in multiple languages.
Spotlight on The Common Core Standards: What Do Parents Need to Know? This pamphlet offers options for educators and parents to communicate important points about the Common Core to parents in the languages of English and Spanish.
PTA-Four-page Parents' Guides to Student Success. Created by the National PTA, these printable guides offer a detailed description for parents explaining what to expect with the new standards and are broken out by individual grades from k-12. Resources are available in both English and Spanish.
Schenectady City Schools: Understanding the Common Core State Standards. On their website this district showcases an excellent example of flipped communication by sharing information with parents via a video. This also presents a wonderful way to reach parents who may not be able to attend meetings.
Resources for Educators Working With English Language Learners
Common Core en Español. The Standards Initiative Translation Project is a product of the California Department of Education (CDE) and the San Diego County Office of Education (SDCOE) and is “committed to providing leadership, assistance, and resources so that every student has access to an education that meets world-class standards.” The site has gone through a Peer Review and District Review Process.
Normas para la enseñanza de las artes de lenguaje en español. A document created over the course of two years by Spanish language dual language teachers. Since the majority of literacy skills transfer across languages and are applicable to learning in either language the team created slightly modified translations of many of the CCSS standards.
Common Core Bilingual Standards. Posted on the EngageNY website, this document details the importance of Common Core Standards for English Language Learners.
Resources for Special Education
National Dissemination Center for Students with Disabilities. A question and answer type document for parents, teachers and administrators that have questions about the Common Core Standards and their application for students with disabilities.
Students with Disabilities & the Common Core State Standards - Resources. A comprehensive list of resources provided by Achieve, Inc.
People are up in arms about the Light Brown Apple Moth and the government's spraying program designed to eradicate it. The question everyone is asking is, why are people being sprayed with an unregistered pesticide along with crops and fields?
Drina Brooke wants answers, so she sent a letter to the Department of Agriculture to see what they had to say. Here is their response:
Dear Ms. Brooke,
Thank you for writing about the Light Brown Apple Moth (LBAM) project. I value hearing your thoughts on the project's impact on California. California must work to combat the LBAM because of the complex threat it poses to our diverse range of agricultural and natural plant life. This invasive pest attacks more than 250 crops and 2,000 plants and threatens the native and endangered species that depend on them. If it becomes established statewide, the LBAM has the potential to cause billions of dollars of damage annually and cost the state numerous jobs. California has a duty to prevent the spread of the LBAM before it crosses borders into other states, agricultural regions and environments.
The LBAM is an invasive pest - not native to California - with few natural enemies here to reduce its expanding population. To combat this growing threat, we have proposed an integrated pest-management approach utilizing aerial and ground application of a moth pheromone. However, misinformation about the LBAM and our program continues to spread and cause unwarranted fear - despite constant and open dialogue for more than a year with citizens and local officials. There has been no shortage of grossly exaggerated and completely unsubstantiated claims - such as the pheromone product's being untested and the treatments causing red tide (red tide is a naturally occurring marine algal bloom). Fortunately, the actual facts and due diligence have proven these claims false.
Pheromones are simply chemical signals that resemble a scent. Pheromone treatments have been used in the United States and around the world in agricultural and urban areas (including residential areas of Illinois, Indiana, Ohio, Virginia and Wisconsin) for more than a decade without incident. As recently as last year, more than 3 million acres in the United States were aerially treated with moth pheromones to disrupt the mating of the harmful gypsy moths.
For years, environmentalists have urged farmers to develop alternatives to conventional, toxic, "kill-on-contact" pesticides; pheromones are the alternative. These pheromones do not even harm the moths; they merely mimic a signal "scent" naturally emitted by the female moth, thereby distracting the males so they cannot locate a mate and reproduce.
Recently, the claim that residents became sick from past treatments has held the public's attention and has been the subject of demonstrations. Public health officials with three state departments thoroughly reviewed health claims submitted during and after the aerial pheromone treatments last year in Monterey and Santa Cruz counties and could find no link between the claims and the treatments. As the Governor recently said in Monterey, the spraying is safe, and "there is nothing that says otherwise."
I also hear a number of misleading and inaccurate references to describe the pheromone, including: hormone, carcinogen, mutagen, endocrine disruptor and other inaccurate descriptions. These unsupported claims overlook the fact that the federal Environmental Protection Agency, our state's Department of Pesticide Regulation and numerous health agencies have thoroughly reviewed and unanimously approved these products and their classification as pheromones. In fact, the pheromone products we have used in this program are approved for treating organic crops; they are safe enough that the law states you don't even have to wait or wash them off after a treatment before you eat the produce.
However, to thoroughly ensure everyone's safety, the aerial spraying has been postponed while we complete what's known as "six-pack" toxicology tests in addition to the normal extensive tests on the pheromone products. These tests thoroughly evaluate toxicity for eye, inhalation, respiratory and other potential irritants. I am confident that these additional tests will reassure Californians that we are taking the safest, most health-conscious and most progressive approach to protecting our state from this very real threat to our agriculture, environment and economy. I implore everyone to rely on sound science and to shut the door on false information. For more information about the LBAM project, please visit our website at www.cdfa.ca.gov or call the LBAM hotline at 1-800-491-1899.
As a public official, I am sworn to protect the public, the environment and the ecosystems that make California such a uniquely productive and sustainable resource. I take that responsibility seriously, and I vow to pursue only the safest, most environmentally friendly means available.
Again, thank you for writing.
Sincerely,
A.G. Kawamura, Secretary
California Department of Food and Agriculture
I am so relieved to hear that this agency is sworn to protect the public, and that all those people who got sick from the last spraying must have been mistaken about their illnesses. The following statement from the Agriculture department's Secretary A.G. Kawamura is so reassuring. Too bad it is false.
There has been no shortage of grossly exaggerated and completely unsubstantiated claims - such as the pheromone product's being untested and the treatments causing red tide (red tide is a naturally occurring marine algal bloom). Fortunately, the actual facts and due diligence have proven these claims false.
Can tea tree oil be used
as a body lice treatment?
Body lice are not the same as head lice. They thrive in crowded and unsanitary conditions such as might be found among transients, the homeless, or those who cannot frequently change into clean garments. The most common cure for them is a good cleaning.
To get rid of body lice, first take a good long bath or hot shower. Put on some clean clothes, and then get to work washing clothes, bedding, carpets, furniture, and anything else that the infected persons have come in contact with. The life cycle of a body louse is slightly different from that of a head louse or pubic louse.
Body lice lay eggs, or nits, that hatch within 30 days and become mature enough to lay eggs in 7 days.
Body lice can live for ten days without a food source (your blood). This can make for a more difficult task in ridding a house of them.
Thankfully there is a treatment for human body lice. If a person is infected with body lice, a doctor might recommend washing all over with a pediculicide shampoo that contains permethrins, lindane, or malathion. These are considered by many to be a very dangerous treatment.
The doctor will also probably prescribe an antibiotic.
Instead of a dangerous pesticide wash for your body, ask your doctor to consider a good liquid body wash or tea tree oil shampoo as a body lice treatment, and add ten drops of tea tree oil to each use.
It has been found that a 10% solution of tea tree oil can kill lice, and that tea tree oil is more effective than DEET as a repellent.
Body lice have been associated with typhus epidemics and louse-borne relapsing fever. Typhus is a louse-borne disease caused by bacteria that develop in the louse's gut and are excreted in its feces. When the louse bites its victim, there is usually itching at the bite wound site, and the feces are scrubbed into the open wound, causing the infection.
Typhus is treated successfully most often when caught early on. If left unchecked, mortality rates are as high as 60%. During Napoleon's retreat from Moscow in 1812, more French soldiers died of typhus after body lice bites than were killed by the Russians.
According to Wikipedia, epidemic typhus has the following symptoms:
“The symptoms set in quickly, and are among the most severe of the typhus family.
They include severe headache, a sustained high fever, cough, rash, severe muscle pain, chills, falling blood pressure, stupor, sensitivity to light, and delirium.
A rash begins on the chest about five days after the fever appears, and spreads to the trunk and extremities but does not reach the palms and soles.
A symptom common to all forms of typhus is a fever which may reach 39°C (102°F). Since it is so serious, it's important to follow your doctor's recommendations for body lice treatment. You will want to do everything possible to be sure you are killing body lice in all their hiding places.
Make sure to wash all linens, clothes, bedcovers, stuffed toys, and anything else where contact has been made, in 130 degrees Fahrenheit water, and dry them in a hot air dryer at the same temperature. Swift and complete action in cleanup will put this in check as quickly as possible.
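The temperatures quoted above (a 39°C fever, 130°F wash and dry water) can be cross-checked with the standard Fahrenheit/Celsius conversion. A minimal sketch (the function names are illustrative, not from any particular library):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Convert degrees Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

# The recommended 130 °F wash temperature is about 54 °C.
print(round(f_to_c(130), 1))  # 54.4
# A 39 °C fever corresponds to roughly 102 °F.
print(round(c_to_f(39), 1))   # 102.2
```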
What it is:
A market is a location where buyers and sellers meet to exchange goods and services at prices determined by the forces of supply and demand.
How it works (Example):
A market may be a physical location or a virtual one over a network (for example, the internet). Here, people who have a specific good or service (the supply) they want to sell interact with people who wish to buy it (the demand).
Prices in a market are determined by changes in supply and demand. If market demand is steady, an increase in market supply results in a decline in market prices, and vice versa. If market supply is steady, a rise in demand results in a rise in market prices, and vice versa.
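These relationships can be sketched with a toy linear supply-and-demand model. The curve parameters below are arbitrary illustrations, not figures from the article; the point is only the direction in which the equilibrium price moves.

```python
def equilibrium_price(a, b, c, d):
    """Price where linear demand Qd = a - b*P equals linear supply Qs = c + d*P,
    i.e. P* = (a - c) / (b + d)."""
    return (a - c) / (b + d)

base        = equilibrium_price(a=100, b=2, c=10, d=3)  # baseline market
more_supply = equilibrium_price(a=100, b=2, c=25, d=3)  # supply shifts out
more_demand = equilibrium_price(a=120, b=2, c=10, d=3)  # demand shifts out

# Higher supply lowers the price; higher demand raises it.
assert more_supply < base < more_demand
```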
Producers advertise goods and services to consumers in a market in order to generate demand. Also, the term "market" is closely associated with financial assets and securities prices (for example, the stock market or the bond market).
Why it Matters:
A market facilitates transactions between buyers and sellers (financial markets) and producers and consumers (consumer goods and services market). Markets experience fluctuations and price shifts resulting from changes in supply and demand. These changes result from fluctuations in many variables including, but not limited to, consumer preferences and perceptions, the availability of materials, and external sociopolitical events (for example, wars, government spending, and unemployment).
Gaining weight is troublesome for most people, but especially for athletes who engage in a substantial amount of exercise to improve their sports performance, or for fitness enthusiasts who are using exercise as a strategy for losing weight. You would think that exercising for 10-plus hours a week (the amount of time required to train for long-distance events such as half marathons or longer) would induce weight loss rather than weight gain; however, many athletes are surprised to experience the latter. There are several possible explanations for this seemingly paradoxical situation.
First, athletes' diets typically contain a high percentage of their total calories from carbohydrates. Carbohydrates are broken down into glucose, and extra glucose that is not needed immediately for fuel is stored in the muscles and liver in the form of glycogen. Glycogen molecules hold a substantial amount of water: each gram of glycogen is stored with about 2.7 grams of water. So, if you are consuming more carbohydrates, your body is going to contain more water.
This additional water is not the same thing as water retention, where excess water is held between cells; the water attached to a glycogen molecule is inside the cells, which makes it healthy. Nevertheless, it can increase your body weight by as much as 3-5 pounds. This weight gain is only water weight, not fat weight, so it should not concern the athlete or fitness enthusiast who experiences it.
In addition to the body holding more water weight because of the extra consumption of carbohydrates, endurance training enhances the body's ability to store more glycogen than it would in a normal non-trained or pre-trained state. The average glycogen storage capability for the muscles of non-trained individuals is about 80-90 mmol/kg. In contrast, a trained individual has a muscle glycogen storage capability of up to 135 mmol/kg. So, endurance athletes and fitness enthusiasts can gain water weight by consuming more carbohydrates, as well as by training their system to store more glycogen.
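The arithmetic behind the "3-5 pounds" figure can be sketched as a back-of-the-envelope calculation. The 500 g total glycogen store used below is a hypothetical value for a trained athlete (the article gives per-kilogram muscle concentrations, not a whole-body total); the 2.7 g water-per-gram ratio is the article's figure.

```python
GRAMS_PER_POUND = 453.6
WATER_PER_G_GLYCOGEN = 2.7  # grams of water stored with each gram of glycogen

def glycogen_weight_lb(glycogen_g):
    """Total added body weight (glycogen plus its bound water), in pounds."""
    total_grams = glycogen_g * (1 + WATER_PER_G_GLYCOGEN)
    return total_grams / GRAMS_PER_POUND

# Hypothetical 500 g whole-body glycogen store for a carb-loaded,
# endurance-trained athlete: roughly 4 lb of extra (healthy) water weight.
print(round(glycogen_weight_lb(500), 1))  # 4.1
```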
Getting your body to store more glycogen is a strategy for competitive endurance athletes: the more glycogen you store, the better your performance. The additional benefit of the extra water is that the cells are hydrated, and optimal hydration is essential for good sports performance. So, once again, this water weight gain is a healthy weight gain.
So unless you are just blatantly overeating, the weight gain is probably just water weight. However, there are many instances where athletes gain fat weight as well, because they are just over-eating or eating a lot of high calorie or high fat foods because they think their training will negate the extra calories and keep them from gaining weight. The bottom line is, if you eat more calories than you burn off, you will gain weight. A trained body can use extra calories more efficiently than an untrained body, but weight gain is still possible in highly trained athletes and fitness enthusiasts who eat excessive amounts of calories.
Finally, endurance training typically does not induce weight gain from muscle hypertrophy, however if the individual was not involved in a fitness program prior to the onset of their endurance training, he/she could experience some muscle hypertrophy from running. In addition, already trained recreational runners, who have never trained at high intensities, could also experience some muscle weight gain provided that the muscles were worked at intensities that caused the muscles to work at near maximal force generation, such as in sprinting in interval training or running up hills.
Trish Schwartz, M.Ed., has worked in the fitness industry for 25 years. Her experience includes owning and operating fitness centers, running her own in-home personal training business, working as a physical education instructor at the collegiate level and teaching at a six-month personal training school. She is a certified Health/Fitness Instructor (HFI) through ACSM and Pilates Mat Instructor through Physical Mind Institute. She earned a Bachelor of Science degree in physical education and Master of Education degree in Exercise Physiology from Colorado State University.
< Last Article
How can I avoid Halloween weight gain without missing out on the fun?
Next Article >
Do I burn more calories when it is hot outside or cold?
Across the US, 25 million people suffer from the discomfort of swelling and varicose veins of the legs due to superficial venous reflux disease. Traditionally, this condition has led patients to undergo the more intensive vein stripping surgery. However, with today's technologies, these more aggressive surgeries are rarely required.
Venous reflux disease occurs in leg veins, particularly the saphenous veins, which are designed to carry blood back to the heart. To prevent blood from flowing in the wrong direction these veins have numerous valves. When the valves fail, blood flows back down (refluxes) and pools up in the leg veins, causing them to swell. Venous reflux is therefore the underlying cause of varicose veins.
Varicose veins are swollen, twisted, often-unsightly blue veins in the legs, close to the surface of the skin. Because their valves are damaged, varicose veins hold more blood at higher pressure than normal veins, forcing fluid into the surrounding tissue causing swelling, and often pain. They generally occur in the legs.
Spider veins, on the other hand, are small clusters of red, blue or purple veins that lie closer to the surface of the skin than varicose veins. They can look like tree branches or spider webs and most commonly appear on the thighs, calves and ankles. Spider veins cause no pain or discomfort and so are considered a "cosmetic" problem.
Varicose veins may ache and itch, and legs can become tired, heavy and painful. The feet and ankles may swell because of poor blood flow. Left untreated, varicose veins can eventually rupture or cause leg ulcers.
Walking, wearing compression hose, elevating and resting the legs may relieve some of the symptoms of varicose veins (weight reduction is also helpful), and may prevent the condition from worsening. Should the veins continue to deteriorate however, medical procedures may be required.
Medical treatment options vary depending on the individual, but it is important to know that today's technologies allow for a much shorter recovery period and minimal discomfort. Below are some examples of the types of options you as a patient may consider:
In most cases spider veins can be remedied using a process called Veinwave. Veinwave is an innovative, safe, minimally invasive procedure for treating tiny spider veins, primarily in sensitive areas on the face and on the legs, especially the ankles and knees. Veinwave technology can safely pinpoint sensitive locations for treatment. Thermo-coagulation, which uses microwaves to heat fine blood vessels, transfers the microwave energy to the affected area through an ultra-fine insulated needle, causing the vein to seal shut, collapse, and instantly disappear. A typical procedure can treat 18 to 20 inches of vessels in as many minutes, with no recovery time, bandages, or anesthetic required.
For more pronounced varicose veins, one method that works well for many patients is a radiofrequency catheter called ClosureFast. The procedure is accomplished with relative ease; the average treated length of a leg vein (45 centimeters, for example) is completed in 3 to 5 minutes with ClosureFast.
It is an outpatient procedure requiring local or general anesthesia, and patients return to normal activities within 1 to 2 days. Besides relief from the discomfort and itch of varicose veins, the legs gain a positive cosmetic result with minimal or no scarring, bruising, or swelling.
There are many other options available and you should talk through these options with your physician in order to choose what is right for you.
To learn more please visit http://www.veinspecialists.org/
This is a lesson about the characteristics of planets, comets, asteroids, and trans-Neptunian objects. Learners will classify objects and then apply what they have learned by participating in a formal debate about a solar system object discovered by the New Horizons spacecraft and by defining the term planet.
This is a lesson about solar system exploration. Learners will understand that combining information gathered by a variety of robots gives us a more comprehensive understanding of our solar system. Learners will explore a planet made up of a combination of materials while simulating the perspective of different missions: pre-launch reconnaissance, fly-by, orbit, and landing. Learners will record and share their observations. Requires the book "Seven Blind Mice" by Ed Young. This is lesson 8 of 16 in the MarsBots learning module. This lesson is adapted from "Strange New Planet," an activity in the "Mars Activity Book."
Learners will explore the concept of parallax (the apparent displacement of an object caused by a change in the viewer’s position) and then simulate the discovery of Pluto with a Blink Comparator via an online interactive.
Addison's Disease - Topic Overview
What is Addison's disease?
Addison's disease develops when the adrenal glands, which are above the kidneys, are not able to make enough of the hormones cortisol and, sometimes, aldosterone.
Your body needs both of these hormones to work as it should. Cortisol helps the body cope with extreme physical stress from illness, injury, surgery, childbirth, or other reasons. Aldosterone helps the body hold on to the salt it needs, and it keeps your blood pressure steady.
Normally, the level of these hormones increases through a chain reaction. First, the hypothalamus in the brain makes a hormone that the pituitary gland needs to make another hormone called ACTH. ACTH then tells the adrenal glands to make cortisol or aldosterone. But with Addison's disease, the adrenal glands can't make enough of the hormones.
If you have Addison's disease, you need to take medicine for the rest of your life to replace the hormones your body can't make. If you don't treat the disease, an adrenal crisis may occur that can lead to death because of a steep drop in blood pressure.
What causes Addison's disease?
Addison's disease can occur:
- When the body's immune system kills off the part of the adrenal glands that makes cortisol and aldosterone. This is the most common cause.
- When the adrenal glands are harmed by:
- Infections, such as tuberculosis, HIV, and other bacterial or fungal infections.
- Cancer that has spread to the adrenal glands. This is mostly seen in lung cancer.
- Bleeding into the adrenal glands as a side effect of using blood thinners.
- Some types of surgery or radiation treatments.
- The use of certain medicines, such as high doses of ketoconazole.
- If you take a steroid medicine for a long time and then suddenly stop using it.
People can get Addison's disease at any age.
What are the symptoms?
Common symptoms include:
- Skin that looks darker than normal.
- Loss of appetite.
- Feeling lightheaded.
- Feeling sick to your stomach or vomiting.
- Craving salt.
Brian Martin's publications on nuclear war
Brian Martin's publications
Brian Martin's website
Since the first nuclear explosions in 1945, scientific and popular attention has focussed at different times on different actual and potential effects of nuclear weapons. First highlighted were the immediate effects of blast and heat. Because the explosions over Hiroshima and Nagasaki were air bursts, the full implications of radioactive fallout were not realised until the extensive atmospheric testing of hydrogen bombs in the 1950s. In the 1970s, it was realised that nuclear explosions could inject large amounts of nitrogen oxides into the stratosphere, acting as a catalyst to reduce ozone levels and thereby allow increased amounts of ultraviolet light to penetrate to the earth's surface.
It was only in 1982 and 1983 that another possible consequence became the subject of intensive scientific investigation and extensive political discussion: severe climatic effects. A major nuclear war would lead to vast amounts of soot and dust being lofted into the atmosphere, most importantly from the burning of cities. This material would absorb incoming solar radiation but continue to allow infrared heat from the earth's surface to escape to outer space. The result could be a significant drop in surface temperatures, especially in continental interiors. The temperature drop could cause massive death by freezing and destruction of ecosystems. The popular term for this is 'nuclear winter', which for convenience I will use in preference to some other less emotive but more cumbersome phrase such as 'global climatic effects of nuclear war, especially temperature decreases'.
The nuclear winter issue illustrates the interplay between what are usually called science and politics. Proponents of the strong nuclear winter position -- those who emphasise the most serious consequences -- have consistently adopted the mantle of science, trying to distance themselves from political motives, while at the same time a few of them have been active in spelling out what they believe to be the policy implications of the science. Critics of the strong position -- those who emphasise uncertainties and the likelihood that the effects may be less than the worst -- have also adopted the mantle of science. In addition, a few critics have questioned the motivations behind nuclear winter research.
For the sake of exposition, I will continue to talk of 'science' -- scientific knowledge, the methods used in generating and validating it, and the community of people who produce it -- and 'politics' -- the exercise of power and social arrangements embodying the distribution of power -- as distinct entities. I first deal with ways in which politics may have entered the science of nuclear winter, then with ways in which the science of nuclear winter has entered politics and finally with ways by which the distinction between science and politics is maintained. In conclusion, some implications for science and public policy are spelled out.
The approach used here draws on the sociology of scientific knowledge[4-7], which examines the social mechanisms which serve to establish what counts as knowledge. These mechanisms include economic and political structures, potential applications, professional interests and interpersonal dynamics. Data, arguments, claims about method, status and tradition all can be used as 'resources' or 'tools' to persuade other scientists that certain things constitute valid knowledge.
This approach to studying science does not attempt to judge what is scientifically 'correct'. The analysis includes examination of social processes associated with all knowledge claims, whether the balance of informed scientific judgement accepts or rejects those claims now or in the future.
More than 'pure science' is involved when a researcher decides that a particular area is 'scientifically interesting'. Many features of wider society influence the process of choice of research, including the availability of funding, possible applications, technological infrastructure, ideas prevalent in society and the social position of scientists. Each of these factors played a role in turning nuclear winter into a priority research area in the 1980s.
The resurgence of the peace movement in the early 1980s provided fertile ground for discovering the nuclear winter effect. The upsurge in peace activism spread throughout numerous organisations and occupational groups, including doctors, scientists and engineers. In this context, the editors of the environmental journal Ambio, published by the Swedish Academy of Sciences, planned a special issue in 1982 to cover the effects of nuclear war. Paul Crutzen was asked to deal with the effects of nuclear war on the atmosphere for this issue.
Crutzen in his Ph.D. did pioneering work in showing the important effect of nitrogen oxides in regulating the amount of ozone in the stratosphere. His work came just at the height of the debate over supersonic transport (SST) aircraft in the United States. Crutzen, along with Harold Johnston, was the first to draw attention to the possible impact of SSTs on ozone due to the nitrogen oxides in their exhaust[9-10]. So from an early stage Crutzen was attuned to the sensitivity of natural systems to human impacts.
A later development in the SST debate was comparison of the effects of SST exhausts on ozone with the effects of nuclear explosions, which also produce nitrogen oxides. Ironically, the first studies of the effects of the atmospheric nuclear explosions on ozone were done in the early 1970s to show that SSTs would not affect ozone significantly. The debate over the effects of past nuclear tests on ozone continued for a couple of years before a few researchers pointed out that a full-scale nuclear war could have catastrophic effects on ozone. This led to a study in 1975 by the US National Academy of Sciences on the long-term effects of nuclear weapons.
In 1981 journalist Jonathan Schell wrote a series of articles in the New Yorker arguing that nuclear war could cause extinction of human life, principally through destruction of stratospheric ozone. Schell's articles, made into a book, were inspired by the burgeoning peace movement and in turn were widely taken up by it. Yet by the time he made his argument, the basis for massive ozone destruction by nuclear weapons had largely evaporated.
This is what Crutzen and his collaborator John Birks found in 1982 as they ran their computer models dealing with stratospheric ozone to determine the effects of a nuclear war. Because the large multi-megatonne nuclear bombs deployed in the 1950s were being replaced by larger numbers of smaller warheads, not as much nitrogen oxides would be lofted far up into the stratosphere. Crutzen and Birks' model did not predict a significant reduction in stratospheric ozone using the Ambio reference scenario.
Crutzen and Birks each over the years had examined a wide range of physical and chemical processes which could affect the dynamics of the atmosphere. As they dealt with the problem of the effects of nuclear war on the atmosphere, they happened to think about the smoke released by fires caused by nuclear attacks. Quick calculations showed that the smoke could absorb a large fraction of sunlight, leading to 'twilight at noon'. In short order they included this in their now-famous paper for Ambio.
The Crutzen-Birks paper was immediately taken up as heralding an important and hitherto unrecognised effect of nuclear war. The next step, to nuclear winter, was taken by Richard Turco, Owen Toon, Thomas Ackerman, James Pollack and Carl Sagan, the so-called TTAPS group. Taking the Crutzen-Birks idea that smoke and dust from a nuclear war would block out sunlight, they calculated that this would lead to massive cooling at the earth's surface: sunlight in the visual region could not penetrate the smoke, but much infrared radiation from the earth's surface could still escape[17-18].
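The physical intuition here -- smoke blocks incoming visible sunlight while infrared radiation from the surface continues to escape -- can be illustrated with a zero-dimensional energy-balance sketch. This is a toy Stefan-Boltzmann calculation, not the TTAPS model: it ignores the greenhouse effect, ocean heat storage and atmospheric dynamics, so it shows only the direction of the effect, not realistic temperatures.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
ALBEDO = 0.3      # planetary albedo

def equilibrium_temp_k(sunlight_fraction):
    """Effective radiating temperature when only a given fraction of
    sunlight reaches the surface (smoke absorbs the rest), from the
    balance sigma * T**4 = f * S0 * (1 - albedo) / 4."""
    absorbed = sunlight_fraction * S0 * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25

clear = equilibrium_temp_k(1.0)   # ~255 K, the familiar no-greenhouse value
smoky = equilibrium_temp_k(0.1)   # dense smoke: equilibrium is much colder
assert smoky < clear
```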
The nuclear winter idea was spread to a highly receptive audience, including the peace movement, the mass media and much of the general population. Research groups around the world have examined the issue in greater depth.
Previous military research had not pursued the possibility, at least for wider evaluation. Arguably, the military has been more interested in the immediate effects of nuclear war, since those are the ones of significance for fighting wars and providing an obvious deterrent. In addition, military scientists are not as free to report their results in open forums. Edward Teller refers to studies in the 1960s of the climatic effects of dust raised by nuclear explosions done at the Lawrence Livermore National Laboratory, a nuclear weapons design laboratory. But these studies were not perceived or promoted as uncovering an area potentially crucial for nuclear policy-making.
Turning now to the actual research: does the science of nuclear winter embody in any way assumptions about politics? The original TTAPS paper and accompanying Ehrlich et al. paper illustrate the way this can occur. I argue here that these papers make a series of assumptions which emphasise the worst case for the effects of nuclear war.
(1) Targeting. The TTAPS paper uses a baseline case of 5000 megatonnes (MT), supplemented by a wide range of other scenarios which also lead to nuclear winter effects. Though in general terms some of the scenarios appear reasonable, no detailed strategic rationale is offered for any of them. A cynic might say that the key characteristic of the scenarios is that they produce sufficient smoke or dust to produce nuclear winter. This is illustrated by the 100MT scenario, which is often misinterpreted as 100 bombs on 100 cities. Actually it involves 1000 bombs and the burning of a vast number of cities each of just the right size. It is easy to misinterpret the results for this scenario as showing that any 100MT war is enough to trigger nuclear winter, whereas any militarily realistic targeting of 100MT would cause relatively few cities to burn and probably produce little cooling according to present models.
If the scenarios had been designed to produce a spread of soot injections rather than a fairly constant soot injection for different megatonnages, the result of nuclear winter would have seemed more sensitive to variations in targeting.
Ehrlich et al. concentrate on a 10,000MT scenario which generates more severe environmental effects than either the Ambio scenario or the TTAPS baseline case. They state that they take the TTAPS 10,000MT 'severe' case as their reference case because of policy implications. (According to Michael MacCracken, TTAPS in their draft paper presented a 10,000MT baseline. After receiving comments, they corrected an error of a factor of 2 in the smoke density and also reset the baseline to 5000MT. These two changes counteracted each other, leaving the baseline consequences unchanged. Ehrlich et al. considered a maximum but to them plausible scenario which, after the factor of 2 adjustment, turned out to be the TTAPS 10,000MT scenario.)
(2) The threshold. The TTAPS paper suggests the existence of a sharp threshold, above which severe nuclear winter effects are 'triggered'. The 100MT scenario is identified as above the threshold. The idea of a sharp threshold is convenient for policy purposes, since one can argue that arsenals should be reduced below the threshold level, as Sagan has done. Later researchers have discounted or qualified the idea of a sharp threshold[27-28].
(3) One-dimensional model. TTAPS uses a one-dimensional model with annually averaged insolation and temperatures. The model shows dramatic temperature drops over land but little effect over the oceans. The authors comment on the moderating effect of the oceans in the text, but these qualifications have been lost on most readers and commentators who have concentrated on the tables and abstract, where the extreme land results are highlighted. Ehrlich et al. focussed on the land results from TTAPS and applied them over the whole globe in assessing the biological effects of nuclear winter.
(4) Extinction. Ehrlich et al. itemise all sorts of disasters from nuclear war. For example, they raise the issue of decreases in stratospheric ozone and resulting increases in ultraviolet (after the smoke and dust clears), not noting that changes in the size of warheads have made this threat much less serious. They add up a set of hazards to conclude that human extinction may occur, without explaining precisely how everyone could die[30-31].
While listing many dangers from nuclear war, they do not mention factors which might ameliorate the problems. For example, food shortages due to crop failures are highlighted, plus difficulties in transporting stored food to population centres. For the rich countries, there is no mention of changing from a meat diet to a grain diet or of reducing caloric intake, which together would extend food reserves by a large factor. For Third World countries, they emphasise dependence on imports of food from rich countries. They do not mention the exports of food to rich countries, nor the high level of cash cropping for export to industrialised countries, which could be replaced by food crops for local consumption[32-34].
The suggestion that extinction of human life could occur is made without considering any counterexamples. For example, consider Tasmania. As an island in the southern hemisphere, nuclear winter effects would be minimised. It has large hydropower capacity for providing heat and power, and the large sheep population could help tide the modest human population through a failed harvest. Such examples are not addressed by Ehrlich et al.
The possibility of extinction is not even discussed in the text of Ehrlich et al.'s paper. It is only raised in the summary and conclusion.
The combination of these assumptions leads to concentration on worst cases. The selection of results for key diagrams and abstracts makes the drawing of certain policy implications much easier. In other words, the TTAPS and Ehrlich et al. papers are not 'value-neutral' pieces of research, but 'push' certain conclusions on readers through technical assumptions in model construction, selection of evidence and highlighting of results.
One response to these points is that the authors should have been slower to rush into print and more careful in their presentation of results, given that portions of the media are well known for sensationalism. But, as described later, some of the initial researchers were also active in the media promotion of nuclear winter, certainly more so than in issuing qualifications concerning media exaggerations.
The above points have not been lost on critics of nuclear winter. They have homed in on various assumptions and limitations of the research[36-47].
I have devoted more attention here to the assumptions underlying the models of the nuclear winter proponents because in the debate so far they have held the greater scientific status and credibility. The critics too can be assessed as having made assumptions, selected evidence and emphasised results which support their conclusions. For example, some of them have drawn comparisons with volcanic eruptions which have put large amounts of dust into the atmosphere to suggest that nuclear war would be no worse; these comparisons, argue the proponents, have overlooked the differences between soot and volcanic dust in absorbing sunlight. The vigorous responses of the proponents[48-51] provide insights into the ways the critics 'push' their conclusions.
The critics use many of the same techniques as the proponents of nuclear winter in reaching their conclusions. But there is an asymmetry between the two sides in that the critics have not developed their own models. Their usual approach is to offer methodological criticisms and emphasise uncertainties in the existing models. For example, the models have been criticised for not adequately taking account of the coagulation of soot, the raining out of soot and dust, and gaps in soot clouds in the first few weeks after fires.
The differences between proponents and critics can be attributed to differences in assumptions about what it is necessary to prove in the research. TTAPS, Ehrlich et al. and others emphasise worst cases because they assume (and sometimes state) that their task is to show that there is some possibility that these worst cases may actually result. Ehrlich et al. state "decision-makers should be fully apprised of the potential consequences of the scenarios most likely to trigger long-term effects" (p. 1294), namely the worst cases.
The critics, on the other hand, can be interpreted as assuming that it is more appropriate to determine the most likely estimates. They lay the burden of proof on the proponents to demonstrate that nuclear winter will occur with a high degree of certainty; with this assumption, their methodological criticisms and emphasis on uncertainties are natural responses.
As scientific research and the controversy have proceeded[54-61], the distinction between proponents and critics, never an exhaustive or clearcut categorisation, has become more blurred. A variety of effects have been studied; some increase and some decrease the likelihood of a severe nuclear winter. Starley Thompson and Stephen Schneider, important figures in the early work, have come to the conclusion that the likely effects are better described as 'nuclear autumn', but have resisted the interpretation that this means a rejection of the basic points made about nuclear winter[63-64]. The effect of politics on nuclear winter science becomes harder to assess as the models become more complex and the debate becomes more differentiated[65-66]. What is important here is the basic process involved rather than the intricate details, and the process is best illustrated by the early models and criticisms.
A number of the key figures in the nuclear winter dispute participated in earlier scientific controversies, often concerning the impacts of technological development on the environment. In nearly every case, individual assumptions about the fragility or resilience of ecosystems have remained the same.
Paul Ehrlich is a world-famous ecologist. Over the years he has consistently warned of dangers to ecosystems from various sources. In The Population Bomb and other books he has emphasised dangers to the environment from human activities. In the nuclear winter debate he has taken the same orientation, emphasising worst cases and the possibility of extinction. To a lesser extent, Carl Sagan has commented on a number of environmental issues, emphasising sensitivity to disruption.
Critics of the 'extreme' claims on nuclear winter have included several individuals who have previously attacked prophecies of environmental doom. John Maddox, editor of Nature, who has issued a series of cautionary comments about nuclear winter studies, is a long-time critic of environmental doomsdayism. Edward Teller, who has argued that nuclear winter claims are exaggerated, has been a supporter of a capability to engage in nuclear war-fighting as a method of preventing war. S. Fred Singer, who has made criticisms of nuclear winter studies, earlier did calculations which upset those who claimed that supersonic transports might seriously affect stratospheric ozone. P. Goldsmith, a participant in a research team analysing an effect which would reduce nuclear winter effects, was earlier a member of a team downplaying the environmental effects of Concorde.
That there is continuity in the perspective that an individual has on the world should be neither surprising nor especially worrying. It does not mean that what a scientist has to say is necessarily wrong. But it does indicate that scientists come to scientific problems with various preconceptions, preferred methods of analysis and background concerns which can shape the way they define the problem, select evidence, build models, treat uncertainties and present results. Nuclear winter is an extremely complex area scientifically, laced with major uncertainties, and this allows a freer range of assumptions and interpretations than many other areas. Nuclear winter is also an area which has considerable potential policy implications, and this means that the impact of 'politics' on the development of nuclear winter 'science' is likely to be much more apparent than in other, more esoteric, research fields.
Compared to the subtle and contentious processes by which politics has entered the science of nuclear winter, the processes by which the science of nuclear winter has entered the political or policy domain are open and transparent. Nuclear winter has been used as a political 'resource' or 'tool'. Particular individuals and groups have used claims about nuclear winter to pursue explicitly political agendas. The two main groupings are members or supporters of the peace movement, who have unreservedly taken up nuclear winter to argue for nuclear disarmament, and defenders of existing military policies who have minimised the impact of nuclear winter for policy-making.
The promotion of nuclear winter for public and policy impact reached high peaks even before scientific publication of the theory. The major tool in this promotion has been the mass media, and the key figure at the interface between the researchers and the media has been Carl Sagan, a media personality in his own right.
The promotion has included Sagan's article in Parade (a Sunday newspaper supplement, circulation 30 million), well publicised scientific conferences, press releases and press conferences, meetings with members of Congress, and television appearances. At a minimum, many tens of thousands of dollars have been devoted to public relations about nuclear winter. Activist groups involving scientists have sent large amounts of nuclear winter material to politicians. According to one perspective on 'social problems', the reason nuclear winter is perceived as an important issue is precisely because there is a social movement promoting it as such.
The scientist publicisers of nuclear winter have had much sympathy from members of the media. Without support from journalists and tolerance from proprietors, the massive promotion might not have led to such worldwide coverage. The receptiveness of the media to nuclear winter can be understood at more than one level.
Most directly, nuclear winter is a good story. Doom and destruction are staples of media coverage. The more extreme claims of freezing, darkness and extinction have received much more coverage than cautionary comments about the limitations of the models.
More than this, the great strength of the peace movement in the 1980s has meant that peace concerns are much more acceptable. With a large fraction of the US public supporting a freeze on nuclear arsenals, reporting nuclear winter is not seen as stepping outside the bounds of public opinion.
A comparison with the issue of the effects of nuclear war on ozone is instructive. When this came to the fore in the middle 1970s, it received comparatively little media attention. The scientists concerned did not mount a big media operation, partly because the peace movement was in the doldrums and provided little incentive. For the media, the issue of nuclear war was not a hot topic. Ozone depletion from nuclear war only became a major political issue years later with Jonathan Schell's writings which drew on and inspired peace movement activism.
Another possible reason for the receptiveness of the media to nuclear winter issues is a social structural affinity between scientists and journalists. Both groups make their living by dealing with knowledge. Their respective claims to special understandings, access or presentations of knowledge constitute the basis for their claims to occupational status and economic rewards. They are part of what has been called the intellectual class, the professional-managerial class or the New Class[81-84]. This class or stratum can be contrasted with corporate managers and politicians, whose power derives from control over economic assets and policy-making machinery. Some scientists and journalists orient their work to corporations, government and the military, but others use their claims over knowledge to challenge these groups. Members of the New Class are prominent in the peace movement. Nuclear winter is a prime case of a challenge to traditional political elites -- whose power is rooted in established bureaucratic machinery -- by a group of outsiders whose demands are based on claims to special knowledge and expertise.
The group of intellectuals who traditionally have exercised influence over nuclear policy-making are the strategic experts[86-87]. These are mostly insiders with knowledge about arsenals, technical capabilities, targeting plans, crisis decision-making methods and so forth. Their influence depends on claims to special knowledge, much of which is inaccessible to others for reasons of national security. Included here are some elite scientists connected with weapons development.
Nuclear winter represents a major challenge to the role of the strategic experts. Those with expertise in weapons development, war gaming and international politics were suddenly confronted by a group of atmospheric scientists and ecologists some of whom demanded, on the basis of their special expertise, that certain policy measures be adopted. The nuclear winter scientists have developed their scenarios and drawn their conclusions with little input from nuclear strategists, yet some of the nuclear winter scientists make dramatic demands for policy changes on the basis of their own expertise. Nuclear winter as science thus forms the basis for a major political challenge to the normal basis for strategic policy-making.
The basic implication for policy, as seen by a number of nuclear winter scientists, is towards nuclear disarmament. An extreme nuclear winter implies that more people will die in non-combatant countries, mainly from starvation, than in combatant countries from the direct effects of nuclear attacks. If human extinction is a possibility then, it is argued, nuclear war is unthinkable. Even short of extinction, nuclear war becomes strategically counterproductive, since the aggressor country may suffer from nuclear winter as seriously as the victim of the attacks.
While most scientists have avoided extensive involvement in policy issues, their work has undergirded the platform for a few active scientists. Carl Sagan has argued for 'deep cuts' in nuclear arsenals, to reduce them below the threshold for nuclear winter. Barrie Pittock, the most prominent promoter of nuclear winter in Australia, has argued against Australia's nuclear alliance with the United States. Some Soviet nuclear winter scientists who are close to Gorbachev seem to have used nuclear winter arguments to influence Soviet disarmament proposals. These Soviet scientists seem to have emphasised the worst effects of nuclear war even more than Western scientists.
In spite of all the attempts to affect policy, the influence of nuclear winter has been less than many of its publicisers hoped. What has happened, for the most part, is that nuclear winter has been variously interpreted in ways which provide the least threat to prevailing beliefs and practices.
Most members of peace movements, and indeed the general public, have long believed that nuclear war means the death of most or all people on earth[93-96]. If this was true before, nuclear winter is likely to only affirm the concern of peace activists or the apathy and hopelessness of many other people. On the other hand, when information about nuclear winter is linked to messages of hope, as for example in workshops by Joanna Macy, then this can lead to greater peace movement activism. Arguably, though, this is due less to information about nuclear winter than to contact with activists who show by example what can be done.
On the other hand, governments and militaries of the nuclear weapons powers have only grudgingly acknowledged nuclear winter and for the most part have denied that it has any major significance for policy. The US Department of Defense found that nuclear winter was a serious consideration, but also that it affirmed the necessity to avoid nuclear war by maintaining military strength. The Australian government has used the rhetoric of nuclear winter to support its military alliance with the US, saying that since everyone would be affected by nuclear winter, there was nothing for Australia to gain by removing US military bases from the country and reducing the likelihood of nuclear attack.
Some Third World governments have used nuclear winter to argue for nuclear disarmament by the nuclear powers. But they were making these arguments well before nuclear winter appeared on the scene, and indeed this demand is written into the Nuclear Non-Proliferation Treaty dating from the 1960s.
Nuclear winter on its own does not automatically lead to certain policy implications. To draw a policy implication, other assumptions or values are involved. Carl Sagan assumes that the risk of global catastrophe or even extinction must be removed, and so argues for deep cuts (an "apparently inescapable conclusion") since civilian and military leaders of nuclear states cannot be trusted with doomsday weapons, nor can technical controls against nuclear war be guaranteed. The promilitary establishment argument is based on the opposite assumption that leaders will continue to be responsible in their use of nuclear weapons in the same way they have used them to maintain peace in Europe since 1945.
The uncertainties associated with the science of nuclear winter aid the drawing of divergent political conclusions. Some will emphasise the possibility of disaster and draw the implication that action is necessary to avoid even a small risk of major catastrophe achieved by one particular route. Others can emphasise the need to avoid rash action, which might also result in catastrophe, until uncertainties are clarified. Just as uncertainties facilitate the subtle building of political assumptions into scientific work, so uncertainties facilitate a drawing of divergent policy conclusions from scientific results.
This is not to say that nuclear winter is a neutral concept which can be used equally easily to justify any policy. In practice, because everyone professes to be opposed to mass killing, nuclear winter is easier to use to attack present nuclear policies than to defend them. This explains why nuclear winter scientists and peace movement activists have highlighted and promoted the possible dangers of nuclear winter while defenders of nuclear establishments have emphasised uncertainties and reservations.
Is science really separate from politics? The answer to this question is itself the subject of a continual struggle to define science and define politics. Science, if it is seen as pure and unadulterated by nonscientific factors, usually takes on a greater status. In this situation, those who have the dominant claim over scientific authority prefer to portray science as not political. Scientists who are challenging scientific orthodoxy tend to use scientific arguments, presenting their own views as apolitical. In both cases, especially when scientific results are overtly being used in a political fashion, those on the other side are alleged to be political. To even say this, to suggest that political factors are entering what is presented as a scientific debate, is to discredit the other side.
The power to be derived by using science to justify political conclusions is greatest when the science is seen as being quite separate from politics. Consequently, the struggle to privilege science as being above politics, which involves a constant redefinition as to what is true, unbiased science, is of fundamental importance in scientific-political disputes.
In the case of nuclear winter, the proponents have held the scientific high ground. They have had the weight of numerous eminent recommendations, prestigious journal publications and scientific committee endorsements. Therefore they have everything to gain by portraying their favoured results as strictly scientific and as aloof from political squabbling. Most of the critics, on the other hand, have not been in the position of presenting alternative model results but have had to resort to raising methodological criticisms and pointing to outstanding uncertainties. By and large they have argued within the scientific context. But being in a much weaker position, they have more often raised overtly political criticisms.
There are many routine processes by which science is socially constructed as being at a distance from politics. One is the alleged separation between motivation for doing research and the results of the work. As noted earlier, the 1980s peace movement provided the context for the discovery and promotion of nuclear winter. This background is normally assumed by all concerned not to affect the validity of the knowledge produced. This is the disjunction between the contexts of discovery and justification, a central feature of Popperian philosophy of science. The relativist sociology of scientific knowledge, used in this paper, rejects this disjunction, noting that a selection of what problems to study and what questions to ask to some extent influences the sort of answers obtained.
The routine separation between motivation and product is inherent in the normal way that scientific papers are written up, which avoids mention of real motivations, preconceptions, failures and reconstructions. Furthermore, in technical journals, explicit treatment of policy issues is frowned upon. The image is maintained that what is being presented is objective scientific knowledge, unsullied by the political context. Although there are occasional statements against war found at the end of nuclear winter papers[106-107], the usual stance of nuclear winter scientists is to try "to refrain from political advocacy".
The way in which the categories of science and politics can be socially constructed as different and separate was most dramatically shown in the Conference on the Long-Term Worldwide Biological Consequences of Nuclear War, held in Washington, D.C. beginning 31 October 1983. This famous conference provided a major media launch for nuclear winter and was designed to reach "educators, scientists, business executives, public officials, and other citizen leaders and representatives of other nations, as well as environmentalists". It happened to start the day after Sagan's Parade article appeared; the TTAPS and Ehrlich et al. Science papers were published a couple of months later. The highlight of the conference was a television link with scientists in Moscow.
The conference was officially set up to discuss only 'science' and to eschew any discussion of 'policy'. The boundary between science and policy was often confronted at the conference, especially in the question and answer periods. For example, when Ralph Nader asked whether a successful nuclear first strike would invite suicide for the aggressor and thus be self-deterring, Carl Sagan answered
"I think I have to decide, Ralph, forgive me, that this is a policy area. I don't want to discuss it at length; but I think that to take out all major fixed strategic targets reliably, you have to exceed the nuclear winter threshold.
"MR. NADER: I think you are drawing too fine a line. My question basically was in terms of the ricochet effect. To put it more simply, what would be the threshold of a ricochet effect on the first launch, first-strike period?
"DR. SAGAN: We have an excellent chance that if Nation A attacks Nation B with an effective first strike, counterforce only, then Nation A has thereby committed suicide, even if Nation B has not lifted a finger to retaliate."
The difficulty of separating 'science' from 'policy' is apparent here. Presumably Sagan decided that to talk briefly about 'Nation A' and 'Nation B' was not policy. Would talking at greater length about this issue or referring to the United States and the Soviet Union be entering policy? The difficulty is that to provide any set of facts in the context of a policy issue can be interpreted as entering policy, if the selection of those facts is in any way affected by their possible policy relevance. It is as if the scientists are taking people a long distance along a particular road (selected on the basis of technical assumptions, etc.), in a situation where more than one road is available due to great uncertainty, and then saying "We can't take you any further because that would be policy".
The same process is found in the presentation of the reports of the Scientific Committee on Problems of the Environment (SCOPE). These impressive reports avoid openly spelling out the policy implications of their findings, yet the Chairman of SCOPE, Sir Frederick Warner, is quoted as saying "anybody who thinks they can read this and not draw policy conclusions is making a big mistake".
The key conclusion of the reports is that more people in non-combatant countries may die from a nuclear war than in combatant countries. Can it be concluded from this that, therefore, nuclear weapons should not be used? This seems to be the implication drawn by some of the reports' authors, speaking in the policy mode. But others, such as the US Department of Defense, might reach a different conclusion[115-116].
While it is easy to criticise the claim that science and policy are kept separate, the key point is that the claim is made. It can be seen as a way to maximise the credibility of the scientists for policy purposes. Scientists claim expertise in scientific areas and claim exclusive rights to judge the quality of the science. As long as they are perceived to stay in the realm of science, they are hard to attack. But to formally enter a policy debate would be to lose credibility, since the scientists have no formal training, positions, long experience or special access to inside knowledge in this area. Furthermore, values commonly play a more explicit role in policy disputes, and it would be hard to obtain 'scientific consensus' on questions of values. For the scientists to outpoint the strategic experts on the latter's home ground would be difficult. The most effective method is to launch a foray into policy while claiming to ground the arguments in the realm of science. The Steering Committee for the 31 October 1983 conference "felt that the inclusion of other considerations such as nuclear strategy and economic, social, and political implications would detract from the central scientific message". Of course, the 'message' was not 'pure science' but rather policy implications embodied in scientific results presented in a particular social context.
The distinction between science and policy is treated by scientists as one between fact and value, the traditional distinction in the positivist philosophy of science. Nuclear winter scientists present what they are doing as the generation of facts, while the policy side is to do with value judgements. Indeed, once this distinction is presupposed, policy is referred to as involving the application of science.
In a scientific-political debate, the side with more orthodox scientific credibility usually prefers to define the debate as a scientific one and to exclude overt discussion of political issues. Because science has an image of objectivity and neutrality, the side which has 'scientific' backing has little to gain by raising the political dimension. By the same token, some of those on the side with lesser scientific credibility may see an advantage in pointing to political factors involved with the orthodox science, while at the same time presenting themselves as scientific. Typically, both sides maintain the science-politics dichotomy in regard to their own claims and allege the interference of politics in science for their opponents. But those with more scientific credibility are less likely to provide a comprehensive political discussion since they are more able to simply dismiss their opponents as 'unscientific'.
This pattern can be seen in a variety of disputes. In the debate over nuclear power, there were few scientist critics, at least in early years. The proponents claimed sole authority on nuclear issues, and dismissed critics as incompetents and malcontents[119-120]. The anti-nuclear movement was seen as lacking any technical credibility. Analyses growing out of the movement challenged the 'nuclear establishment' on technical grounds but also provided a critique of the role of vested interests in promoting nuclear power.
Scientist critics of fluoridation and of pesticides have also come under fierce attack. In defending the orthodox position, it is most important to undermine the scientific credibility of critics; other critics can easily be dismissed as technically uninformed.
In the nuclear winter controversy, the best example of this dynamic is seen in the response to criticisms by Russell Seitz. Seitz is an Associate of the Harvard University Center for International Affairs where earlier he was a Visiting Scholar. While he has presented technical criticisms of nuclear winter on several occasions[123-124], he really raised the hackles of nuclear winter scientists with an article in The National Interest entitled 'In from the cold: "nuclear winter" melts down'. In this article he not only criticises the scientific basis for nuclear winter, but also systematically argues that the whole nuclear winter argument was politically motivated: "a politicization of science sufficient to result in the advertising of mere conjecture as hard fact".
Seitz points out the role of the peace movement in triggering consideration of nuclear winter. He argues that the TTAPS model is filled with assumptions which give results which the researchers wanted to achieve: "worst-case analysis run amok". More damagingly, he claims that the TTAPS results sidestepped peer review. To counter Sagan's testimonials from scientists in support of the TTAPS study, Seitz quotes comments about nuclear winter from a number of prominent scientists (see appendix). These quotes are powerful because they appear to puncture the usual image of nuclear winter, presented by Sagan, Ehrlich and others as being a consensus picture of numerous researchers from many countries.
After a discussion of the media promotion of nuclear winter, Seitz turns to the substantive scientific criticisms, such as Schneider and Thompson's reevaluation that the effect would better be called 'nuclear autumn'. Seitz also offers his own technical criticisms.
Seitz's article is highly provocative with its mix of science and politics and its strong claims. He suggests that nuclear winter is virtually a conspiracy by supporters of Western peace movements: "What is being advertised is not science but a pernicious fantasy that strikes at the very foundations of crisis management, one that attempts to transform the Alliance doctrine of flexible response into a dangerous vision." Seitz favours maintaining US military strength against the Soviet threat, as does The National Interest where his article was published.
If Seitz's claims had been restricted to The National Interest, the proponents of nuclear winter might have ignored them. But just as the idea of nuclear winter struck a resonant chord among the peace movement, Seitz's criticisms found a receptive audience in Conservative circles and received a major airing with publication of a version of his article in the Wall Street Journal.
In principle, there are a number of ways in which a reply to Seitz could have been couched. One is to counter Seitz's scientific claims. More delicate is the discussion of the political motivations behind nuclear winter research. As a piece of political analysis, Seitz's approach could be attacked as being too conspiratorial or as not being grounded in an explicitly acknowledged body of social theory. But to even raise the issue of political factors influencing nuclear winter research would be damaging to the scientific objectivity claimed for the work. It is therefore not surprising that the nuclear winter proponents have not presented their own version of the interplay between science and politics.
The response of TTAPS to Seitz is revealing. Turco in an unpublished letter to The National Interest and TTAPS in a letter to the Wall Street Journal defended the peer review of nuclear winter and reaffirmed their own scientific work, especially by referring to other studies which have confirmed their original claims. Beyond this, the distinctive part of their reply is a vicious attack on Seitz himself. Seitz's claim to be a scientist is challenged; he is alleged by Turco to be "actually a stock investment consultant (at R. J. Edwards, Inc.) now dabbling in atmospheric physics", who "is not the principal author of a single peer-reviewed scientific work in any technical field". TTAPS contrast this with the impressive credentials of nuclear winter scientists: "the American Physical Society (the primary association of physicists in the U.S.) granted its Leo Szilard Award for Physics in the Public Interest to Paul Crutzen, John Birks and the undersigned team, known as TTAPS, for their research on the nuclear winter theory."
(Turco's characterisation of Seitz appears inaccurate at least on some points. At the time, Seitz had a faculty appointment at Harvard University; previously he had worked at R. J. Edwards, Inc. not as a "stock investment consultant" but as Director of Technology Assessment. Seitz has been principal author of peer-reviewed scientific work.)
The TTAPS response is an attempt to deny any credibility to Seitz as a scientist or a commentator, on the grounds that he lacks scientific experience and has made errors in his scientific comments. The response avoids any substantive comment in regard to Seitz's political analysis, except to deny it and reaffirm nuclear winter science's separation from political factors. There is no suggestion that there might be a germ of truth in Seitz's political critique.
The TTAPS response thus is one of maintaining the distinction between science and politics, at least for those scientists with credibility who have developed the nuclear winter theory. Seitz, who challenged the science-politics distinction, is attacked not only for getting the facts wrong but also for not being a real scientist. The importance of maintaining nuclear winter science as above politics is suggested by the vehemence of the personal attack on Seitz.
The nuclear winter controversy, like many others, is an interaction between science and politics in which there is an ongoing attempt to define distinct spheres for science and politics and at the same time to use science, seen as something above politics, to intervene in political debates. The proponents of nuclear winter, so far having the greatest claims to scientific credibility, have the greatest interest in portraying their science as untainted by politics. They implicitly promote the idea that on the one hand they can carry out objective science unaffected by political agendas and on the other hand that some of them can legitimately enter policy arenas, using their scientific credibility as a key resource.
The critics of nuclear winter, lacking the same degree of scientific credibility, have somewhat different options. Those critics with status as scientists mostly prefer to argue on scientific grounds, focussing on uncertainties and methodological shortcomings in nuclear winter research. This is a form of loyal opposition, since the key distinction between science and politics is not challenged (though some complaints about the public promotion of nuclear winter can be heard from this group[135-136]). A few other critics, notably Russell Seitz[137-138], while not neglecting scientific criticisms, have directly argued that political agendas lie behind nuclear winter research. If such claims are given any public circulation, they are very threatening to nuclear winter researchers, who have counterattacked by disparaging the quality of Seitz's evidence and credentials. Scientist critics of nuclear winter have not leapt to Seitz's defence.
Just because 'politics' may be involved with nuclear winter research does not automatically mean that the research is scientifically wrong, tainted or inappropriate for use in policy-making. A straightforward response is to be aware of the political context of the research when evaluating it. For example, if the peace movement has provided the indirect or direct stimulation for doing the research, this may suggest that other social movements (or other strands of the peace movement) might have provided the incentive for different research or different emphases in nuclear winter research. If the background and experiences of key nuclear winter researchers lead them towards certain presuppositions in their model-building, such as an emphasis on worst cases, then this is something to be aware of, not necessarily something to be condemned. If nuclear winter research is defended on the basis of verifications (different scientists finding the same results from similar models) rather than attempted falsifications because verifications are better suited to promoting the theory, the implications of this for policy-making should be discussed.
Arguably, all scientific research is shaped by its social context especially research funding and potential applications -- which influences what research is considered worth doing, what conceptual models are available and favoured, what results are considered significant and in what language and forums findings are presented. Nuclear winter may have been subject to these processes, but certainly no more so than decades of research into nuclear weapons where the agenda for science and its applications has been overtly determined by military and political considerations.
Unfortunately, careful consideration of the social context of research is seldom possible because of the heavy investments by scientists and the institutions funding science in portraying science as separate from politics. Scientists engage in a whole set of practices which serve to define science as precisely that which is independent of social factors. The value of science as a legitimator of particular knowledge claims would be undermined, at least in the short term, if political influences were openly discussed. Neither side in a dispute such as that over nuclear winter is likely to discuss its own political dimension. Scientists who acknowledge being influenced in their research work by political influences are opening themselves to the charge of being 'unscientific'.
Nuclear winter can be seen as simply one more meta-level for arguing about military policy. There are quite a number of direct discussions about the fundamentals of military policy, but often these become transformed into other domains. When antiwar activists damage military equipment, this is a direct confrontation. When they are brought before the court and their reasons for their actions are ruled out of order, the confrontation over military policy is turned into a legal issue. Similarly, arms control negotiations are less about the real issues of the arms race and more about managing and continuing the arms race in another forum.
If debates over nuclear winter are, in part, another way of debating military policy, the important question is, what assumptions are built into this meta-debate? One important assumption is that the greater the consequences of nuclear war can be demonstrated to be, the stronger is the argument for nuclear disarmament. Sagan's argument for deep cuts is premised on this assumption; it is also manifest in the tendency of military experts to downplay the effects of nuclear war. Yet it is easy to question the assumption and argue, for example, that the blast, heat and fallout from nuclear war are more than enough to justify the most strenuous efforts to avoid it. In some ways the controversy over the size of the effects of nuclear war is a diversion, because it is only linked to the issue of what to do about the problem of nuclear war by this dubious assumption. The key differences concerning political action are not confronted directly but only in refracted form in a 'scientific' debate.
I have argued that a key social dynamic in the nuclear winter debate is the challenge to strategic experts by newcomers to military policy, namely a small subset of atmospheric researchers and ecologists. The assumption behind this confrontation is that experts -- whether strategic experts or scientific experts -- have a key role in the decision-making. The dispute is over which group of experts has the best or most relevant expertise, not the role of expertise itself. Neither group voluntarily exposes the weak points in its claims to expertise.
The nuclear winter researchers, although strongly influenced by the peace movement and its concerns, have not had the effect of turning the debate over to the public. There have been quite a number of popularisations of nuclear winter, often written by scientists, which aim to inform members of the public about the research and its implications[141-143]. These popularisations, like the research itself, spell out a clear demarcation between science and politics. The role of the public is to digest the science and its implications for action. There has been little attempt by popularisers to offer a critical understanding of the social and political dynamics of doing science.
One of the prime aims of the peace movement has been to demystify the process of military decision-making and to uncover and challenge the assumptions associated with claims about the national interest, foreign threats, 'defence' and so forth. Among the experts who have been exposed to scrutiny are the theorists of nuclear war-fighting, who tend to underestimate or submerge the massive human cost of even their lesser scenarios. By revealing the assumptions and human values underlying the work of the strategists, peace activists and researchers have reclaimed a role for public concern and participation[144-146]. If the experts and policy-makers are not totally objective and concerned about some monolithic social welfare, then decisions should not be left in their hands alone.
Nuclear winter promised to be a tool for peace activists, and many welcomed it with open arms. But in uncritically accepting the science behind it, they allowed the agenda to be set by another group of experts, the nuclear winter theorists. Ironically, it has been a small number of critics of nuclear winter, including a number of defenders of current nuclear weapons policies against peace movement challenges, who in trying to expose the political agendas associated with the theory are the most analogous to the researchers who have challenged the military establishment.
The other assumption underlying the nuclear winter debate is that the scientific status of nuclear winter makes a big difference to policy. Yet this is not borne out by responses either from militaries or peace movements. In neither case is policy or action derived directly from a rational analysis of 'the facts', whether these are military threats or threats to human survival. Arguably, the role of the military in society is anchored more deeply than just the requirement to defend against enemies; involved is the protection and survival of the state and associated economic, organisational and political structures[147-149]. Likewise, peace movements have been triggered not just by awareness of the dangers or futility of war, but by social stresses, moral concern and organisational imperatives.
In this context, nuclear winter is unlikely to be a major driving force in struggles over military policy, but rather becomes a tool to be used or defended against by competing groups. But it is also wrong to treat nuclear winter as purely a social construct. Just because nuclear winter has been a political tool does not mean that the cold and the dark will be any less real, if and when they occur.
Russell Seitz in his article in The National Interest quoted a number of prominent scientists as expressing critical comments about nuclear winter models and results. The use of some of these comments has been disputed by proponents of nuclear winter. In an attempt to clarify the status of the quotes, I wrote to the individuals quoted by Seitz, referred to the specific quote and asked "Is this quote correct? Does Seitz's use of the quote give an accurate reflection of your past and present views?"
Freeman Dyson, a physicist at Princeton University, was quoted by Seitz as saying about the TTAPS study, "It's an absolutely atrocious piece of science but I quite despair of setting the public record straight. I think I'm going to chicken out on this one: Who wants to be accused of being in favor of nuclear war?" Dyson in May 1987 responded "No" to each of my questions, adding "I don't believe I ever said what Russell Seitz said I said, but I can't prove it."
Richard Feynman, a physicist at the California Institute of Technology, was quoted as saying about TTAPS, "You know, I really don't think these guys know what they're talking about". Feynman on 1 July 1987 replied to me, "Regarding the quote, I'm sorry, but I really don't remember if it's exactly accurate or not."
Jonathan Katz, a physicist at Washington University in St Louis, was quoted as saying about nuclear winter, after a journalist's caution against four-letter words, "'Humbug' is six." Katz on 22 January 1988 wrote to me that Seitz's quotations attributed to him are correct.
Kosta Tsipis of the Massachusetts Institute of Technology, according to Seitz, quoted a Soviet scientist as saying "You guys are fools. You can't use mathematical models like these to model perturbed states of the atmosphere. You're playing with toys." TTAPS in their November 1986 letter to the Wall Street Journal said "A negative comment on mathematical modeling allegedly uttered by a 'Soviet scientist' (indisputably V. V. Aleksandrov of the Moscow-based Climate Modeling Center, the only Soviet at the April 1983 Cambridge review meeting referred to by Seitz), and prominently displayed in a box by the WSJ, was never made. The transcript of the meeting shows no such remark, and Kosta Tsipis of MIT, whom Seitz claims as his source, flatly denies the whole thing."
Tsipis in a memo of 5 January 1987, entitled 'Regarding: Seitz vs. Sagan', gives his account: "When Russell Seitz came to talk to me about Nuclear Winter, I recalled that in the AAAS Meeting (in Cambridge Mass.), a Russian scientist got up and said that we cannot use climate models as if the nuclear war itself would not disturb the atmosphere. The discussion at that point had evolved around the 1-D [one-dimensional] model. Mr. Seitz mentioned this in his Wall Street Journal article, but in a context that implied that the Soviet scientist was referring to all 3-D models, quite generally. Subsequently, I had a telephone call from Carl Sagan who wanted to know what I had said to Seitz. During our conversation, two things became clear: a) that Seitz had confused my statement to mean that it referred to a 3-D model; b) that it would be very difficult to explain to the readers of the W.S.J. the distinction. For this latter reason, we agreed that Carl should simplify his response by saying that I deny discussing the 3-D model with Seitz. In Carl's letter-response in the W.S.J., this statement was further simplified."
Seitz later wrote to me (30 December 1987) saying that Tsipis' original remarks were recorded, that the clear context was 1-D models, and that he is not aware of any confusion between 1-D and 3-D models in the text of his Wall Street Journal article.
Victor Weisskopf, a physicist at MIT, was quoted by Seitz as saying in early 1984, "Ah! Nuclear winter! The science is terrible, but -- perhaps the psychology is good." TTAPS in their November 1986 letter to the Wall Street Journal comment about Seitz that "derogatory quotes are attributed to individuals who forcefully deny them (e.g., Victor Weisskopf)." Weisskopf wrote to me on 10 June 1987 about the quoted comment, "I do not remember having made such a remark. I may have said the science is unreliable, but the psychology is good. I do believe that nuclear winter is not yet proved, but is made rather plausible and therefore the word unreliable is the right characterization. This was my view at the time of the interview and is at present."
One other scientist quoted by Seitz in the same section of his paper, Michael McElroy of Harvard University, did not respond to my letter.
There are at least two lessons to be learned from this material. First, in a scientific area which has important political implications, even off-the-cuff comments can take on a great significance. In this case, the comments are by prominent scientists who are not active researchers in the field in question. Both Seitz and, in response, TTAPS treat the quotes as significant. In disputes over science in public arenas, the credentials of scientists (such as being a Nobel Prize winner) are a key resource in making claims and counterclaims.
Second, the presentation and interpretation of the comments by Seitz and TTAPS, in simplifying the comments or the context in which they were made, tend to reflect the respective cases they are trying to make. Just as the construction and results of mathematical models can reflect the presuppositions of scientists, so can the meaning and significance of 'mere quotes'.
In response to my queries, valuable comments about the nuclear winter debate were provided by Curt Covey, Paul Crutzen, Paul Ehrlich, Michael MacCracken, John Maddox, Barrie Pittock, Stephen Schneider, Russell Seitz and Richard Turco. Valuable comments in response to an earlier draft of this paper were provided by Ted Bryant, Michael MacCracken, Clyde Manwell, David Mercer, Barrie Pittock, Russell Seitz, Richard Turco and an anonymous referee.
1. Samuel Glasstone and Philip J. Dolan (editors), The Effects of Nuclear Weapons (Washington, D.C.: United States Department of Defense, 1977).
2. Long-Term Worldwide Effects of Multiple Nuclear-Weapons Detonations (Washington, D.C.: National Academy of Sciences, 1975).
3. Steven Lukes, Power: A Radical View (London: Macmillan, 1974).
4. Barry Barnes, Scientific Knowledge and Sociological Theory (London: Routledge and Kegan Paul, 1974).
5. Barry Barnes, T S Kuhn and Social Science (London: Macmillan, 1982).
6. David Bloor, Knowledge and Social Imagery (London: Routledge and Kegan Paul, 1976).
7. Michael Mulkay, Science and the Sociology of Knowledge (London: Allen and Unwin, 1979).
8. Paul J. Crutzen, "The influence of nitrogen oxides on the atmospheric ozone content", Quarterly Journal of the Royal Meteorological Society, 96, 1970, pages 320-325.
9. Paul J. Crutzen, "SST's -- a threat to the earth's ozone shield", Ambio, 1, 1972, pages 41-51.
10. Harold S. Johnston, "Reduction of stratospheric ozone by nitrogen oxide catalysts from supersonic transport exhaust", Science, 173, 6 August 1971, pages 517-522.
11. H. M. Foley and M. A. Ruderman, "Stratospheric NO production from past nuclear explosions", Journal of Geophysical Research, 78, 20 July 1973, pages 4441-4450.
12. Harold S. Johnston, Gary Whitten and John Birks, "Effect of nuclear explosions on stratospheric nitric oxide and ozone", Journal of Geophysical Research, 78, 20 September 1973, pages 6107-6135.
13. John Hampson, "Photochemical war on the atmosphere", Nature, 250, 19 July 1974, pages 189-191; Michael C. MacCracken and Julius S. Chang, A Preliminary Study of the Potential Chemical and Climatic Effects of Atmospheric Nuclear Explosions (Livermore, California: Lawrence Livermore Laboratory, document number UCRL-51653, 25 April 1975).
14. See reference 2.
15. Jonathan Schell, The Fate of the Earth (New York: Knopf, 1982).
16. Paul J. Crutzen and John W. Birks, "The atmosphere after a nuclear war: twilight at noon", Ambio, 11 (2-3), 1982, pages 114-125; reprinted in Jeanne Peterson (editor), Nuclear War: The Aftermath (Oxford: Pergamon, 1983), pages 73-96.
17. R. P. Turco, O. B. Toon, T. P. Ackerman, J. B. Pollack and Carl Sagan, "Nuclear winter: global consequences of multiple nuclear explosions", Science, 222, 23 December 1983, pages 1283-1292; reprinted in Paul R. Ehrlich, Carl Sagan, Donald Kennedy and Walter Orr Roberts, The Nuclear Winter: The Cold and the Dark (London: Sidgwick and Jackson, 1985), pages 161-188.
18. Richard P. Turco, Owen B. Toon, Thomas P. Ackerman, James B. Pollack and Carl Sagan, "The climatic effects of nuclear war", Scientific American, 251, August 1984, pages 23-33.
19. Edward Teller, "Widespread after-effects of nuclear war", Nature, 310, 23 August 1984, pages 621-624 (see page 622).
20. Turco et al., see reference 17.
21. Paul R. Ehrlich and 19 others, "Long-term biological consequences of nuclear war", Science, 222, 23 December 1983, pages 1293-1300; reprinted in Paul R. Ehrlich et al., see reference 17, pages 189-210.
22. Michael C. MacCracken, "Global atmospheric effects of nuclear war", Energy and Technology Review (Lawrence Livermore National Laboratory), May 1985, pages 10-35 (see page 15).
23. Ambio Advisory Group, "Reference scenario: how a nuclear war might be fought", in Peterson, see reference 16, pages 37-48.
24. Ehrlich et al., see reference 21, page 1294.
25. Michael MacCracken, letter to Brian Martin, 22 February 1988.
26. Carl Sagan, "Nuclear war and climatic catastrophe: some policy implications", Foreign Affairs, 62, Winter 1983-84, pages 257-292.
27. A. B. Pittock, T. P. Ackerman, P. J. Crutzen, M. C. MacCracken, C. S. Shapiro and R. P. Turco, Environmental Consequences of Nuclear War. Volume I. Physical and Atmospheric Effects (Chichester: Wiley, 1986), page 193.
28. Starley L. Thompson and Stephen H. Schneider, "Nuclear winter reappraised", Foreign Affairs, 64, Summer 1986, pages 981-1005 (see pages 987-988).
29. Turco et al., see reference 17, page 1286.
30. Ehrlich et al., see reference 21.
31. Paul R. Ehrlich, "The biological consequences of nuclear war", in Ehrlich et al., see reference 17, pages 41-60.
32. Patricia Adams and Lawrence Solomon, In the Name of Progress: The Underside of Foreign Aid (Toronto: Energy Probe Research Foundation, 1985).
33. Susan George, How the Other Half Dies: The Real Reasons for World Hunger (Montclair, New Jersey: Allanheld, Osmun, 1977).
34. Frances Moore Lappé and Joseph Collins, Food First: Beyond the Myth of Scarcity (Boston: Houghton Mifflin, 1977).
35. Ehrlich et al., see reference 21.
36. Ian James Barton and Garth William Paltridge, "'Twilight at noon' overstated", Ambio, 13 (1), 1984, pages 49-51.
37. Sherwood B. Idso, "Calibrations for nuclear winter" (correspondence), Nature, 312, 29 November 1984, page 407.
38. S. B. Idso, "Nuclear winter and the greenhouse effect" (scientific correspondence), Nature, 321, 8 May 1986, page 122.
39. Cresson H. Kearny, "On a 'nuclear winter'" (letter), Science, 227, 25 January 1985, pages 356-357.
40. John Maddox, "From Santorini to armageddon", Nature, 307, 12 January 1984, page 107.
41. John Maddox, "Nuclear winter not yet established", Nature, 308, 1 March 1984, page 11.
42. Russell Seitz, "More on nuclear winter" (correspondence), Nature, 315, 23 May 1985, page 272.
43. Russell Seitz, "Siberian fire as 'nuclear winter' guide" (scientific correspondence), Nature, 323, 11 September 1986, pages 116-117.
44. S. Fred Singer, "Is the 'nuclear winter' real?", Nature, 310, 23 August 1984, page 625.
45. S. Fred Singer, "On a 'nuclear winter'" (letter), Science, 227, 25 January 1985, page 356.
46. Teller, see reference 19.
47. Edward Teller, "Climatic change with nuclear war", Nature, 318, 14 November 1985, page 99.
48. Paul J. Crutzen, "Darkness after a nuclear war", Ambio, 13 (1), 1984, pages 52-54.
49. Carl Sagan, "On minimizing the consequences of nuclear war", Nature, 317, 10 October 1985, pages 485-488.
50. R. P. Turco et al., "On a 'nuclear winter'" (letter), Science, 227, 25 January 1985, pages 358, 360, 362, 444.
51. R. P. Turco et al., "Ozone, dust, smoke and humidity in nuclear winter" (scientific correspondence), Nature, 317, 5 September 1985, pages 21-22.
52. Ehrlich et al., see reference 21, page 1294.
53. George W. Rathjens and Ronald H. Siegel, "Nuclear winter: strategic significance", Issues in Science and Technology, Winter 1985, pages 123-128 (see page 127).
54. Curt Covey, Stephen H. Schneider and Starley L. Thompson, "Global atmospheric effects of massive smoke injections from a nuclear war: results from general circulation model simulations", Nature, 308, 1 March 1984, pages 21-25.
55. Paul J. Crutzen, Ian E. Galbally and Christoph Bruhl, "Atmospheric effects from post-nuclear fires", Climatic Change, 6, 1984, pages 323-364.
56. M. A. Harwell and T. C. Hutchinson, Environmental Consequences of Nuclear War. Volume II. Ecological and Agricultural Effects (Chichester: John Wiley, 1985).
57. Robert C. Malone, Lawrence H. Auer, Gary A. Glatzmaier and Michael C. Wood, "Nuclear winter: three-dimensional simulations including interactive transport, scavenging, and solar heating of smoke", Journal of Geophysical Research, 91, 20 January 1986, pages 1039-1053.
58. Pittock et al., see reference 27.
59. S. L. Thompson, V. V. Aleksandrov, G. L. Stenchikov, S. H. Schneider, C. Covey and R. M. Chervin, "Global climatic consequences of nuclear war: simulations with three dimensional models", Ambio, 13 (4), 1984, pages 236-243.
60. United States General Accounting Office, Nuclear Winter: Uncertainties Surround the Long-Term Effects of Nuclear War (Washington, D.C.: March 1986).
61. S. Fred Singer, "Re-analysis of the nuclear winter phenomenon", Meteorology and Atmospheric Physics, 38, 1988, pages 228-239.
62. Thompson and Schneider, see reference 28.
63. Stephen H. Schneider, letter, Wall Street Journal, 25 November 1986.
64. 'Severe global-scale nuclear war effects reaffirmed', statement resulting from SCOPE-ENUWAR workshop in Bangkok, 9-12 February 1987.
65. A. Berger, "Nuclear winter, or nuclear fall?", Eos, 67, 12 August 1986, pages 617-621.
66. Jeannie Peterson, "Scientific studies of the unthinkable - the physical and biological effects of nuclear war", Ambio, 15 (2), 1986, pages 60-69.
67. Paul R. Ehrlich, The Population Bomb (London: Pan, 1971).
68. Paul Ehrlich, "When light is put away: ecological effects of nuclear war", in Jennifer Leaning and Langley Keyes (editors), The Counterfeit Ark: Crisis Relocation for Nuclear War (Cambridge, Mass.: Ballinger, 1984), pages 247-271.
69. Carl Sagan, The Cosmic Connection: An Extraterrestrial Perspective (London: Papermac, 1981).
70. Maddox, see references 40 and 41.
71. John Maddox, The Doomsday Syndrome (London: Macmillan, 1972).
72. Teller, see references 19 and 47.
73. Singer, see references 44, 45 and 61.
74. S. Fred Singer, "Stratospheric water vapour increase due to human activities", Nature, 233, 22 October 1971, pages 543-545.
75. B. W. Golding, P. Goldsmith, N. A. Machin and A. Slingo, "Importance of local mesosphere factors in any assessment of nuclear winter", Nature, 319, 23 January 1986, pages 301-303.
76. P. Goldsmith, A. F. Tuck, J. S. Foot, E. L. Simmons and R. L. Newson, "Nitrogen oxides, nuclear weapon testing, Concorde and stratospheric ozone", Nature, 244, 31 August 1973, pages 545-551.
77. Joyce E. Penner, "Uncertainties in the smoke source term for 'nuclear winter' studies", Nature, 324, 20 November 1986, pages 222-226.
78. Carl Sagan, "The nuclear winter", Parade, 30 October 1983, pages 4-5, 7.
79. John Maddox, "Nuclear winter and carbon dioxide", Nature, 312, 13 December 1984, page 593.
80. Armand L. Mauss, Social Problems as Social Movements (Philadelphia: J. B. Lippincott, 1975).
81. B. Bruce-Briggs (editor), The New Class? (New Brunswick, New Jersey: Trans-Action Books, 1979).
82. Alvin W. Gouldner, The Future of Intellectuals and the Rise of the New Class (London: Macmillan, 1979).
83. George Konrád and Ivan Szelényi, The Intellectuals on the Road to Class Power (Brighton: Harvester, 1979).
84. Pat Walker (editor), Between Labour and Capital (Brighton: Harvester, 1979).
85. Frank Parkin, Middle Class Radicalism: The Social Bases of the British Campaign for Nuclear Disarmament (Manchester: Manchester University Press, 1968); Kim Salomon, "The peace movement -- an anti-establishment movement", Journal of Peace Research, 23, 1986, pages 115-127.
86. Gregg Herken, Counsels of War (New York: Knopf, 1985).
87. Fred Kaplan, The Wizards of Armageddon (New York: Simon and Schuster, 1983).
88. Hugh E. DeWitt, "The nuclear arms race as seen from within an American weapons laboratory", Science and Public Policy, 9, April 1982, pages 58-63.
89. Sagan, see reference 26.
90. A. Barrie Pittock, Beyond Darkness: Nuclear Winter in Australia and New Zealand (Melbourne: Sun, 1987).
91. Stephen Shenfield, "Nuclear winter and the USSR", Millennium: Journal of International Studies, 15, 1986, pages 197-208; for a different interpretation see Leon Gouré, "'Nuclear winter' in Soviet mirrors", Strategic Review, 13, Summer 1985, pages 22-38.
92. Yevgeni Velikhov (editor), The Night After ... Climatic and Biological Consequences of a Nuclear War: Scientists' Warning (Moscow: Mir Publishers, 1985).
93. Fred Charles Iklé, The Social Impact of Bomb Destruction (Norman: University of Oklahoma Press, 1958), page vi.
94. Herman Kahn, On Thermonuclear War (Princeton: Princeton University Press, 1961), page 9.
95. Peter Laurie, Beneath the City Streets (Harmondsworth: Penguin, 1972), page 22.
96. Pittock, see reference 90, page 4.
97. Joanna R. Macy, Despair and Personal Power in the Nuclear Age (Philadelphia: New Society Publishers, 1983).
98. Stephen Budiansky, "Nuclear winter: Pentagon says yes, it may happen, but 'so what?'", Nature, 314, 14 March 1985, page 121; Maxine Clarke, "Nuclear winter: US arms control policy doubts", Nature, 317, 10 October 1985, page 466; R. Jeffrey Smith, "Nuclear winter attracts additional scrutiny", Science, 225, 6 July 1984, pages 30-32; R. Jeffrey Smith, "DOD says 'nuclear winter' bolsters its plans", Science, 227, 15 March 1985, page 1320.
99. Bill Hayden, Minister for Foreign Affairs, Uranium, the Joint Facilities, Disarmament and Peace (Canberra: Australian Government Publishing Service, 1984).
100. John Maddox, "Nuclear winter can cross equator", Nature, 317, 5 September 1985, page 11.
101. Sagan, see reference 26, page 259.
102. Ibid., page 286; Carl Sagan, letter, Foreign Affairs, 62, Spring 1984, pages 999-1002 (see page 1001).
103. Thomas F. Gieryn, "Boundary-work and the demarcation of science from non-science: strains and interests in professional ideologies of scientists", American Sociological Review, 48, December 1983, pages 781-795.
104. Karl Popper, The Logic of Scientific Discovery (London: Hutchinson, 1959), page 31.
105. Joseph R. Gusfield, The Culture of Public Problems (Chicago: University of Chicago Press, 1981), pages 83-108; P. B. Medawar, "Is the scientific paper fraudulent? Yes; it misrepresents scientific thought", Saturday Review, 1 August 1964, pages 42-43.
106. Crutzen et al., see reference 55, page 354.
107. Curt Covey, "Environmental studies of nuclear war: a recent synthesis and future prospects -- an editorial/review essay", Climatic Change, 10, 1987, pages 1-10 (see page 9).
108. Stephen H. Schneider, Starley L. Thompson and Curt Covey, "The mesosphere effects of nuclear winter" (scientific correspondence), Nature, 320, 10 April 1986, pages 491-492.
109. Ehrlich et al., see reference 17, pages xiv-xv.
110. Ibid., pages 129-151.
111. Ibid., page 33.
112. Pittock et al., see reference 27; Harwell and Hutchinson, see reference 56.
113. Tim Beardsley, "Nuclear winter: mechanics of SCOPE report", Nature, 317, 19 September 1985, page 192.
115. "What to make of nuclear winter", Nature, 317, 19 September 1985, pages 189-190.
116. Frederick Warner, "SCOPE response" (correspondence), Nature, 24 October 1985, page 666.
117. Ehrlich et al., see reference 17, page xv.
118. R. M. Hare, The Language of Morals (London: Oxford University Press, 1964), pages 111-126.
119. Leslie J. Freeman, Nuclear Witnesses (New York: Norton, 1981).
120. Brian Martin, "Nuclear suppression", Science and Public Policy, 13, December 1986, pages 312-320.
121. G. L. Waldbott, A Struggle with Titans (New York: Carlton Press, 1965).
122. Robert van den Bosch, The Pesticide Conspiracy (Garden City, New York: Doubleday, 1978).
123. Seitz, see references 42 and 43.
124. Russell Seitz, letter, Foreign Affairs, 62, Spring 1984, pages 998-999.
125. Russell Seitz, "In from the cold: 'nuclear winter' melts down", The National Interest, 5, Fall 1986, pages 3-17 (see page 3).
126. Ibid., page 5.
127. Thompson and Schneider, see reference 28.
128. Seitz, see reference 125, page 17.
129. Eliot Marshall, "Nuclear winter debate heats up", Science, 235, 16 January 1987, pages 271-273 (see page 272); Russell Seitz, letter, and Eliot Marshall, response, Science, 235, 20 February 1987, page 832.
130. Richard Turco, typescript of letter submitted to The National Interest, 22 December 1986.
131. Richard Turco et al., letter, Wall Street Journal, 12 December 1986, page 27.
132. Turco, see reference 130, pages 1 and 10.
133. Turco et al., see reference 131.
134. Seitz, see reference 43.
135. John Maddox, correspondence, Nature, 311, 27 September 1984, page 308.
136. Maddox, see reference 79.
137. Seitz, see reference 125.
138. Brad Sparks, "The scandal of nuclear winter", National Review, 15 November 1985, pages 28-38.
139. Brian Martin, The Bias of Science (Canberra: Society for Social Responsibility in Science, 1979).
140. Alva Myrdal, The Game of Disarmament: How the United States and Russia Run the Arms Race (New York: Pantheon, 1976).
141. Lydia Dotto, Planet Earth in Jeopardy: Environmental Consequences of Nuclear War (Chichester: John Wiley, 1986).
142. Owen Greene, Ian Percival and Irene Ridge, Nuclear Winter: The Evidence and the Risks (Cambridge: Polity Press, 1985).
143. Michael Rowan-Robinson, Fire and Ice: The Nuclear Winter (Harlow: Longman, 1985).
144. Robert C. Aldridge, First Strike! The Pentagon's Strategy for Nuclear War (Boston: South End Press, 1983).
145. Peter Hayes, Lyuba Zarsky and Walden Bello, American Lake: Nuclear Peril in the Pacific (Australia: Penguin, 1986).
146. Peter Pringle and William Arkin, SIOP (London: Sphere, 1983).
147. Richard J. Barnet, Roots of War (New York: Atheneum, 1972).
148. Joel Kovel, Against the State of Nuclear Terror (London: Pan, 1983).
149. Brian Martin, Uprooting War (London: Freedom Press, 1984).
150. Nigel Young, An Infantile Disorder? The Crisis and Decline of the New Left (London: Routledge and Kegan Paul, 1977).
151. Seitz's interpretation of Dyson's views seems to be confirmed by Dyson's discussion of nuclear winter in Freeman J. Dyson, Infinite in All Directions (New York: Harper and Row, 1988), pages 258ff.
152. See also Russell Seitz, letter, Wall Street Journal, 29 January 1987, page 29.
Promising DNA vaccine stops blood flow to tumors.
New vaccine starves cancer.
A vaccine that restricts the supply of blood to tumors, developed by researchers at Karolinska Institutet in Stockholm, has slowed the growth of breast cancer in tests conducted on rodents.
The concept is based on the fact that, for a cancer tumor to become larger than a few millimeters, it must be able to stimulate the formation of new blood vessels in order to secure its supply of oxygen and nutrients. Drugs that prevent the growth of blood vessels are thus a potential treatment alternative for tumors.
A protein known as 'Delta-like ligand 4' (DLL4) regulates the formation of new blood vessels. When a new blood vessel starts to grow from an existing vessel, DLL4 prevents nearby cells from forming new vessels. When DLL4 is blocked in a tumor, there is a large increase in the formation of new, but non-functional, blood vessels, and this leads to the tumor growing more slowly.
"We hope that it will be possible to use this vaccine to prevent recurrence of breast cancer after surgical treatment", says Kristian Pietras, head of the study.
For both cancer and infectious diseases, DNA vaccination entails injecting a gene for the protein against which vaccination is desired. This leads the immune system to recognize the unwanted protein. In the studies, the vaccination did not cause any undesired effects and did not affect the animals' capacity for healing.
|
<urn:uuid:d5405d18-5c5b-4ca7-8909-5e64e81aa094>
|
CC-MAIN-2016-26
|
http://www.nordstjernan.com/news/education%7Cresearch/2309/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00164-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.944752
| 309
| 3.078125
| 3
|
Rendezvous Docking Simulator
Facility: 1244 RDS
Center: Langley Research Center
Historic Eligibility: National Historic Landmark
Important Tests: Gemini, Apollo, Jet Shoes
The Rendezvous Docking Simulator was used to train Gemini and Apollo astronauts in docking procedures they had to master before attempting to land on the moon. NASA engineers decided that the best method of accomplishing President Kennedy's goal of a moon landing by 1969 was through a lunar orbit rendezvous (LOR). The LOR called for a single Saturn V launch of two spacecraft into lunar orbit. One would remain in orbit while the other would descend to the moon and then boost itself back into lunar orbit, rendezvous and dock with the mother ship before returning to earth. To accomplish this task it was essential that Apollo astronauts be trained in all aspects and problems likely to arise in an attempt to dock the Apollo Command and Lunar Excursion Modules in lunar orbit. Failure to dock would result in the failure of the entire mission and the likely loss of the lives of the astronauts. The Rendezvous Docking Simulator gave the astronauts the experience of docking the spacecraft in a safe environment that closely resembled a space environment. Only when the Apollo astronauts had successfully mastered rendezvous and docking skills in the Rendezvous Docking Simulator would NASA give permission for the attempt to land on the moon.
NASA Langley Test Pilot-Astronaut Robert Champine trained with all seven of the original Mercury astronauts. Bob had been performing many flights with the Docking Simulator since the early 1960s. His work to perfect the docking and rendezvous maneuvers of spacecraft led to the flawless operations performed today.
Following the completion of the Apollo program, the Rendezvous Docking Simulator was modified to solve open-and-closed loop pilot control problems, aircraft landing approaches, simulator validation studies and passenger ride quality studies. The name of the facility was changed to the Real-Time Dynamic Simulator. At present, this facility is inactive.
[top] Rendezvous Docking
The docking facility involved a full-size model of the pilot's compartment and nose section of the Apollo command module, associated drive systems, a jet selection and controller interface unit, a general-purpose analog computer, and a full-size model of the Lunar Module ascent stage. There were several rate-command modes the pilots could practice on. Actions of the simulator were controlled with the device pictured below. An interface unit, separate from the analog computer, was required to convert the alternating-current signals from the controller to direct-current signals, and to simulate control-system switching, priority logic, and thrust dynamics.
For more information on the controller and various modes, see this document.
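The rate-command idea mentioned above can be illustrated with a toy model. The sketch below is a generic on-off rate controller written purely for illustration; the function name and parameter values are invented, and the real simulator's control-system switching and thrust dynamics ran on the analog computer described above.

```python
def rate_command_step(rate, commanded_rate, dt, accel=1.0, deadband=0.05):
    """One integration step of an idealized on-off rate-command mode.

    The hand controller commands an angular rate; reaction jets fire at a
    fixed angular acceleration until the vehicle's rate is within a small
    deadband of the command. (Illustrative only -- not the actual Apollo
    control law.)
    """
    error = commanded_rate - rate
    if abs(error) > deadband:
        rate += accel * dt * (1.0 if error > 0 else -1.0)
    return rate

# Track a 0.5 deg/s pitch-rate command starting from rest.
rate = 0.0
for _ in range(100):
    rate = rate_command_step(rate, commanded_rate=0.5, dt=0.01)
print(rate)  # settles within the deadband of the commanded rate
```

Once inside the deadband the jets stop firing, which is why real rate-command modes trade responsiveness (a small deadband) against propellant use (fewer jet firings).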
OMEGA, the one-man extravehicular gimbal arrangement, was a Langley-developed device. It was used in the hangar as part of the astronaut training to simulate space walks.
[top] Film Clips
1966 A Full-Sized Pilot-Controlled Docking Simulation of the Apollo Command and Service Module with the Lunar Module. Jack E. Pennington, Howard G. Hatch, Jr., and Norman R. Driscoll. TN D-3688.
[top] For Students and Teachers
NASA Langley Research Center hosted much of the astronauts' training, and many of its engineers and scientists were available to train and work with the astronauts prior to their flights. Simulator training took place in the large hangar on the Langley campus. The trainers were suspended from the hangar's ceiling, and astronauts sitting in the trainers would maneuver the modules, simulating their future space experiences. The astronauts took graduate space science courses, learning about reentry from space, astronomy, and how to navigate using stars. A large tank, the hydrodynamic tank, was used to practice exiting the capsule. Once techniques were perfected in the tank, the astronauts took the capsule into the river behind the Center for practice in the elements.
- The Air Lubricated Free Attitude Trainer: This trainer allowed astronauts to practice controlling the pitch, yaw, and roll of their space craft using windows displaying Earth, the moon, and celestial bodies as references.
- The Procedures Trainer, built by McDonnell, allowed the astronauts to practice controlling the attitude of the space craft (the pitch, roll, and yaw) while also experiencing space suit pressurization, noise, and heat.
- The Environmental Control Simulator was placed in a decompression chamber, and astronauts would practice using the module controls in pressurized situations that were similar to the conditions they would experience during their trip to space and back.
|
<urn:uuid:4e4342bb-8da0-4cea-8899-4aa8f570f2dd>
|
CC-MAIN-2016-26
|
http://crgis.ndc.nasa.gov/historic/1244RDS
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00138-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.93587
| 946
| 3.359375
| 3
|
Research shows that a baby perceives the world differently with every leap he makes. Xaviera Plas, the expert of the worldwide bestseller “The Wonder Weeks”, will be showing unique visuals and movies about a baby’s way of perceiving the world at the Baby Show Birmingham.
“We owe it to our babies to understand what’s going on inside their minds. Letting a baby cry it out is outdated. It’s time we parents and experts took an interest in the perceptional world of a baby. We have been neglecting to do that for too long. It’s time to revolutionize parenting!”
First time a baby sees patterns in a face @ 8 weeks
Is it a cow, dog or wallpaper?
Fashion 2014… pure brain food and visual treats for babies! How trousers can turn from fashion into brain-stimulating objects….
Before your baby is 20 months old he makes ten leaps in his mental development – ten crucial, key periods called wonder weeks. With each of these ten wonder weeks a baby gets a totally new perception of the world. He is suddenly able to perceive things he couldn’t before. Suddenly everything ‘changes’. It’s as if he just woke up on a new planet, where everything he knew had suddenly changed. So just imagine if this happened to you. You’d go to bed and when you woke up everything was different. This would freak you out, wouldn’t it? A baby has to go through that ten times and needs help from his well-informed parents…
Xaviera Plas is an engaging speaker, and it's her mission to inform every parent around the globe about the secrets of their baby's perceptual world. For exclusive audio-visual materials, the key to unlock the mental development of babies, or more information, please contact:
077 45 0455 18
Press release distributed by Pressat on behalf of The Wonder Weeks, on Tuesday 13 May, 2014. For more information subscribe and follow http://www.pressat.co.uk/
Baby Show Birmingham
|
<urn:uuid:bced0fc7-c14d-4b44-b5fb-3f68b7bf3c14>
|
CC-MAIN-2016-26
|
http://www.pressat.co.uk/releases/how-fashion-stimulates-a-babys-mind-ced998331d5caafa7bd5e0e4ded70e4a/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00134-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.923526
| 441
| 2.578125
| 3
|
In 1991, Canadian psychologist Robert Hare released a study indicating that psychopaths may have different brains from the rest of us [source: Nichols]. While psychopaths remain intellectually aware of society's rules, they lack emotional intelligence. The profile of a psychopath includes impulsivity, lofty goals without the discipline or focus to achieve them, a propensity for boredom, no close personal attachments and of course, a lack of empathy. When Hare monitored psychopaths' brain waves while they examined certain words, including those that bring up a host of emotions for most people, he found that there was no activity in the parts of the brain involved in emotion. Hare described these psychopaths as "emotionally color-blind" to Maclean's magazine in 1996 [source: Nichols].
Hare's work seems to indicate that psychopaths have abnormal brain functions in areas related to processing emotion and language -- meaning that there's a neurological rationale for some heinous crimes, as opposed to some environmental factor such as child abuse. If these psychopaths were to be tested for IQ, they would likely show up as normal, but it's in a lack of emotional intelligence that we see the disturbances in brain health.
If a person is on the low end of the emotional intelligence spectrum, he or she may have a condition known as alexithymia. Alexithymia is the inability to understand or express emotion. Because of what scientists know about emotions in the brain, they theorize that alexithymia may either relate to a malfunctioning in the right hemisphere or an overactive left hemisphere (leaving the right hemisphere unable to compensate) [source: Bermond et al.]. It's also possible that the corpus callosum, the part of the brain that governs communication between the right and left sides of the brain, is damaged to the point of blocking the messages regarding emotion [source: Becerra et al.].
Alexithymia sometimes manifests itself after a person suffers a brain injury such as blunt trauma. But the condition may eventually be able to tell us more about what happens during brain disorders absent of such trauma. For example, alexithymia has been linked to eating disorders, panic disorders and post-traumatic stress disorder [source: Becerra et al.]. The condition may also provide clues about autism spectrum disorders one day; one common theme of autism disorders is a lack of emotional connection, so that those with the disorder can't pick up on social cues. Decreased cerebellum activity has been linked to autism and Asperger's disorder [source: Bermond et al.].
For more on emotional intelligence, IQ and other brainy topics, take a look at the links on the next page.
|
<urn:uuid:39bdd9d1-75a5-44e6-bf5e-c0b60820d3ea>
|
CC-MAIN-2016-26
|
http://science.howstuffworks.com/life/emotional-intelligence-iq2.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00186-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948845
| 551
| 3.28125
| 3
|
|Hutton, Charles Mathematical and Philosophical Dictionary 1795|
<*> circle be described whose diameter AC is = a, and AD be perpendicular and equal to AC; then taking any point P in AC, joining DP, and drawing PN parallel to AD, and NO parallel to AC; and lastly taking PM = NO, the point M will be one point of the Oval sought.
In like manner the equation expresses several very pretty Ovals, among which the following 12 are some of the most remarkable. For when the equation has four real unequal roots, the given equation will denote the three following species, in fig. 1, 2, 3:
When the two less roots are equal, the three species will be expressed as in fig. 4, 5, 6, thus:
When the two less roots become imaginary, it will denote the three species as exhibited in fig. 7, 8, 9:
When the two middle roots are equal, the species will be as appears in fig. 10: when two roots are equal, and two more so, the species will be as in fig. 11: and when the two middle roots become imaginary, the species will be as appears in fig. 12:
, an eminent English mathematician and divine, was born at Eton in Buckinghamshire, 1573, and educated in the school there; whence he was elected to King's-college in Cambridge in 1592, where he continued about 12 years, and became a fellow; employing his time in close application to useful studies, particularly the mathematical sciences, which he contributed greatly, by his example and ex hortation, to bring into vogue among his acquaintances there.
About 1603 he quitted the university, and was presented to the rectory of Aldbury, near Guildford in Surry, where he lived a long retired and studious life, seldom travelling so far as London once a year; his recreation being a diversity of studies: “as often, says he, as I was tired with the labours of my own profession, I have allayed that tediousness by walking in the pleasant, and more than Elysian Fields of the diverse and various parts of human learning, and not of the mathematics only.” About the year 1628 he was appointed by the earl of Arundel tutor to his son lord William Howard, in the mathematics, and his Clavis was drawn up for the use of that young nobleman. He always kept up a correspondence by letters with some of the most eminent scholars of his time, upon mathematical subjects: the originals of which were preserved, and communicated to the Royal Society, by William Jones, Esq. The chief mathematicians of that age owed much of their skill to him; and his house was always full of young gentlemen who came from all parts to receive his instruction: nor was he without invitations to settle in France, Italy, and Holland. “He was as facetious, says Mr. David Lloyd, in Greek and Latin, as solid in arithmetic, geometry, and the sphere, of all measures, music, &c; exact in his style as in his judgment; handling his tube and other instruments at 80 as steadily as others did at 30; owing this, as he said, to temperance and exercise; principling his people with plain and solid truths, as he did the world with great and useful arts; advancing new inventions in all things but religion, which he endeavoured to promote in its primitive purity, maintaining that prudence, meekness, and simplicity were the great ornaments of his life.
Notwithstanding Oughtred's great merit, being a strong royalist, he was in danger, in 1646, of a sequestration by the committee for plundering ministers; several articles being deposed and sworn against him: but upon his day of hearing, William Lilly, the famous astrologer, applied to Sir Bulstrode Whitlocke and all his old friends; who appeared so numerous in his behalf, that though the chairman and many other Presbyterian members were active against him, yet he was cleared by the majority. This is told us by Lilly himself, in the History of his own Life, where he styles Oughtred the most famous mathematician then of Europe.—He died in 1660, at 86 years of age, and was buried at Aldbury. It is said he died of a sudden ecstasy of joy, about the beginning of May, on hearing the news of the vote at Westminster, which passed for the restoration of Charles the 2d.—He left one son, whom he put apprentice to a watch-maker, and wrote a book of instructions in that art for his use.
He published several works in his life time; the principal of which are the following:
|
<urn:uuid:98a45d4e-a5ef-40aa-a65f-62b19eafc018>
|
CC-MAIN-2016-26
|
http://archimedes.mpiwg-berlin.mpg.de/cgi-bin/toc/toc.cgi?page=881;dir=hutto_dicti_078_en_1795;step=textonly
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00042-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.987766
| 980
| 3.453125
| 3
|
By Hadas Kuznits
PHILADELPHIA (CBS) — A new study of cohabitation before marriage indicates a change in the times.
The Centers for Disease Control and Prevention found that 60 percent of couples today live together before marriage, compared with just 10 percent in the 1960s.
And back then, couples who lived together before marriage were more likely to get divorced later on.
Now, however, it’s seen as more the norm. According to the CDC’s research over four years, living together before marriage is not as big of a predictor of divorce as it once was.
“You find out if you can actually tolerate them for the rest of your life,” one woman in center city Philadelphia notes.
The government study examined trends in first marriages. They found that those who were engaged and living together before the wedding were as likely to stay together for at least a decade as those who had never lived together.
“You want to see how it’s going to work out before you commit to something,” explained one man.
Of course, this trend doesn't ring true for everyone.
“I never lived together with my husband before we got married, but we did end up in divorce,” admitted an older woman.
The CDC also found that those living together without a marriage commitment were less likely to stay together in the long run.
“I’ve lived with my… were not technically married… for 15 years already,” this woman said. But was she still “looking”?
“Your options are always open when you’re not married,” she said with a laugh.
|
<urn:uuid:a45f8543-08e1-48f2-8814-0a073fe8a185>
|
CC-MAIN-2016-26
|
http://philadelphia.cbslocal.com/2012/03/23/study-pre-marriage-cohabitation-no-longer-strong-precursor-to-divorce/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00023-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.97797
| 358
| 2.5625
| 3
|
Crohn’s Disease (CD) is a devastating illness in search of a cause and a cure. More than 800,000 people in North America suffer from CD, a gastrointestinal disorder characterized by severe abdominal pain, diarrhea, bleeding, bowel obstruction, and a variety of systemic symptoms that can impede the ability to lead a normal life during chronic episodes that span months to years. Researchers and clinicians agree that onset of CD requires a series of events; implicated are certain inherited genetic traits, an environmental stimulus, and an overzealous immune and inflammatory response. The combination of these factors contributes to a disease whose course is variable among patients and whose symptoms range from mild to devastating on any given day. The economic and social impact of this disease is substantial for the patient, the family, the community, and the healthcare system.
Long considered an autoimmune and chronic inflammatory disorder, current CD therapies are designed to treat symptoms of overactive inflammation in the gut. Chronic inflammation, however, does not generally induce itself. Inflammation is normally caused by a “foreign body,” an inanimate object (i.e., splinter) or animate objects like rogue cells (i.e., cancer) or microorganisms (i.e., bacterium, virus, or fungus). Until the cause of inflammation is eliminated, the body continues to send in its clean-up crew, the white blood cells of inflammation whose job it is to expel the tissue invader. Inflammation only subsides when the causative agent is finally banished.
There is suspicion, supported by reports of genetic inability to interact appropriately with certain bacteria or bacterial products in some patients, that CD may have a currently unrecognized infectious origin, perhaps environmentally derived. That CD is a set of wide-ranging symptoms, more like a syndrome than a specific disease, suggests that if its origin is microbial, more than one etiologic agent may ultimately be identified. Bacterial suspects at the moment include a Mycobacterium and a variant of the normal bacterial flora of the gut, Escherichia coli. The possibility of more than one infectious cause that leads to a similar set of symptoms confounds the research agenda to find both a cause and a cure for CD.
One acknowledged potential microbial agent of CD is Mycobacterium avium subspecies paratuberculosis (MAP), a microorganism that causes a gastrointestinal disease similar to CD in ruminants, including dairy cattle, called Johne’s disease (or paratuberculosis). People with CD have 7:1 odds of having a documented presence of MAP in blood or gut tissues than those who do not have CD, thus the association of MAP and CD is no longer in question (see Figure 1, page 11). The critical issue today is not whether MAP is associated with CD, but whether MAP causes CD or is only incidentally present, not an inciter or participant in the disease process.
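The 7:1 figure above is an odds ratio computed from a 2x2 exposure table. As a minimal sketch of that arithmetic, using hypothetical counts invented purely for illustration (these are not the data behind the studies summarized here):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
       a = cases with MAP detected,    b = cases without,
       c = controls with MAP detected, d = controls without.
    OR = (a/b) / (c/d) = (a*d) / (b*c).
    """
    return (a * d) / (b * c)

# Hypothetical counts: MAP found in 70 of 100 CD patients
# versus 25 of 100 controls.
result = odds_ratio(70, 30, 25, 75)
print(result)  # 7.0 -- odds of MAP detection 7 times higher in cases
```

An odds ratio of 7 establishes association, not causation, which is exactly the distinction the report draws: MAP is clearly associated with CD, while its causal role remains unresolved.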
If MAP is involved in the disease process of CD or other gastrointestinal disorders, then we need to determine how people are exposed to this microorganism, how to prevent that exposure, and how to treat the infection.
Despite its prevalence in the U.S. population in numbers that exceed most cancers, CD is not a focus of research attention in the same way as these other feared diseases. The American Academy of Microbiology convened a colloquium with experts in medicine, microbiology, veterinary pathology, epidemiology, infectious diseases, and food safety to describe the state of knowledge about the relationship between MAP and CD and to make recommendations for effective research that will move the field forward.
The general consensus of the assembled experts was that there are certainly reasons to suspect a role for MAP in CD:
Circumstantially, these observations appear to make a compelling case for MAP as involved in CD. On the other hand, the ability to definitively identify MAP as the cause of CD, or the cause of a significant number of CD cases, has been stymied by the elusive characteristics of the organism itself, the lack of broadly available and validated clinical tools to easily and definitively identify MAP in accessible tissues, and the late symptomatic stage at which CD is finally diagnosed, where the origin of the destructive inflammation could have been years before the patient sought medical care. Most important, however, is the lack of resources, financial and scientific, to generate the tools that clinicians and patients need to determine whether MAP is involved in the disease process or not.
Several important clinical trials of antibiotics have been attempted in CD patients, with variable results. Treating CD patients with existing antibiotics active against other Mycobacteria (M. tuberculosis, which causes TB, and M. avium complex, or MAC, which is pathogenic in immune-compromised persons) has either failed to provide relief (TB drugs) or produced promising outcomes for some patients, but not all (MAC drugs). Confounding these clinical results is the lack of information about which patients in the trial populations were actually infected with MAP, and whether any MAP organisms in infected patients were susceptible to the antibiotics used in the trials. Without sensitive and specific diagnostics that can detect early MAP infection, knowledge of how and where to isolate MAP for antibiotic susceptibility studies, and drugs that are known to be active against MAP itself, alone or in combination, the role of MAP in CD will remain circumstantial and the controversy over CD etiology will continue.
There is little known about where exactly viable MAP can be found in human tissues or, since most pathogenic Mycobacteria are intracellular, in which cells MAP can live and grow in humans. While the site of infection and tissue pathologies of MAP in animals can be assessed at necropsy, there is enough dissimilarity between digestive processes of ruminants and humans that this information may not necessarily inform studies in humans.
Of concern from a public health perspective is the ongoing presence of MAP disease in commercial livestock that supply the U.S. with dairy and meat products. If, in fact, CD is a zoonotic infection (one that is passed from animals to humans) and MAP is the (or one) cause of CD, then early identification of MAP disease in veterinary practice and appropriate management of these animals to safeguard the food supply will be critical to guard the public health.
Even in animals, it is nearly impossible to diagnose Johne’s disease in the early stages of disease. Diagnosis is by a combination of clinical observation (wasting and reductions in milk production in dairy cattle, for instance) and microbiological, histopathological, and immunological testing of Johne’s disease suspects. Although efforts to eliminate Johne’s disease and MAP from livestock herds are ongoing, the lack of an accurate and easily-administered diagnostic for early disease onset is hampering these efforts. The results are mixed, and food products containing MAP or MAP DNA can be found on supermarket shelves. Veterinary diagnostics that are sensitive (detect MAP at early stages of infection) and specific (identify MAP and not other microorganisms) will be necessary to eliminate Johne’s disease from the commercial food supply. Research to discover and validate these techniques may also shed light on diagnosis of human disease.
Colloquium participants agreed that research to elucidate the role of MAP in CD must address two major unknowns: (1) whether MAP from livestock and other animals is transmissible to humans and how it is transmitted and (2) whether humans are susceptible to infection and disease after exposure to MAP. No single study will fill all the gaps in our understanding of the possible relationship between MAP and CD. Furthermore, participants agreed that validated, reproducible biological markers confirming human MAP infection are desperately needed. If MAP can be causally associated with CD using reproducible analytical techniques, appropriate patient populations can be treated with antibiotics that are selected for their MAP activity. Then, at least MAP-infected CD patients will have both a cause and a cure.
|
<urn:uuid:8ab306e2-33ba-4e4a-96ad-79266fb94ddd>
|
CC-MAIN-2016-26
|
http://www.asm.org/index.php/colloquium-program/browse-all-reports/91334-mycobacterium-avium-paratuberculosis-infrequent-human-pathogen-or-public-health-threat
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00000-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.937925
| 1,633
| 3.40625
| 3
|
Complex Societies in the West In many ways the early cultures of North America were less developed than those of South America and Mesoamerica. North American groups had no great empires and few ruins as spectacular as those of ancient Mexico or Peru, but nevertheless the first peoples of North America did create complex societies.
Kwakiutl, Nootka, & Haida Peoples All three groups lived in the Pacific Northwest --- from Oregon to Alaska --- and relied on the sea to support their sizable populations. They hunted whales in canoes large enough to carry at least 15 people. In addition, they relied on the coastal forest to provide plentiful food.
Hohokam Civilizations also started to emerge in the American Southwest --- in the dry desert lands of central Arizona --- where the Hohokam used irrigation to produce harvests of corn, beans, and squash.
Anasazi They lived in the Four Corners region, where Utah, Arizona, Colorado, and New Mexico meet. They built an impressive cliff-dwelling society: large houses built on flat hilltops or in shallow caves in the walls of deep canyons. These skilled builders used mud-like mortar to construct walls up to five stories high, with small windows to keep the burning sun out.
These were large villages of apartment-style compounds made of stone and sun-baked clay. These villages were called pueblos. The largest Anasazi pueblo was Pueblo Bonito, meaning "beautiful village." Pueblo Bonito probably housed about 1,000 people and contained 600 rooms.
Mound Builders: The Mississippians Beyond the Great Plains, in the woodlands east of the Mississippi River, other ancient mound-building peoples emerged. They built huge earthen mounds in which they buried their dead. These mounds held the bodies of tribal leaders and were often filled with gifts and finely crafted copper and stone objects.
The last mound-building culture was the Mississippian, which lasted from AD 800 until AD 1500. The Mississippians created thriving villages based on farming and trade. At its height, perhaps 30,000 people lived in the capital, Cahokia, located near the Mississippi and Ohio Rivers, which made transportation easy and encouraged trade. Cahokia was led by a priest-ruler who regulated farming activities.
Also common to Cahokia and other North American clans was the use of totems. The totem was a symbol of the unity of a group or clan, and it helped define certain behaviors and social relationships within the group. Totems were usually placed in front of homes, and clans would perform rituals and dances around them at important group events such as marriages, the naming of children, occupations, religious services, the planting of crops, or the gathering of harvests.
Maya Created City-States The homeland of the Maya stretched from southern Mexico into northern Central America. This area included a highland region in the south and a lowland region in the north. The Highlands: a range of cool, cloud-wreathed mountains that stretch from southern Mexico to El Salvador. The Lowlands: the dry scrub forest of the Yucatan Peninsula and the dense, steamy jungles of southeastern Mexico and northern Guatemala. While the Olmec were developing their civilization along the Gulf Coast, the Maya were evolving as well. They took on Olmec influences and blended them with local customs. By AD 250 Maya culture had burst forth into a flourishing civilization.
Classic Period AD 250 to 900 is known as the Classic Period of Maya civilization. During this period archaeologists have discovered at least 50 major Mayan sites, all with monumental architecture (e.g., TIKAL). Each of these was an independent city-state ruled by a god-king. Mayan cities featured giant pyramids, temples, palaces and elaborate stone carvings dedicated to the gods and important rulers. Mayan cities also featured a ball court, in which games were played that had religious and political significance. The Mayans believed that playing these games would maintain the cycles of the sun and moon and bring life-giving rains.
Trade and Agriculture Although the Maya city-states were independent of each other, they were linked through alliances and trade. They exchanged local products such as salt, flint, feathers, shells and honey, and also traded crafted goods like cotton textiles and jade ornaments. Despite having no uniform currency, cacao (chocolate) beans served as a common medium of exchange. As with the rest of Mesoamerica, agriculture --- particularly the growing of maize, beans, and squash --- provided the basis for Mayan life. They practiced slash-and-burn farming, among other farming techniques.
Mayan Society Successful farming methods led to the accumulation of wealth and the development of social classes. The noble class: priests and leading warriors. The middle class: merchants and those with specialized knowledge, such as skilled artisans. The bottom class: the peasant majority. The Mayan king sat at the top of the class structure and was regarded as a holy figure.
Mayan Religion The Mayans believed in many gods: there were gods of corn, gods of death, of rain, and of war. Gods could be good or evil, and sometimes both. The Mayans worshiped their gods in various ways: they prayed and made offerings of food, flowers and incense. They also pierced and cut their bodies and even offered their own blood, believing that this would nourish the gods. Sometimes the Mayans even carried out human sacrifices (usually of captured enemies). They believed that human sacrifice pleased the gods and kept the world in balance.
The Mayan religious beliefs also led to the development of the calendar, mathematics and astronomy. The Mayans believed that time was a burden carried on the backs of the gods, and thus a day would be lucky or unlucky depending on the mood of the god carrying it. It was therefore very important to have an accurate calendar showing which god was in charge of each day. So they created a 260-day religious calendar to track which gods were in charge of which days, and a 365-day solar calendar to follow the seasons. The two calendars were linked together so that the Mayans could identify the best times to plant crops, attack enemies, and crown new rulers.
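The linkage of the two calendars can be made concrete: the 260-day and 365-day cycles realign only after their least common multiple, 18,980 days (about 52 solar years), a period known to scholars as the Calendar Round. A quick sketch of that arithmetic:

```python
from math import gcd

def calendar_round_days(ritual=260, solar=365):
    """Days until the 260-day religious and 365-day solar calendars
    return to the same combined date (their least common multiple)."""
    return ritual * solar // gcd(ritual, solar)

days = calendar_round_days()
print(days, days // 365)  # 18980 days, i.e. 52 solar years
```

Since gcd(260, 365) = 5, any given pairing of ritual day and solar day recurs only once every 52 years, which is why the 52-year cycle carried such weight in Mesoamerican timekeeping.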
Written Language
The Mayans developed the most advanced writing system in the ancient Americas. Their writing consisted of about 800 hieroglyphic symbols called glyphs, which stood for words and syllables. Their writing system also helped them record history on stone tablets or in bark-paper books known as codices.
Fall of the Maya
In the late 800s the Mayans suddenly abandoned many of their cities. There are a couple of guesses as to the reason for their departure: 1. Warfare had broken out among various city-states, which disrupted trade and resulted in economic hardship. 2. Population growth and over-farmed land led to food shortages, famine and disease.
16.2 Maya Kings and Cities (summary)
Environment – dry forest of the Yucatan, dense jungles of southeastern Mexico
Urban centers – city-states such as Tikal, each ruled by a god-king and comprising giant pyramids, temples and palaces
Economy – based on trade and farming, using sophisticated methods such as planting on raised platforms above swamps and on hillside terraces
Social structure – three social classes: nobles (priests and warriors), a middle class (merchants and artisans), and a lower class of peasants
Religion – polytheistic; offered human sacrifices
Achievements – developed a calendar, mathematics, astronomy and a writing system
Geography of the Americas: what is the character of the land? The central valley of Mexico was the site of numerous civilizations. Why there?
An Early City-State
The first major civilization of central Mexico was Teotihuacán, a city-state whose ruins lie just outside Mexico City. In the first century A.D., villagers at this site began to plan and construct a monumental city, even larger than Monte Albán in Oaxaca (south central Mexico). Teotihuacán was the largest urban center in pre-Columbian America and, in the heyday of its existence, one of the three largest cities in the world, rivaling Rome in Europe and Beijing in Asia. This civilization predated the Aztecs. Is city size a reasonable way to measure the complexity of a civilization?
An Early City-State
At its peak in the sixth century, Teotihuacán had a population of between 150,000 and 200,000 people, making it one of the largest cities in the world at the time. The heart of the city was a central avenue lined with more than 20 pyramids dedicated to various gods. Two great pyramids, of the Sun and of the Moon, formed the axis of the central avenue.
Teotihuacán
Teotihuacán became the center of a thriving trade network that extended far into Central America. The city’s most valuable trade item was obsidian, a green or black volcanic glass found in the Valley of Mexico and used to make razor-sharp weapons. There is no evidence that Teotihuacán conquered its neighbors or tried to create an empire. However, evidence of art styles and religious beliefs from Teotihuacán has been found throughout Mesoamerica.
“City of the Gods”
After centuries of growth, the city abruptly declined. Historians believe this decline was due either to an invasion by outside forces or to conflict among the city’s ruling classes. Regardless of the cause, the city was virtually abandoned by 750. The vast ruins astonished later settlers in the area, who named the site Teotihuacán, which means “City of the Gods.”
2a. What does this knife suggest about this early culture and their technological and artistic skills? 2b. Look carefully at the construction of the knife. For what do you think it was used? What evidence supports your hypothesis?
Toltecs Take Over
After the fall of Teotihuacán, no single culture dominated central Mexico for centuries. Then, around 900, a new people, the Toltecs, rose to power. For the next three centuries, the Toltecs ruled over the heart of Mexico from their capital at Tula. Like other Mesoamericans, they built pyramids and temples. They also carved tall pillars, as shown on the next slide.
Toltecs Take Over
In fact, the Toltecs were an extremely warlike people whose empire was based on conquest. They worshiped a fierce war god who demanded blood and human sacrifice from his followers.
Toltecs Take Over
Sometime after 1000, a Toltec ruler named Topiltzin tried to change the Toltec religion. He called on the Toltec people to end the practice of human sacrifice. He also encouraged them to worship a different god, Quetzalcoatl, or the Feathered Serpent.
Toltecs Take Over
Followers of the war god rebelled, however, forcing Topiltzin and his followers into exile on the Yucatán Peninsula. There, they greatly influenced late-Mayan culture. After Topiltzin’s exile, Toltec power began to decline. By the early 1200s, their reign over the Valley of Mexico had ended.
The Aztecs
This is the flag of Mexico. The eagle represents an ancient Aztec symbol; it is perched atop a cactus and is eating a snake. An Aztec myth told them to settle where they found an eagle eating a snake. Historians believe the Aztecs migrated from the deserts of northern Mexico into the central valley, where they created their empire.
Aztec Geography
What does this map tell us about Aztec geography? Mayan lands lie below. What was their capital and where was it located?
Diagram of Tenochtitlan (labels: the surrounding area is water; the lines are elevated roadways)
Power and Authority
Through alliances and conquest, the Aztecs created a powerful empire in Mexico.
The Aztec Empire
The Aztecs arrived in the Valley of Mexico around A.D. 1200. The valley contained a number of small city-states that had survived the collapse of Toltec rule. The Aztecs, who were then called the Mexica, were a poor, nomadic people from the harsh deserts of northern Mexico. Fierce and ambitious, they soon adapted to local ways, finding work as soldiers-for-hire to local rulers.
Aztecs Grow Stronger
Over the years, the Aztecs gradually increased in strength and number. In 1428, they joined with two other city-states, Texcoco and Tlacopan, to form the Triple Alliance. This alliance became the leading power in the Valley of Mexico and soon gained control over neighboring regions.
Aztecs Grow Stronger
By the early 1500s, the alliance controlled a vast empire that covered some 80,000 square miles, stretching from central Mexico to the Atlantic and Pacific coasts and south into Oaxaca. This empire was divided into 38 provinces. It had an estimated population of between 5 and 15 million people.
Aztec Power
The Aztecs based their power on military conquest and the tribute they gained from their conquered subjects. The Aztecs generally exercised loose control over the empire, often letting local rulers govern their own regions. (What other civilizations ruled this way?) The Aztecs did demand tribute, however, in the form of gold, maize, cacao beans, cotton, jade, and other products. If local rulers failed to pay tribute, or offered any other kind of resistance, the Aztecs responded brutally. They destroyed the rebellious villages and captured or slaughtered the inhabitants. (Recall how the Romans also did this.)
Aztec warriors: what do the costumes tell us about them? Can you tell anything about their weapons?
Sacrifices for the Sun God: Religion Ruled Aztec Life
The most important rituals involved a sun god, Huitzilopochtli. According to Aztec belief, Huitzilopochtli made the sun rise every day. When the sun set, he had to battle the forces of evil to get to the next day. To make sure that he was strong enough for this ordeal, he needed the nourishment of human blood.
Aztec priests used sharp obsidian blades to cut open victims. Obsidian, a volcanic rock, is like glass. Most sacrifices were captive warriors. Why would they especially want the still-beating heart of the victim?
Human sacrifice is an extreme and rather rare occurrence historically, especially as it allegedly developed on the scale the Aztecs practiced it. Some scholars claim that Spanish observers deliberately exaggerated human sacrifice among the Aztecs as a means of justifying the conquest of Mexico. But there is indeed evidence that human ritual killing was an Aztec trait. Besides religion, what “purposes” might such killing have had?
Problems in the Aztec Empire
In 1502 Montezuma II was crowned emperor. Under Montezuma, the Aztec Empire began to weaken. For nearly a century, the Aztecs had been demanding tribute and sacrificial victims from the provinces under their control. Now, Montezuma called for even more tribute and sacrifice. A number of provinces rose up against Aztec oppression. This began a period of unrest and rebellion, which the military struggled to put down.
Montezuma’s Reign
Over time, Montezuma tried to lessen the pressure on the provinces. For example, he reduced the demand for tribute payment by cutting the number of officials in the Aztec government. But resentment continued to grow. Many Aztecs began to predict that terrible things were about to happen. They saw bad omens in every unusual occurrence, such as lightning striking a temple in Tenochtitlán or a partial eclipse of the sun. Where else have we seen such developments and related responses?
Montezuma’s Reign
The most worrying event, however, was the arrival of the Spanish. For many Aztecs, these fair-skinned, bearded strangers from across the sea brought to mind the legend of the return of Quetzalcoatl.
The Inca Create a Mountain Empire (Chapter 16, Section 4)
Inca Beginnings
• Lived on high plateaus in the Andes
• Settled in the Valley of Cuzco in the 1200s
• Rulers were said to be descended from the sun god Inti and to bring prosperity and greatness
• Only men from one of eleven families, believed to be descendants of the sun god, could serve as ruler
Pachacuti Builds an Empire
• In 1438 Pachacuti took the throne
• The Inca conquered all of Peru
• By 1500 the Inca empire stretched 2,500 miles along the western coast of South America
• The “Land of Four Quarters” held 80 provinces and 16 million people
Pachacuti Builds an Empire
• Used diplomacy and conquest
• Before attacking, offered an honorable surrender
• Conquered peoples kept their customs and rulers in exchange for loyalty
• Many states gave up without resistance
• Once a state was defeated, the Inca tried to gain its loyalty
Incan Government Creates Unity
• Extensive road system
• Rulers divided their territory
• Quechua was the official language
• Founded schools to teach the Incan ways
• Groups were identified by certain patterns of clothing
Incan Cities Show Government Presence
• Built many cities in conquered areas
• Architecture was the same throughout the empire
• All roads led to the capital
• Cuzco had stone homes, with stones fitted together without mortar
Incan Government
• Total control over economic and social life
• Regulated production and distribution of goods
• The Inca allowed little private commerce
• Ayllu: community cooperation
Incan Government
• The ayllu, an extended family group, undertook tasks too big for one family:
– irrigation canals
– cutting agricultural terraces
– storing food to distribute during hard times
• Families were divided into groups of 10, 100, 1,000 and 10,000
Incan Government
• A chief led each group
• The chain of command stretched all the way to Cuzco
• The Inca ruler and a council of state held court
• If a group resisted Inca control, they were relocated
Incan Government
• The main demand was for tribute, usually labor
• The mita was the labor tribute: people had to work for the state a certain number of days
• The Incan system was more like socialism or a modern welfare state
Incan Government
• The aged and disabled were taken care of by the state
• The state fed the people
• Freeze-dried potatoes (chuños) were stored in warehouses for food shortages
Public Works Projects
• A 14,000-mile road program, ranging from paved roads to simple paths
• Guest houses and shelters were built along the roads
• Chasquis (relay runners) traveled the roads as a postal service
• The roads were an easy way to move troops
Government Record-Keeping
• The Inca never developed a writing system
• History and literature were passed down through oral tradition
• The quipu, a series of knotted cords, was used as an accounting system
Government Record-Keeping
• The position of the knots and the colors of the strings meant different things: red strings stood for warriors, yellow strings for gold
• The Inca had two different calendars; gods ruled the days and the times
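Later scholarship has established that quipu knot clusters encoded numbers in a base-10 positional scheme, with the most significant digit nearest the top of the cord. As a hedged illustration of how such an accounting device can record a count (the function and the example cord are hypothetical, not transcriptions of real Inca records):

```python
def quipu_value(knot_clusters):
    # knot_clusters lists the number of knots in each cluster, read from
    # the top of the cord (most significant decimal place) down to the
    # bottom (ones place); an empty stretch of cord counts as zero.
    value = 0
    for knots in knot_clusters:
        value = value * 10 + knots
    return value

# A cord with a 3-knot cluster, an empty stretch, then a 6-knot cluster
# would encode the number 306.
print(quipu_value([3, 0, 6]))  # 306
```

Read this way, a bundle of knotted cords could tally warriors, gold, or stores without any writing system.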
Religion Supports the State
• Worshipped fewer gods, focusing on key nature spirits: the moon, the stars, thunder
• Viracocha was the creator
• Sun worship amounted to worshipping the king
Great Cities
• The Temple of the Sun in Cuzco was the most sacred, decorated in gold
• Gardens held plants and animals made out of gold and silver
• The walls of several buildings were covered in gold
Great Cities
• Hiram Bingham found Machu Picchu in 1912
• Isolated and mysterious
• A religious center and a retreat for rulers such as Pachacuti
Discord in the Empire
• In the 1500s Huayna Capac ruled, with the Inca at their peak
• He received a gift in Ecuador filled with butterflies and moths (a bad omen)
• A few weeks later he died of smallpox
Discord in the Empire
• The empire was split between his sons: Atahualpa received Ecuador, Huascar the rest
• Soon Atahualpa claimed the whole empire
• The brothers fought each other and tore the empire apart
Gentile Bellini, Venetian School. Son of Jacopo Bellini and elder brother of Giovanni Bellini. Sent to Constantinople by the Venetians in 1479 at the request of the Sultan Mehmet II for a distinguished portrait painter.
LONDON, NATIONAL GALLERY
PORTRAIT OF MEHMET II. A bust of the Sultan nearly in profile to left, against a black background, in an arched opening with a rich carpet falling over the sill. He has a long curved nose, moustache, and pointed beard. He wears a large white turban with red crown, and a red cloak with a broad collar of brown fur, which has been restored.
“As a portrait, this injured piece is still of extraordinary interest; and whilst it presents to us the lineaments of the wiliest of Orientals, it charms us by the wondrous finish of the parts which have resisted the ravages of time.” (C, and C.)
Dated November 25, 1480. Layard Bequest, 1917.
THE PREACHING OF ST. MARK. The foreground of a large square, flanked on either side by plain square buildings, and backed by the façade of a magnificent church, is filled with people forming three principal groups. On the left is a crowd standing behind St. Mark, who is preaching from the top of a small stone bridge. In the centre about twenty women in white Oriental costume are seated on a carpet; and on the right a number of men are standing, several of them with large turbans. The scene is supposed to represent Alexandria, and though the church is reminiscent of St. Mark’s, it is given an Oriental air by the introduction of an obelisk, a palm-tree, and a giraffe.
This great picture was originally in the School of San Marco. It was left unfinished at Gentile’s death in 1507, and completed by his brother Giovanni.
“We see in this piece the final creation of the elder and the mature labour of the younger brother. . . The canvas has lost most of its value from abrasion and repainting, yet amidst the ruin we still perceive that the art of Gentile on the eve of his death was better than it had ever been before.” (C. and C.)
THE PROCESSION OF THE HOLY CROSS. “The scene is laid in the Piazza of San Marco, Venice, with the Doge’s palace on the right and the Colonnade on the left. The procession issued from the portal between San Marco and the palace, and gravely proceeding up the Piazza, has turned at right angles to the left; so that, whilst the van, headed by brethren of the school, has been formed into a deep array on the shady side, the middle of the foreground is occupied by the baldequin covering the shrine of the relic, with its white-clad bearers and satellites holding tapers; and on the sunny side the deputation with their flags and maces, the clergy, and the Doge with the umbrella advance in solemn state. Near the shrine kneels the merchant de Salis, whose son was healed by his father’s vow to the Cross. Within the rectangle of the procession, animated groups of spectators and single figures are disposed with much felicity, affording lively illustration of the costume of the period. There is no doubt that this is the most important extant work of the Venetian School previous to the advent of Titian.” (C. and C.) Painted in 1496 for the School of San Giovanni Evangelista, Venice.
THE MIRACLE OF THE HOLY CROSS. The foreground, beyond a narrow ledge of planks across the foot of the picture, is a canal, flanked on either side by high houses, and disappearing through the round arch of a curved foot-bridge which cuts across the middle distance. The left bank of the canal is thronged with people, who watch St. Vendramin rescuing the cross from the water, while two or three boats are being put off to assist him. The procession halts on the bridge, with a tall banner in the centre. In the right foreground are five kneeling figures larger than the rest.
Painted in 1500 for the School of San Giovanni Evangelista, Venice.
The production of normal sperm in the semen is needed for pregnancy and normal fertility. The other function of the testes is the production of testosterone and other male hormones. So in many patients with no sperm in the semen, the testes may nonetheless still produce sperm, and male hormone levels remain normal.
The Causes of Azoospermia (Nil Sperm Count)
The various causes of a nil sperm count are as follows:
1. Blockage in the flow of semen (sperm) from the testicles to the outside through the urethral opening.
2. Deficiency of the pituitary hormones LH, FSH and prolactin, or of thyroid hormone, causes azoospermia.
3. Maturation arrest, in which sperm development halts at the stage of primary spermatocytes, secondary spermatocytes or spermatids before mature sperm form, also leads to azoospermia.
4. Diseases of the testes (primary dysfunction of Leydig cells), chromosomal disorders (Klinefelter syndrome and its variants, XX male, gonadal dysgenesis), defects in androgen biosynthesis, and orchitis (mumps, HIV, other viruses).
5. Varicocele (grade 3 or worse): a varicocele is a varicose vein in the cord that connects to the testicle. Trauma also leads to male infertility.
6. Granulomatous disease of the testicles, such as tuberculosis or sarcoidosis.
7. Neurological diseases such as myotonic dystrophy produce azoospermia and male infertility.
8. Developmental and structural defects: germ cell aplasia, Sertoli-cell-only syndrome, androgen resistance.
9. Mycoplasma infection also causes azoospermia and male infertility.
10. Defects associated with systemic disease: liver disease, kidney failure, sickle cell anemia and celiac disease usually lead to azoospermia and male infertility.
The above are the main causes leading to azoospermia and male infertility.
Diagnosing the Cause of Azoospermia (Nil Sperm Count)
To properly diagnose the cause of a zero sperm count and to cure azoospermia, a detailed history and physical examination are necessary.
History and physical examination: the first step in the proper treatment and cure of azoospermia and male infertility is an accurate diagnosis of the cause of the nil sperm count.
Azoospermia Research and Diagnosis:
To complete the diagnosis of the causes of azoospermia (nil sperm count) or male infertility, one or more of the following tests may be needed:
1. Male hormone profile: this includes tests of all the male hormones that affect testicular development, the growth and development of the other reproductive organs, and genital function: LH, FSH, testosterone, prolactin and thyroid tests.
2. Karyotype analysis (chromosome study).
3. Molecular genetic studies, done in some special cases.
4. Antisperm antibody testing.
5. Ultrasound or Doppler study of the scrotum and testes.
6. Semen culture and sensitivity, to determine the cause of male infertility.
7. Semen fructose testing.
8. Genetic studies.
9. FNAC (fine-needle aspiration cytology) of the testes.
10. Egg penetration test.
11. Evaluation of androgen receptors.
12. Combined pituitary hormone tests, done when necessary.
13. Immunobead test.
14. MRI of the head, CBC, and testing for systemic disease.
15. Olfactory (smell) test, done to detect Kallmann syndrome.
1. Homeopathic medicine cures azoospermia hormone-free.
2. Homeopathy is an effective medicine to cure 95% of sperm abnormalities (azoospermia, oligospermia, low sperm count, low motility, low sperm numbers and abnormal sperm morphology) by correcting spermatogenesis.
3. Homeopathy makes the fastest progress among all treatments for male infertility and azoospermia, with the sperm count increasing fourfold with each month of treatment until count and motility are optimal.
4. Homeopathic treatment is free of side effects in the treatment of male infertility or azoospermia.
5. During homeopathic treatment for azoospermia and male infertility there are no dietary restrictions. The only restriction is not to take male hormones, as the male hormone testosterone can block the good effect of this treatment. Therefore, the patient should avoid taking male hormones for at least a month before starting this treatment.
6. Because homeopathy progresses faster than other treatments, the treatment duration is only 4 to 6 months to cure azoospermia and male infertility.
7. After taking homeopathic medicines there will be a gradual improvement, and the cure of azoospermia lasts. Male infertility from azoospermia can be controlled and the sperm count remains normal for 8 to 10 years or more after completion of treatment, whereas with hormonal treatment the count falls once the patient stops taking hormones.
Raised Beds Versus Rows
To choose the best method of setting up a home garden, consider the type of soil in the garden plot. Native top soils in the west can range from light, sandy soils to heavier clays, or to adobe types that dry like concrete. These soils are commonly found in new housing developments, where all the topsoil often has been removed, leaving only the clay subsoil.
If the soil falls somewhere between a loose, sandy soil and a rich, deep loam soil, planting a garden in rows can be simple, inexpensive and quick. Water row gardens by flood irrigation in furrows, or use sprinklers or drip irrigation.
To improve any soil for planting, mix organic material into the soil to a depth of one and one-half feet. This process can quickly improve drainage, encourage plant roots to grow deeper and improve soil aeration. Organic material will hold moisture and, as it is broken down in the soil, release nitrogen and help the beneficial organisms that live in soil.
For heavy, clay soils, or soils with poor drainage, raised beds are the answer. Raised beds save space, drain faster, heat up earlier in the spring, and save water by keeping it where the plants are growing. Also, because gardeners walk around raised beds rather than on the soil, the soils are kept loose.
Raised beds offer other advantages. They are more comfortable to work on than row plantings and can be designed to be accessible from a wheelchair. Raised beds can offer a solution to gardeners with small yards and limited spaces.
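Working soil or filling a raised bed to the depth discussed above implies a definite volume of material, which is worth estimating before buying amendment. A small sketch of the arithmetic (the 8 ft by 4 ft bed is an example size I chose, not a recommendation from the fact sheets):

```python
def worked_soil_volume(length_ft, width_ft, depth_ft=1.5):
    # Returns the volume of soil worked, in cubic feet and cubic yards.
    # Bulk soil and compost are usually sold by the cubic yard
    # (27 cubic feet per cubic yard).
    cubic_feet = length_ft * width_ft * depth_ft
    return cubic_feet, cubic_feet / 27

cu_ft, cu_yd = worked_soil_volume(8, 4)
print(cu_ft, "cubic feet, about", round(cu_yd, 1), "cubic yards")
```

So an 8-by-4-foot bed worked to one and one-half feet involves 48 cubic feet of soil, a little under two cubic yards.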
For more information, see the following Colorado State University Extension fact sheet(s).
- Vegetable garden: Soil Management and Fertilization
- Choosing a Soil Amendment
- Perennial Gardening
- Irrigation Water Quality Criteria
Do you have a question? Try Ask an Expert!
Updated Monday, February 22, 2016
Race to the Top grants would boost math, science teaching
During the presidential campaign, Barack Obama said his administration "would recruit math and science degree graduates to the teaching profession and will support efforts to help these teachers learn from professionals in the field. They will also work to ensure that all children have access to a strong science curriculum at all grade levels."
Obama established the $4.3 billion Race to the Top program in February when he signed the economic stimulus package. Race to the Top is a competitive grant program that is "designed to encourage and reward states that are implementing significant education reforms."
The Race to the Top program addresses both parts of Obama's promise.
On recruiting math and science teachers, the program judges applications in part based on the state's record in "providing alternative pathways for aspiring teachers and principals ... particularly routes that allow for providers in addition to institutions of higher education ... and the extent to which these routes are in use." These alternative pathways can serve as a way for math and science graduates to enter the teaching profession without having earned specific degrees in education.
As for providing access to "a strong science curriculum at all levels," the program will look favorably on applications that emphasize science, technology, engineering and mathematics, or "STEM."
As the program guidelines put it, states should "describe plans to address the need to (i) offer a rigorous course of study in mathematics, sciences, technology, and engineering; (ii) cooperate with industry experts, museums, universities, research centers, or other STEM-capable community partners to prepare and assist teachers in integrating STEM content across grades and disciplines, in promoting effective and relevant instruction, and in offering applied learning opportunities for students; and (iii) prepare more students for advanced study and careers in the sciences, technology, engineering, and mathematics, including addressing the needs of underrepresented groups and of women and girls in the areas of science, technology, engineering and mathematics."
States may apply for Race to the Top grants as early as late 2009, though applications will also be accepted in a second round beginning in the spring of 2010. So while the grant program is only beginning to be implemented, the administration has set in motion the framework for this promise to be carried out. We rate it a Promise Kept.
of the Race to the Top program, July 29, 2009
Education Week, "'Race to the Top' Guidelines Stress Use of Test Data," July 23, 2009
Socrates, a Greek philosopher, once said: "Each one must know himself." Unfortunately, most of us are not aware of our true character. Social conventions are the main cause making us repress what we really think and feel. Only when unexpected events happen do we have an opportunity to take a close look at our hidden "self." "The Story of an Hour" by Kate Chopin reflects the dramatic development of Mrs. Mallard's character through the death of her husband; it demonstrates that one's true identity cannot be sheltered forever.
At the beginning of the story, the author describes Mrs. Mallard as a woman having the distinctive trait of self-assertion, which is constrained by her marriage. She seems to be the "victim" of an overbearing but occasionally loving husband. Being told of her husband's death, "She did not hear the story as many women have heard the same, with a paralyzed inability to accept its significance."
This shows that she is not totally locked into marriage as most women of her time were. Although "she had loved him--sometimes," she unconsciously does not want to blindly accept the situation of being controlled by her husband. Mrs. Mallard is not a one-dimensional, clone-like woman having an expected, acceptable emotional response for every life condition.
Mrs. Mallard's rather uncommon reaction to the news of Mr. Brently Mallard's death logically foreshadows the complete revelation of her suppressed longing for freedom. Being alone in her room "When the storm of grief" is over, she experiences "something coming to her and she was waiting for it, fearfully. What was it? She did not know; it was too subtle and elusive to name." Finally, she recognizes the freedom she has desired for a long time and it overcomes her sorrow: "Free! Body and soul free! She kept whispering."...
Numbers of Italy
The Italian nation is made up of fifteen ordinary regions and five autonomous regions, each one unique in its own way. The Italian peninsula is also home to two microstates, Vatican City and San Marino.
Slowly, in a process known as the Risorgimento, these regions became incorporated into what would become the modern nation of Italy in 1861. The new nation adopted a tri-color flag of red, white and green based upon the French national flag, brought by Napoleon in 1797. Since the end of World War II Italy has been a republic known for its volatile nature - the nation has seen nearly 60 different governments dissolve in as many years. The Italian government has offices for both a President and Prime Minister, with the PM being the head of government. The voting age in Italy is 18, but the voting age for Senatorial elections is 25.
Today, over 58 million people call Italy home and it is the world's fifth largest economy. Twenty percent of the Italian population is over 65 years old. The average family in Italy has 1.27 children, and Italian life expectancy is 79.54 years. The average Italian worker can expect to make about $26,700 per year, with salaries almost double in Northern Italy. The Italian unemployment rate is roughly 8% but can be as high as 20% in the south, where farms outnumber factories. About 90 percent of Italians consider themselves Roman Catholic; however, there are Protestant and Jewish communities as well as a growing Muslim population. Besides people of ethnic Italian ancestry, Italy also has populations of German, French and Slovene Italians in the north and a growing minority of people from Albania and the former Yugoslavia. These new arrivals, as well as the French- and German-speaking enclaves on the northern borders, add even more color to an already colorful population.
Italy: History and Culture
The Italian peninsula has been occupied since the Neolithic era and in ancient times was home to numerous cultures such as the Latins, Samnites, Greek immigrants and the mysterious Etruscans. The ancient city of Rome was founded in 753 BC by Romulus and became the Roman Republic after the rule of seven kings. Rome became the greatest city in the world and the heart of its mighty empire, which would eventually collapse in 476 AD. With the loss of the Emperor, the Pope became a major figure in European politics and Italy became a hotly contested area, fought over by local warlords, the Papacy, Arab invaders and both the Byzantine and Holy Roman Emperors.
Medieval Italy saw nearly continuous power struggles and warfare, with powerful ruling families and city-states vying for supremacy. The richest and most powerful of these rulers such as the Medici family and the Vatican began commissioning great works of art, leading to the Renaissance. Also during this time Italian merchants brought to Europe exotic goods from the Near East, which led to modern banking as well as sailing knowledge that would help start the Age of Discovery. Famous Italian sailors and explorers include Marco Polo, Christopher Columbus, John Cabot (Giovanni Caboto), Amerigo Vespucci and Giovanni da Verrazano.
From the 16th to the late 19th centuries Italy was ruled over by various foreign powers, including the French and Spanish Bourbon Dynasties. Even the Republic of Venice, a powerhouse of trade and once master of the seas, had been dissolved with the arrival of Napoleon. It was not until 1861 that the Kingdom of Italy was formed under Victor Emmanuel II of the House of Savoy with the help of Count Cavour and Giuseppe Garibaldi. However, it would take several more years before the entire peninsula would be unified as the Kingdom of Italy. By the 1920s the King was powerless in the face of the Fascist dictatorship of Benito Mussolini, and with the regime's defeat in WWII, Italy dissolved the monarchy. In 1946 the Italian Republic was formed and two years later Italy adopted its new constitution.
During its tumultuous history, Italians continued to contribute numerous innovations in virtually every field. The modern Italian language, based upon the Latin of Ancient Rome, was developed in the Tuscany region and has given great works ranging from The Divine Comedy to The Prince to Pinocchio. Italian innovators span the breadth of time, from Archimedes of Syracuse, to the incomparable Leonardo da Vinci, to Guglielmo Marconi and Enrico Fermi. Famous Italian inventions include the thermometer, barometer, piano, electric battery, nitroglycerin, eyeglasses, wireless telegraphy and arguably the telephone.
Italy: The Food
Major Italian crops include staples like wheat, rice and corn as well as grapes, potatoes, soy
The city of Naples gave birth to the modern Pizza, being the first to use tomatoes and Mozzarella di Bufala. The same area gave the world the first pasta served with tomato sauce. The Bay of Naples is home to rich volcanic soil thanks to Mt. Vesuvius, giving the famous San Marzano tomatoes their unmatchable flavor. To the outsider, Italian food may seem limited to pasta and pizza, but nothing could be further from the truth. Like its people, the food of Italy varies from region to region.
The food of Northern Italy is characterized by the use of more butter or lard than olive oil, with polenta and risotto being more common than pasta. When pasta is served, butter, cream or pesto sauces are used more often than tomato-based ones. Meat dishes feature more prominently in northern and central Italian cooking, but seafood is tremendously popular nationwide. Southern Italian food is better known outside of Italy and includes the famous Neapolitan pizza as well as numerous varieties of pasta served with a rich tomato sauce. The South is also known for the almost exclusive use of extra virgin olive oil and the unique use of herbs and spices such as cinnamon, nutmeg, wild fennel and mint.
Italian desserts can take the form of uncountable varieties of cookies and biscotti, fresh fruit tarts, and pastries filled with zabaglione custard, mascarpone or ricotta sweetened with sugar. Of course there is always gelato, Italian ice cream bursting with flavor, and granita, Italian ice. Even with all these tempting goodies, and of course excellent Italian chocolate, it seems most Italians are content with a simple but delicious piece of fresh fruit.
By Justin Demetri
http://www.lifeinitaly.com/culture/italy.asp
Panchang is a spiritual and scientific Hindu calendar. It is an ancient tool of Vedic astrology used to determine the most auspicious days and times. The word Panchang is derived from two words: panch (meaning five) and ang (meaning aspect). The panchang measures time in lunar months whose names reveal the path of the stars and constellations, and it lists four weeks of seven days, identified with planets and gods. It therefore takes into account five aspects: Din (Vara), the solar day; Tithi, the lunar day; Nakshatra, the constellation; Yoga; and Karan.
It is considered far more accurate than conventional horoscopes and has many practical uses, such as identifying the best days and times for travel, love, parties, moving, interviews, investments and dental visits. To find an auspicious time to start anything new, it considers the day of the week (Vara), the Tithi, the Nakshatra (star), the Yoga and the Karana of the day, and the ending moments of all of these, to determine whether the day is Amurtha, Siddha or Shubha. The Panchang has always been used as the spiritual expression of time for Hindus and a guide to a life close to God and religion.
Although knowledge of arithmetical calculation is essential to understand Indian astrology, astrologers have devised a calendar (Panchang) for the benefit of the common people; with its help, and with simple arithmetical calculations, one can learn which planets are favourable or unfavourable.
It is not necessary for a common man to be an astrologer in order to understand the Panchang. But for a smooth and systematic running of life, one should know how to interpret the 'Phalita'.
Panchang refers to the five parts ('angas') used to understand the Phalita. These five are:
Tithi (Lunar day)
Nakshatra (Group of stars)
Yoga (an auspicious moment)
Karan (Half of a Tithi)
Vaar (days of the week)
In the Hindu method of calculation, although the 365¼ days of the earth's revolution round the sun are recognised, calculations are done according to the revolution of the moon round the earth; the resulting lunar year falls short of the solar year by approximately 11 days.
The panchang, based on the lunar calendar, which also has 12 months in a year, comes level with the solar ('Ayana') calculation by adding an extra month (known as Loonth or Purshottam Maas) after every three years.
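The intercalation arithmetic can be sketched directly. The day counts below are rounded astronomical values assumed for illustration, not figures taken from the text:

```python
# Approximate day counts (rounded astronomical values, assumed for illustration).
SOLAR_YEAR = 365.25   # days in a solar year
LUNAR_YEAR = 354.37   # twelve synodic months of ~29.53 days each
LUNAR_MONTH = 29.53   # days in one synodic (lunar) month

def years_until_extra_month():
    """Years of drift before the lunar shortfall adds up to one whole month."""
    shortfall_per_year = SOLAR_YEAR - LUNAR_YEAR  # ~10.9 days per year
    return LUNAR_MONTH / shortfall_per_year

print(f"Lunar year falls short by {SOLAR_YEAR - LUNAR_YEAR:.1f} days per year")
print(f"An intercalary month is due roughly every {years_until_extra_month():.1f} years")
```

With these figures the extra month falls due roughly every 2.7 years, which matches the calendar's rule of thumb of adding a month every three years.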
The time between two consecutive risings of the sun is the solar day, or Din (according to the Gregorian calendar, the day begins at midnight). According to the Indian calendar, the time between two consecutive risings of the moon is taken as the lunar day, or Tithi.
The new moon is called Amavasya, and it ushers in the new month. The first fortnight of the month, as the moon waxes toward full, is known as Shuklapaksha, "the bright half"; the dark fortnight is called Krishnapaksha, during which the moon wanes.
http://www.vedicprediction.com/panchang.html
Hybrid cars, powered by a mixture of gas and electricity, have become a practical way to "go green" on the roads. Now researchers at Tel Aviv University are applying the term "hybrid" to power plants as well.
Most power plants, explains Prof. Avi Kribus of TAU's School of Mechanical Engineering and its innovative new Renewable Energy Center, create power using fuel. And solar thermal power plants, which use high temperatures and pressure generated by sunlight to produce turbine movement, are currently the industry's environmentally friendly alternative. But it is an expensive option, especially when it comes to equipment made from expensive metals and the high-accuracy solar concentrator technology used to harvest solar energy.
Now, a new technology Prof. Kribus has developed combines the use of conventional fuel with the lower pressures and temperatures of steam produced by solar power, allowing plants to be hybrid, replacing 25 to 50 percent of their fuel use with green energy. His method, which will be reported in a future issue of the Solar Energy Journal, presents a potentially cost-effective and realistic way to integrate solar technology into today\’s power plants.
Taking down the temperature for savings
In a solar thermal power plant, sunlight is harvested to create hot high-pressure steam, approximately 400 to 500 degrees centigrade. This solar-produced steam is then used to rotate the turbines that generate electricity.
Though the environmental benefits over traditional power plants are undeniable, Prof. Kribus cautions that it is somewhat unrealistic economically for the current industry. "It's complex solar technology," he explains. The materials alone, which include pipes made from expensive metals designed to handle high pressures and temperatures, as well as fields of large mirrors needed to harvest and concentrate enough light, make the venture too costly to be widely implemented.
Instead, with his graduate student Maya Livshits, Prof. Kribus is developing an alternative technology, called a steam-injection gas turbine. "We combine a gas turbine, which works on hot air and not steam, and inject the solar-produced steam into the process," he explains. "We still need to burn fuel to heat the air, but we add steam from low-temperature solar energy, approximately 200 degrees centigrade." This hybrid cycle is not only highly efficient in terms of energy production, but the lowered pressure and heat requirements allow the solar part of the technology to use more cost-effective materials, such as common metals and low-cost solar collectors.
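As a rough illustration of what the 25 to 50 percent fuel replacement means in practice, the sketch below computes the fuel displaced for a hypothetical plant; the annual fuel figure is invented for the example, not taken from the article:

```python
def fuel_saved(annual_fuel_use, solar_fraction):
    """Fuel displaced when a given fraction of heat input comes from solar steam."""
    if not 0.0 <= solar_fraction <= 1.0:
        raise ValueError("solar_fraction must be between 0 and 1")
    return annual_fuel_use * solar_fraction

# Hypothetical plant burning 100,000 tonnes of fuel per year:
for fraction in (0.25, 0.50):
    saved = fuel_saved(100_000, fraction)
    print(f"{fraction:.0%} solar fraction displaces {saved:,.0f} tonnes of fuel per year")
```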
A bridge to green energy
The hybrid fuel and solar power system may not be entirely green, says Prof. Kribus, but it does offer a more realistic option for the short and medium term. Electricity from solar thermal power plants currently costs twice as much as electricity from traditional power plants, he notes. If this doesn\’t change, the technology may never be widely adopted. The researchers hope that a hybrid plant will have a comparable cost to a fuel-based power plant, making the option of replacing a large fraction of fuel with solar energy competitive and viable.
The researchers are starting a collaboration with a university in India to develop this method in more detail, and are looking for corporate partnerships that are willing to put hybrid technology into use. It's a stepping stone that will help introduce solar energy into the industry in an accessible and affordable way, Prof. Kribus says.
http://world.edu/hybrid-power-plants-industry-green/
Where a family has no person falling into either of these categories, the family head is generally defined to be the eldest person in the family.
No family head is determined for a couple family.
In the LFS families datacubes, the categories for family type are:
1 Couple family
1.1 Couple family with dependants
1.1.1 Couple family with children under 15
1.1.2 Couple family without children under 15, but with dependent students
1.2 Couple family without dependants
1.2.1 Couple family without dependants, but with children 15 years or older
1.2.2 Couple family without children
2 Lone parent family
2.1 Lone parent family with dependants
2.1.1 Lone parent family with children under 15
2.1.2 Lone parent family without children under 15, but with dependent students
2.2 Lone parent family without dependants
3 Other families
Harmonic mean
A method of calculating an average by dividing the number of observations by the sum of the reciprocals of each observed value. Under the current families estimation method, the harmonic mean is used to calculate the family weights from the person weights. For example, if a family consists of three people, and their person weights are 100, 200 and 300, the harmonic mean will be:
3 (the number of people in the family) divided by [1/100 + 1/200 + 1/300] = 164
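The worked example above can be reproduced directly; rounding the result to the nearest whole number is an assumption made here to match the glossary's figure:

```python
def family_weight(person_weights):
    """Harmonic mean: number of observations over the sum of their reciprocals."""
    if not person_weights:
        raise ValueError("at least one person weight is required")
    return len(person_weights) / sum(1.0 / w for w in person_weights)

weights = [100, 200, 300]            # person weights from the glossary example
print(round(family_weight(weights)))  # the glossary rounds the result to 164
```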
Household
A group of one or more persons in a private dwelling who consider themselves to be separate from other persons (if any) in the dwelling, and who make regular provision to take meals separately from other persons, i.e. at different times or in different rooms. Lodgers who receive accommodation but not meals are treated as separate households.
Boarders who receive both accommodation and meals are not treated as separate households. A household may consist of any number of families and non-family members.
Non Private dwelling
An establishment which provides a communal type of accommodation, such as a hotel, motel, hospital or other institution.
Private dwelling
A residential structure which is self-contained, owned or rented by the occupants, and intended solely for residential use. A private dwelling may be a flat, part of a house, or even a room, but can also be a house attached to, or rooms above, shops or offices.
Relationship in household
The relationship of all persons usually resident in a household to the household reference person. Where the relationship to the household reference person is other than a couple relationship or a parent-child relationship, a closer relationship to another household member is recorded, if one exists.
Sampling error occurs because a sample, rather than the entire population, is surveyed. One measure of the likely difference resulting from not including all dwellings in the survey is given by the standard error. There are about two chances in three that a sample estimate will differ by less than one standard error from the figure that would have been obtained if all dwellings had been included in the survey, and about nineteen chances in twenty that the difference will be less than two standard errors.
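The "two chances in three" and "nineteen chances in twenty" statements correspond to one- and two-standard-error intervals around an estimate. A minimal sketch, using a made-up estimate and standard error rather than survey figures:

```python
def confidence_interval(estimate, standard_error, n_se=1):
    """Interval of n_se standard errors either side of a sample estimate."""
    margin = n_se * standard_error
    return (estimate - margin, estimate + margin)

estimate, se = 5_000, 120  # hypothetical survey estimate and its standard error
print("about two chances in three:", confidence_interval(estimate, se, 1))
print("about nineteen chances in twenty:", confidence_interval(estimate, se, 2))
```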
Rules applied in household surveys to ensure that each person is associated with only one dwelling, and hence has only one chance of selection.
Unit record data
Data at the finest level of detail. For LFS, the finest level of detail is the person level. For confidentiality reasons, data are aggregated for output purposes.
Usual resident
A person who usually lives in that particular dwelling and regards it as their own or main home.
Weight
Factors applied to sample responses to expand them to produce population estimates.
http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/6224.0.55.002Glossary82008?OpenDocument
Transient Ischemic Attack (TIA)
A transient ischemic attack (TIA) happens when blood flow to part of the brain is stopped for a short time. It's also called a mini-stroke because the symptoms are like those of a stroke but they don't last long or cause lasting damage.
A TIA is a warning that you may have a stroke in the future. Early treatment can help prevent a stroke.
eMedicineHealth Medical Reference from Healthwise
To learn more visit Healthwise.org
http://www.emedicinehealth.com/script/main/art.asp?articlekey=134283&ref=127470
1. A structure of open latticework, especially one used as a support for vines and other climbing plants.
2. An arbor or arch made of latticework.
tr.v. trel·lised, trel·lis·ing, trel·lis·es
1. To provide (an area) with a trellis.
2. To cause or allow (a vine, for example) to grow on a trellis.
[Middle English trelis, from Old French, from Vulgar Latin *trilīcius, from Latin trilīx, trilīc-, woven with three threads : tri-, tri- + līcium, thread.]
1. (Horticulture) a structure or pattern of latticework, esp one used to support climbing plants
2. an arch made of latticework
3. to interweave (strips of wood, etc) to make a trellis
4. to provide or support with a trellis
[C14: from Old French treliz fabric of open texture, from Late Latin trilīcius woven with three threads, from Latin tri- + līcium thread]
1. a frame or structure of latticework; lattice.
2. such a framework used as a support for growing vines or plants.
3. a summerhouse, arch, etc., made chiefly or completely of latticework.
4. something with interwoven or interconnected parts suggesting a latticework.v.t.
5. to furnish with a trellis.
6. to enclose in a trellis.
7. to train or support on a trellis.
8. to form into or like a trellis.
[1350–1400; Middle English trelis < Middle French (n.) < Late Latin trilīcius (for Latin trilīx) woven with three threads = Latin tri- tri- + -līcius, adj. derivative of līcium thread]
Past participle: trellised
Noun 1. trellis - latticework used to support climbing plants
espalier - a trellis on which an ornamental shrub or fruit tree is trained to grow flat
Verb 1. trellis - train on a trellis, as of a vine
train - cause to grow in a certain way by tying and pruning it; "train the vine"
http://www.thefreedictionary.com/trellis
Unique PTSD Treatment for Children
Positive initial results for USF professor’s innovative treatment for children with PTSD used at Crisis Center of Tampa Bay.
USF Assistant Professor Alison Salloum is conducting research on an innovative approach to treating children suffering from trauma. Photo: Aimee Blodgett | USF News
TAMPA, Fla. (July 2, 2013) – Trauma in children’s lives is all too prevalent. But the number of therapists is limited as are families’ financial resources for extensive therapy in many cases.
Her approach is being used in a research study now being conducted at the Crisis Center of Tampa Bay and it is showing positive initial results.
Children’s traumas range from serious illnesses or accidents to sexual or physical abuse, domestic violence, death of someone close or disasters. Post-traumatic symptoms include uncharacteristic irritability, anger or temper tantrums, difficulty sleeping or nightmares, aggressive behavior and changes in personality.
Working with the Crisis Center, Salloum has used her Stepped Care Trauma-focused Cognitive Behavior Therapy (TF-CBT) with children, ages three to seven, and their families to study its efficacy over a six- to eight-week period.
In this novel trauma treatment program, all children in the study are under Salloum's supervision, working with a team of highly trained and skilled therapists and a written program guide. Parents and guardians are taught how to help a child who has experienced serious trauma to feel safe again, overcome anxiety and other emotional problems, and reclaim his or her childhood. They provide the therapy to the children at home, which limits office visits and can potentially save time and scarce funds.
“Empowering parents to help children cope with the impact of a traumatic event is our goal. Through research and evidence-based practices, we are teaching parents ways to help their children,” said Salloum, who is on the faculty of the USF School of Social Work in the College of Behavioral & Community Sciences.
The Phase 1 results showed that 83 percent of the children who completed treatment responded positively to the program implemented at the Crisis Center.
This is considered step one. A “step up” to more traditional therapist-directed in-office treatment may be necessary for some children.
Following receipt of a grant from the National Institute of Mental Health, Salloum began conducting her pilot study focusing on how well the treatment was working – immediately afterwards and three months later. She also looked at how well parents accepted the approach and the economic cost.
The therapy is provided free of charge, and participants are compensated for completing assessments and committing to the program. It is still possible for families to take part in the study. Interested individuals can learn more by calling (813) 264-9955 or by visiting http://www.crisiscenter.com/files/dsp/Stepped%20Care%20for%20Young%20Children.pdf.
“Our first priority at the Crisis Center,” said President and CEO David Braughton, “is to ensure that our clients get the help they need to make tomorrow a better day. This joint research project with Dr. Salloum and USF aims to provide children and their families a highly effective, low cost alternative to traditional counseling when dealing with the aftermath of serious trauma. What’s equally important is that parents and children report that their relationship also improves after going through the program.”
Salloum points out that ignoring the immediate and long-term effects of trauma can be problematic.
“Left untreated, many of these children may suffer for a lifetime. We want to change that and make sure children are getting the help they need.”
About Crisis Center of Tampa Bay
The Crisis Center of Tampa Bay brings help, hope and healing to people facing serious life challenges or trauma resulting from sexual assault or abuse, domestic violence, financial distress, substance abuse, medical emergency, suicidal thoughts, emotional or situational problems. Services include free crisis counseling, suicide prevention and support, educational programs, specialized trauma counseling and therapy, case management and financial counseling, and TransCare Medical Transportation Services available 24 hours a day. For more information on Crisis Center, please visit www.crisiscenter.com. The project is supported by Award Number R34MH092373 from the National Institute Of Mental Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Mental Health or the National Institutes of Health.
Barbara Melendez can be reached at 813-974-4563.
http://news.usf.edu/article/templates/?a=5524&z=210
Novel Aspects of Insect-Plant Interactions
Edited by Pedro Barbosa and Deborah Letourneau
This volume represents the forefront of two rapidly advancing areas of ecology: three-trophic-level interactions and the interdisciplinary field of chemical ecology. The book focuses on the role of microorganisms as mediators of interactions between insects and plants, providing critical appraisal of studies and suggesting ways to integrate competing hypotheses of insect-plant dynamics. 1988 (0-471-83276-6) 362 pp.

Arthropod Biological Control Agents and Pesticides
Brian A. Croft
Examining the effects of pesticides on predators and parasites and exploring methods for reducing negative impacts of pesticide use, this book focuses on the interaction of pesticides with entomophagous arthropods. It surveys the history of research in the field and discusses susceptibility assessment; lethal, sublethal, and ecological effects of pesticides; and selectivity, resistance, and resistance management. 1990 (0-471-81975-1) 723 pp.

Lepidopteran Anatomy
John Eaton
This single-source treatment on the anatomy of Lepidoptera provides a detailed exposition of its anatomy plus all its life stages, including the larval and adult forms of the exoskeleton, musculature, organ systems, and specialized structures. As the only thorough examination of the morphology of this insect group, it is an essential acquisition for entomologists, morphologists, and insect physiologists. 1988 (1-05862-9) 257 pp.

Integrated Pest Management Systems and Cotton Production
Edited by Ray Frisbie, Kamal El-Zik, and L. Ted Wilson
The most complete and authoritative work available on the subject, this book brings together information on integrated pest management strategies that are applicable to cotton. It addresses economic, agronomic, and biological factors of pest management and focuses on plant resistance to pests and the genetic rationale for improving plant health. 1989 (0-471-81782-1) 437 pp.
Biology of Grasshoppers, 1st edition, by Anthony Joern. Published by Wiley-Interscience.
http://www.chegg.com/textbooks/biology-of-grasshoppers-1st-edition-9780471609018-0471609013?ii=7&trackid=8942cb9d&omre_ir=1&omre_sp=
LONDON, England (CNN) -- Treating flu-stricken children with anti-viral medication including Tamiflu and Relenza could do more harm than good, a new report has warned.
Researchers say not enough study has been done into the long-term effects of anti-virals on children.
Researchers from the University of Oxford found that while the anti-virals reduced the duration of illness by up to a day and a half, they had "little or no effect" on the likelihood of the children developing complications.
The researchers conceded that they didn't know the extent to which their report applied to the current swine flu pandemic, but said, "based on current evidence, the effects of anti-virals on reducing the course of illness or preventing complications might be limited."
In compiling their report, published in the British Medical Journal, the Oxford University researchers searched the world for trials of Tamiflu and Relenza on children under 12. They found seven in total; four relating to flu treatment, and three to prevention.
They say none offered a big enough study to determine whether anti-virals have any effect on the chances of children developing serious flu-related complications.
"We've got very little data to go on. These drugs have been used on tens of thousands, in fact millions of children worldwide, and we've found only four trials of treatments involving less than two thousand children," said the report's author, Dr Matthew Thompson, a senior clinical scientist at the Department of Primary Health Care, the University of Oxford.
"We didn't find any trials of children under one. And none of the trials was big enough to show if there's any effect on serious complications like pneumonia or being hospitalized," he said.
The report found that while anti-virals reduced the duration of flu in children, they had little or no impact on the likelihood of the child developing ear infections or any other condition that may require antibiotics.
A review of one study into the effect of anti-virals on asthmatic children, who are considered to be at higher risk of developing complications from the flu virus, found that the drugs did not reduce the risk of asthma attacks.
The report said that one in 20 children who take Tamiflu suffer nausea and vomiting, as indicated in warnings from the drug's manufacturer. "That obviously can be a particular problem in young children and infants where getting dehydrated is a complication of influenza," Thompson said.
The three studies that focused on the use of anti-virals to prevent influenza taking hold, showed that their potential to stop the spread of flu was "fairly small."
"We'd need to treat 13 children with the preventive course of one of these drugs to prevent one of them from getting flu," Thompson said.
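The "treat 13 to prevent one" figure is a number needed to treat (NNT): the reciprocal of the absolute risk reduction. A sketch with hypothetical infection rates chosen to yield an NNT near the reported value (the rates themselves are not from the article):

```python
import math

def number_needed_to_treat(control_rate, treated_rate):
    """NNT = 1 / absolute risk reduction, rounded up to a whole person."""
    risk_reduction = control_rate - treated_rate
    if risk_reduction <= 0:
        raise ValueError("treatment shows no risk reduction")
    return math.ceil(1.0 / risk_reduction)

# Hypothetical rates: 20% of untreated children catch flu vs. 12% of treated ones.
print(number_needed_to_treat(0.20, 0.12))  # -> 13, matching the reported figure
```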
The report's authors suggested governments were too quick to recommend anti-virals as the first defense against the spread of swine flu. In the United Kingdom, people who suspect they have the virus are urged to phone a government helpline. If enough symptoms match the operator's list, they're given an online voucher so a "flu friend" can collect a course of Tamiflu.
The British Department of Health said the report was right to suggest bed rest and over the counter remedies for people with mild cases of flu, but added that it was potentially dangerous to deter people with severe cases of flu from taking Tamiflu, including children.
In a statement, a spokesman said: "Whilst there is doubt about how swine flu affects children, we believe a safety-first approach of offering anti-virals to everyone remains a sensible and responsible way forward. However, we will keep this policy under review as we learn more about the virus and its effects."
The British Medical Association (BMA) also adopted a cautious stance. The chairman of the BMA's general practitioner's committee, Dr Laurence Buckman, said doctors always have to balance the risk of major complications from swine flu with the potential side-effects of anti-virals.
"While we know they are safe, we do know that vomiting and diarrhea can occur in some children and adults who take them," he said, adding "The more we learn about these drugs the more we will know how to treat patients with the most up-to-date clinical evidence."
http://www.cnn.com/2009/WORLD/europe/08/11/influenza.children.tamiflu.relenza/index.html
by Kathiann M. Kowalski
People love to drink bubbles. Why else would Americans drink an average of 56 gallons of soft drinks per person each year?
Getting Bubbles into the Bottle
Soda is a pressurized mixture of flavored syrup and water to which carbon dioxide gas has been added. In modern bottling plants, very cold liquid and carbon dioxide (CO2) are mixed together in big tanks called carbo-coolers, or carbonators. The colder the temperature, the more gas can dissolve in the liquid.
The bottling plant transfers the carbonated soda under pressure to bottling or canning machines. Machines fill containers and cap them immediately so that contents stay under pressure. Water sprays bring the sealed containers to room temperature so that condensation won't form when they're packed in boxes. Then the soda goes to stores and restaurants.
More Bottled Bubbles
Sparkling water is generally bottled or canned like soda. Even with naturally carbonated mineral water, bottlers usually filter the water and add carbon dioxide the way soda bottlers do. That way, potential contaminants don't give the water an “off” taste. Beer starts with a porridge of malted (soaked) barley, to which brewers add hops and special yeast. Soaking releases some of the sugar in the grain. “When the yeast gobbles up the sugar, it spits out alcohol and it spits out CO2,” explains Jonathan Satayathum at Cleveland's Great Lakes Brewing Company. Some breweries release the carbon dioxide and then carbonate their product as soda companies do. Others, like Great Lakes, maintain a certain pressure in the tank. Then they transfer the naturally carbonated beer under pressure to machines that fill kegs or bottles.
Why do beer and nonalcoholic root beer form foamy “heads” when poured? As their bubbles rise, they absorb chemicals that act as surfactants. These “surface-active” chemicals like to be on the surface, says Ira Leifer at the University of California, Santa Barbara. When they are, they weaken surface tension and delay bubble bursting.
Wineries that produce champagne and fine sparkling wine use a first step of fermentation to produce a low-alcohol still wine. They bottle that, add yeast and sugar, and let it ferment again. Afterward, wineries concentrate the lees (dead yeast) by placing bottles at an angle on racks and rotating them periodically. The bottlenecks are frozen in a chemical solution, the bottles go on a conveyor belt, and the tops are opened. Gas pressure shoots the lees out, machines top off fluid levels, and within seconds the bottles get corked and caged. The pressure inside the corked bottle is six atmospheres, or three times the pressure in a car's tires. No wonder the bottle goes “Pop!” when it's finally uncorked!
Be a Fizz Whiz
Look at an unopened soda bottle, and you won't see bubbles. That's because carbon dioxide is dissolved in the soda. Some is also in the pressurized space at the container's top. Gas pressure inside the container is higher than the air outside the bottle.
When you open the container, you hear the whoosh of escaping carbon dioxide. As Ranjan Patro at Memorial University of Newfoundland explains, “The difference in pressure between inside the soda bottle and its outside surroundings causes the gas to flow from the soda bottle.” Basically, the soda obeys a principle called Henry's Law: The amount of gas dissolved in a solvent is proportional to the partial pressure of that gas over the solvent. So, reduce pressure over the liquid by opening the bottle, and the amount of dissolved gas is reduced.
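Henry's Law can be illustrated with a rough calculation (a sketch of my own, not from the article; the Henry's-law constant for CO2 in water, about 0.034 mol/(L·atm) near 25 °C, is an approximate textbook value):

```python
# Henry's Law: dissolved concentration C = kH * p,
# where p is the partial pressure of the gas over the liquid.
KH_CO2 = 0.034  # mol/(L*atm) for CO2 in water near 25 C (approximate value)

def dissolved_co2(pressure_atm, kh=KH_CO2):
    """Moles of CO2 held per litre of water at the given CO2 partial pressure."""
    return kh * pressure_atm

sealed = dissolved_co2(3.0)     # a sealed bottle keeps CO2 at a few atmospheres
opened = dissolved_co2(1.0)     # opening drops the headspace to ~1 atm
flat = dissolved_co2(0.0004)    # CO2's tiny partial pressure in ordinary air
print(sealed, opened, flat)
```

The last value shows why every opened soda eventually goes flat: at equilibrium with ordinary air, almost no CO2 stays dissolved.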
Pour soda into a glass, and tiny bubbles form. “As bubbles rise, they grow bigger and bigger until they reach the surface,” observes Leifer. That's because additional CO2 flows into the bubble from the soda.
“It's really hard for a bubble to form in the middle of nowhere, like in the middle of water,” adds Leifer. But tiny defects or irregularities in the glass provide places for gas to collect into a bubble. “That's why the bubbles seem to form in the same place and all go up in the same line,” notes Leifer.
Put your nose close to the top of the soda. As soda bubbles reach the surface, they burst and release tiny aerosols, or droplets. Thus, your nose feels wet. Sip the soda and feel your tongue tingle. Carbon dioxide is converted to carbonic acid inside the mouth. As that breaks down, its byproducts bind to receptors on the tongue, and you taste fizz.
Eventually, all soda goes flat. Dissolved gas moves out of solution and bubbles up. Finally, the gas concentration outside the soda equals that of the surrounding air. “For most opened beverages, equilibrium is not a tasty situation,” observes Patro.
- fermentation:
- A chemical process by which the sugar in a liquid turns into alcohol and a gas. Yeast or certain bacteria can cause fermentation in fruit juices.
- hops:
- The dried, ripe flowers of a twining vine that give the characteristic bitter taste to beer.
- surface tension:
- A property of liquids arising from unbalanced forces at or near the surface of the liquid. This causes the surface to contract and have properties that resemble a stretched elastic membrane.
- What causes carbon dioxide to separate from flavored syrup and water in a can of soda? Why?
[anno: Opening the can of soda causes carbon dioxide gas to separate from the flavored syrup and water. Before a can is opened, the gas is under higher pressure than the surrounding air. When the can is opened, the pressure changes. The bubbles of carbon dioxide gas rise through the mixture and move out of the flavored syrup and water because there is no pressure holding them inside the mixture anymore. The bubbles of carbon dioxide gas weigh less than the surrounding mixture, so the gas bubbles rise through the mixture.]
- What is Henry's Law?
[anno: Henry's Law states that the amount of gas dissolved in a solvent is proportional to the partial pressure of that gas over the solvent. If the pressure over the liquid decreases, the amount of dissolved gas will also decrease.]
- You have probably seen champagne served in a special glass, called a champagne flute. A champagne flute is usually a tall glass with a narrow mouth. Why do you think these glasses are designed this way? Think about what you learned about surfaces and pressure from Henry's Law.
[anno: Answers may vary but could include that a champagne flute is designed so that a smaller surface area of the liquid is exposed. The glasses are designed this way to slow down the rate at which carbon dioxide bubbles leave the champagne.]
|
<urn:uuid:23ad120e-85b8-4ba7-bab5-ff96636f8a6a>
|
CC-MAIN-2016-26
|
http://www.eduplace.com/science/hmsc/4/e/cricket/cktcontent_4e133.shtml
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00193-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.929308
| 1,424
| 3.5625
| 4
|
Electron Affinities of the Main-Group Elements*
The electron affinity is a measure of the energy change when an electron is added to a neutral atom to form a negative ion. For example, when a neutral chlorine atom in the gaseous form picks up an electron to form a Cl- ion, it releases an energy of 349 kJ/mol or 3.6 eV/atom. It is said to have an electron affinity of -349 kJ/mol and this large number indicates that it forms a stable negative ion. Small numbers indicate that a less stable negative ion is formed. Groups VIA and VIIA in the periodic table have the largest electron affinities.
* Alkaline earth elements (Group IIA) and noble gases (Group VIIIA) do not form stable negative ions.
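The kJ/mol-to-eV/atom conversion quoted above can be checked directly (a small sketch of mine, using standard approximate values for Avogadro's number and the electron-volt):

```python
AVOGADRO = 6.022e23        # atoms per mole
EV_IN_JOULES = 1.602e-19   # joules per electron-volt

def kj_per_mol_to_ev_per_atom(kj_mol):
    """Convert a per-mole energy in kJ/mol to a per-atom energy in eV."""
    return kj_mol * 1000 / AVOGADRO / EV_IN_JOULES

# Chlorine's 349 kJ/mol works out to about 3.6 eV per atom, as stated.
print(round(kj_per_mol_to_ev_per_atom(349), 1))
```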
|
<urn:uuid:a311290d-b34d-486b-88c0-231e15ee75ff>
|
CC-MAIN-2016-26
|
http://hyperphysics.phy-astr.gsu.edu/hbase/Chemical/eleaff.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00035-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.809817
| 167
| 3.796875
| 4
|
Based around the concept of the world tree, which is cut down by Vainamoinen, the hero of the Kalevala, the Finnish world view and mythology belonged to an old, shamanic tradition, fairly removed from what became the Norse myth cycle, and far older. I am sure that at one point the peoples of Sweden also shared the same mythos, but it seems to have been usurped by the Germanic mythos. In the Finnish mythology one finds that the world was at first but an ocean, with a water spirit who floated around in the waters; she was impregnated by the All Father and became the water goddess who birthed Vainamoinen, and in her labour created Finland. Interestingly, in reality Finland itself was raised from the water as the ice cap retreated after the last Ice Age. Finland continues to rise today, returning to its natural height after so long under the hundreds of thousands of tons of weight inflicted by the ice. Much of Finland still lies low and dotted with water, thus the name "Land of Thousand Lakes."
Out of this newly formed land comes man, and the bearded one, Vainamoinen, helps them to live. And when the world tree grew too large and was stifling the life of man, it was Vainamoinen who cut it down, teaching the peoples of Finland how to create farmland out of the endless forests. From here on, much of the "Kalevala" consists of various stories, with little to enlighten the reader about the religious and mythological beliefs of the peoples of Finland before Christianity. Since it was only written down in the middle of the 19th century, it can't be fully relied on. However, many scholars think that the basic tendency of the Finnish peoples was towards a shamanic system, much like other Northern peoples. Some think that the old Finnish system was very close to the Sami culture, which is/was a nature-spirit-worshiping religion. The spirit of the old culture was perverted to a point by the intrusion of Swedish settlers, but because most of Finland was left alone, many of the practices survived. Indeed, in many areas they survived until the time of Lonnrot.
Obviously, Finnish mythology has much more in common with that of the Russians and the other Baltic countries, particularly Estonia. Some of the gods of thunder, etc., carry over between the two, but I am unsure how much similarity existed in Finland in that sense, as I have only read of it in one book.
Much more study must be done on the mythology of Finland, or more of it translated into English, as I understand the Finnish Folk Archives are among the best in Europe. Most of the body of Finnish folk poetry, for example, exists only in Finnish; no one has attempted a wide-scale translation of the material into English, making it difficult for non-Finnish readers or speakers to make a study of the mythology.
|
<urn:uuid:e82bf92e-fb02-4950-838b-2ba9e04401ef>
|
CC-MAIN-2016-26
|
http://everything2.com/user/Wolfang/writeups/Finnish+Mythology
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00189-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.978111
| 607
| 2.75
| 3
|
WASHINGTON -- America's stealth bombers may be in danger of having their cover blown by a new type of radar that uses cell phone technology, researchers say.
The Air Force says it's a limited problem and America's unique stealth fleet is in no danger. Yet U.S. intelligence reports label the radar a serious threat, and several scientists agree.
''We're talking about radar technology that can pinpoint almost any disturbance in the atmosphere,'' said Hugh Brownstone, a physicist at the Intergon Research Center in New York who has worked for the cell phone giant Nokia.
''You might not be able to distinguish between a stealth plane and a normal one, but you might not need to. The point is, you can see the stealth plane as a blip.''
The potential risk comes from the network of towers cell phone companies use to send and receive signals. The new technology, called passive radar, watches signals from common cell phone transmissions. When a plane passes through, it leaves a hole in the pattern, giving away its location.
Traditional radar -- the kind stealthy B-2 and F-117A bombers can fool with their angles and radar-absorbing paint -- sends out signals and waits for them to bounce off large objects in the sky and return.
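The "hole in the pattern" idea can be sketched as a toy example (entirely my own illustration, not an actual radar algorithm): compare the power each receiver measures against its quiet-air baseline and flag any sharp drop.

```python
def find_shadowed_receivers(baseline, measured, drop_fraction=0.5):
    """Return indices of receivers whose measured power fell well below baseline,
    suggesting something is blocking or scattering the signal path."""
    return [i for i, (b, m) in enumerate(zip(baseline, measured))
            if m < drop_fraction * b]

baseline = [10.0, 9.8, 10.2, 10.1, 9.9]   # quiet-air signal strengths
measured = [10.0, 9.7, 3.1, 10.0, 9.8]    # receiver 2 sees a sharp drop
print(find_shadowed_receivers(baseline, measured))  # -> [2]
```

A real system faces the hard part the article describes next: turning millions of such comparisons into trackable, targetable blips.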
Some aviation experts suspect the Serbs used a rough version of passive radar -- plugging computers into their existing air defense system -- to locate an F-117A Nighthawk stealth bomber, shot down in 1999.
There are more than 100,000 cell phone towers and other sites within the United States. Industry analysts estimate there are 210,000 sites in Europe. The rest of the world is unevenly covered, but even the smallest and poorest nations often have several cell phone towers.
The passive radar system has drawbacks. It can't effectively pinpoint whether a plane is indeed a stealth plane or some other aircraft, scientists say. It's also much more difficult to make work.
''The success rate of these systems is just below the success rate of traditional radar,'' said Air Force Capt. Eric Knapp.
A major hurdle is the complex math necessary to translate cell phone signals into easy-to-understand blips that move across a computer screen. Without the computer programming to make sense of the cell phone signals, it would be impossible to fire a missile at a plane.
Still, the passive radar technology is basically sound, said Nick Cook, an aerospace consultant for Jane's Defence Weekly.
''It needs further work, but the theory is there,'' he said. ''Still it would be some time before I could imagine something like this compromising stealth technology completely.''
John Hansman, professor of aeronautics and astronautics at the Massachusetts Institute of Technology, said passive radar is still in its ''infancy, but is something that will lead to new stealth research.''
''This is another trick that will force stealth researchers to push forward,'' Hansman said.
The British defense contractor Roke Manor Research is in the forefront of passive-radar technology.
Peter Lloyd, head of research there, said, ''We would be utilizing technology that we already have available. The mobile telephone base stations would not have to be altered at all. ''
His company's Web site claims existing stealth technology already has been rendered obsolete.
Brownstone believes China, Japan and Russia already have passive radar in various stages of development. He is concerned that those countries might sell the technology to smaller countries that are hostile to the United States.
Keeping stealth planes safe from enemy radar has always been a back-and-forth contest, pitting American ingenuity against developing concepts in radar.
The F-117A, developed in great secrecy in the 1970s, was not disclosed until 1988. It saw its first combat in the 1989 invasion of Panama and was a star of the 1991 Gulf War.
The B-2 bomber, which saw its first combat in NATO airstrikes against Yugoslavia, uses stealth technologies that are more advanced than the F-117A's. An even newer version of stealth is used in the F-22 fighter now in development. No other country has stealth aircraft in active use, although Russia and others have researched the idea.
Six of the $2 billion B-2s, in their first combat use, flew about 50 secret missions out of a total 30,000 NATO bombing runs over Kosovo in 1999. They dropped about one of every 10 bombs in the campaign.
On The Net:
U.S. Air Force: http://www.af.mil/
Jane's Defence: http://jdw.janes.com/
Roke Manor Research: http://www.roke.co.uk/news/stealth--aircraft.htm
|
<urn:uuid:1e8d0636-d1be-48c0-bfef-ef4c7e6e9699>
|
CC-MAIN-2016-26
|
http://staugustine.com/stories/062101/nat_0621010013.shtml
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00170-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.944935
| 975
| 2.609375
| 3
|
Status Quo Side: Costa Rica
Non-Status Quo Side: Nicaragua
Region: Western Hemisphere
Conflict Type: Interstate
Issues in Dispute: Governance
An attempt to prevent the installation of the President-elect of Costa Rica (CR) by a faction supported by Nicaraguan planes and officers was put down by socialist Jose Figueres, who became President. The faction's leader, T. Picado Michalski, was close to Nicaraguan temporary President Luis Somoza Debayle despite alleged communist support. During Figueres' brief tenure a rebel incursion from Nicaragua brought OAS involvement.
With Figueres again President, relations with Nicaraguan President Anastasio Somoza Garcia were strained, each country charging the other with assassination plots. On January 9 1955 Figueres accused Somoza in the OAS of permitting "adventurers" to train openly in Nicaragua.
A small airborne force of CR rebels commanded by Picado's son landed and seized the northern border town of Villa Quesada in Costa Rica. Despite denials, President Figueres charged Nicaraguan aggression and asked for OAS military aid. The town was recaptured by CR forces on January 12 after "strafing" by enemy aircraft. After an exchange of epithets, Somoza challenged Figueres to a duel on the border. Following an OAS recommendation of assistance, the US on January 16 sold fighter planes to Costa Rica.
The OAS stationed observers on the borders and created a frontier buffer zone. On January 25, 300 CR rebels crossed back into Nicaragua and were interned. Both countries asked the OAS to establish a peace commission to settle further disputes after then-US Vice-President Richard Nixon engaged in conciliation efforts.
Agreement was reached on a 5-nation conciliation commission with free access to both countries to check the passage of armed terrorists.
In the early 1980s Costa Rica, burdened with hundreds of thousands of exiles from other Central American countries, charged anti-Sandinista Nicaraguan refugees with plotting to overthrow its government. Border incidents increased tensions until CR President Oscar Arias negotiated the Contadora regional peace plan, for which he was awarded the Nobel Peace Prize. In 1987 the Nicaraguan Sandinista government [see NIC] negotiated with the "contra" rebels for a cease-fire; it was subsequently voted out of office in UN-monitored elections.
Copyright © 1999 Lincoln P. Bloomfield and Allen Moulton
|
<urn:uuid:5a7398c9-37fb-4182-9c8d-c4d8681f9d58>
|
CC-MAIN-2016-26
|
http://web.mit.edu/cascon/cases/case_ncr.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00093-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.943721
| 514
| 2.984375
| 3
|
The authors and abridgers of the Book of Mormon saw our day and were inspired to include in the book events from their history that would best serve us. Mormon told us, for instance, that he wrote of things that he saw and heard, “according to the manifestations of the Spirit which had testified of things to come.” (Morm. 3:16.) Moroni wrote that the Lord had shown him our day and that he was writing to us “as if [we were] present, and yet [we were] not.” (Morm. 8:34–35.)
Among the conditions that Mormon and Moroni knew would exist when the world received the Book of Mormon were wars and rumors of wars. This may be one reason that, in the middle of his account of the missionary efforts of Alma and the four sons of Mosiah during the eighteenth year of the reign of the judges (see Alma 35:13), Mormon changed emphasis:
“Now we shall say no more concerning their preaching, except that they preached the word, and the truth, according to the spirit of prophecy and revelation. …
“Now I return to an account of the wars between the Nephites and the Lamanites, in the eighteenth year of the reign of the judges.” (Alma 43:2–3.)
This passage of scripture raises some questions. Although he has given preference to the major missionary work during the period, Mormon goes back to reporting wars. Why? Of course, one reason is that wars were part of the history of that period, and Mormon was reporting the major events of that time. But why does he interrupt his account of missionary work to focus on war? And why does he spend so much time on it?
These questions are underscored by the balance among the historical sections in the book. The history of the Nephite nation is recorded in 1 Nephi 1 to Mormon 6. First Nephi 1 to Mosiah 29 covers 509 years, or 50 percent of the history of the Nephites; it takes 207 pages, or 43 percent of the Book of Mormon. Alma 1 to 3 Nephi 10 covers 125 years, or 13 percent of the history; it takes 220 pages, or 46 percent of the book. By contrast, 3 Nephi 11 to Mormon 6 covers the final 351 years, or 37 percent, of the Nephites’ history, yet it takes only 51 pages, or 11 percent of the book.
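The page arithmetic above checks out (the page counts themselves come from the article; only this verification snippet is mine):

```python
def percent(part, whole):
    """Whole-number percentage that `part` makes of `whole`."""
    return round(100 * part / whole)

# Pages devoted to the three historical spans of the Book of Mormon
pages = [207, 220, 51]
total = sum(pages)  # 478 pages covering Nephite history
print([percent(p, total) for p in pages])  # -> [43, 46, 11], as stated
```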
In other words, Mormon used an inordinate amount of space—46 percent of the pages—to cover a diminutive amount of history—13 percent—from Alma 1 to 3 Nephi 10. Why? What is so significant about that period of Nephite history, and what message is Mormon presenting to us, his readers?
One answer may be that the historical pattern found in Alma 1 to 3 Nephi 10 applies to us today. Certain themes and conditions of the Nephite society highlighted in those chapters parallel conditions in our own day.
In Alma 1 to 42, Mormon emphasizes the social problems of priestcraft, materialism, socio-economic inequality, and abuse of freedom. In Alma 43 to 63, he focuses on the wars and civil disruption that led to conspiracy, secret combinations, materialism, sensualism, and corruption in government, as found in the book of Helaman. A record of anarchy and the collapse of government is found in 3 Nephi 1 to 10, preceding the coming of the Lord. This pattern, which can also be seen in many prophecies of the last days, seems to foreshadow a similar pattern of events in our time. (See Rev. 17–19; D&C 45:22–44; D&C 88:87–95; JS—M 1:23–37.)
Another answer may be that, by focusing on war, Mormon gave our generation a chance to see how an ancient people met the challenges and disruption of war.
Motivations for War
The Book of Mormon indicates that among its cultures some wars were fought for better causes than others. Mormon addressed the issue of when a person should go to war and whether anything worthwhile can come from war.
In Alma 43, we see these and related issues confronted head-on. Under the leadership of Zerahemnah, the Zoramites and the Lamanites joined forces with a group of apostate Nephites called Amalekites. The Amalekites hated the Nephites, and when the Lamanites went to war, Zerahemnah made Amalekites his captains in order to “stir up the Lamanites to anger” and “gain power over the Nephites by bringing them into bondage.” (Alma 43:8.)
On the other hand, “the design of the Nephites was to support their lands, and their houses, and their wives, and their children, … that they might preserve their rights and their privileges, yea, and also their liberty, that they might worship God according to their desires.” (Alma 43:9.)
The motivations for going to war determined the different approaches the two sides took. While the Lamanites gathered themselves together in anger, the Nephites gathered themselves together with deliberate resolve, preparing themselves under Moroni’s leadership with breastplates, shields for their arms and heads, and thick clothing. (See Alma 43:18–19.)
“Now the army of Zerahemnah was not prepared with any such thing; they had only their swords and their cimeters, their bows and their arrows, their stones and their slings; and they were naked, save it were a skin which was girded about their loins.” (Alma 43:20.)
After obtaining as much information as he could from his spies, as well as from the Lord through the prophet Alma, Moroni prepared the Nephites for battle. As the Lamanites crossed the river Sidon, the fighting began. The Lamanites’ motivation and reasons for war were powerful, for “the Lamanites … were inspired by the Zoramites and the Amalekites, who were their chief captains and leaders, and by Zerahemnah, who was their chief captain … ; yea, they did fight like dragons, and many of the Nephites were slain by their hands, yea, for they did smite in two many of their head-plates, and they did pierce many of their breastplates, and they did smite off many of their arms; and thus the Lamanites did smite in their fierce anger.” (Alma 43:43–44.)
The Nephites, however, “were inspired by a greater cause, for they were not fighting for monarchy nor power but they were fighting for their homes and their liberties, their wives and their children, and their all, yea, for their rites of worship and their church.” (Alma 43:45.)
The Lord had also commanded them, “Ye shall defend your families even unto bloodshed. Therefore for this cause were the Nephites contending with the Lamanites, to defend themselves, and their families, and their lands, their country, and their rights, and their religion.” (Alma 43:47.)
These verses suggest that a war in defense against an aggressor is acceptable to the Lord. The Lord does not justify war waged in order to gain power or to gain control. Neither is it to be waged in anger. President David O. McKay pointed out that “there are conditions when entrance into war is justifiable, and when a Christian nation may, without violation of principles, take up arms against an opposing force.
“Such a condition, however, is not a real or fancied insult given by one nation to another. When this occurs proper reparation may be made by mutual understanding, apology, or by arbitration.
“Neither is there justifiable cause found in a desire or even a need for territorial expansion. The taking of territory implies the subjugation of the weak by the strong—the application of the jungle law.
“Nor is war justified in an attempt to enforce a new order of government, or even to impel others to a particular form of worship, however better the government or eternally true the principles of the enforced religion may be.” (In Conference Report, Apr. 1942, p. 72.)
The Law Given to the Ancients
Mormon states that in defending themselves, the Nephites felt they were following a law given them by God. That law included patience. Mormon explained that the Lord had instructed them, “Inasmuch as ye are not guilty of the first offense, neither the second, ye shall not suffer yourselves to be slain by the hands of your enemies.” (Alma 43:46.)
The Lord gave Joseph Smith similar counsel, urging even greater patience: “If men will smite you, or your families, once, and ye bear it patiently and revile not against them, neither seek revenge, ye shall be rewarded;
“But if ye bear it not patiently, it shall be accounted unto you as being meted out as a just measure unto you.” (D&C 98:23–24.)
He then instructed his disciples that, if they patiently bore their enemies’ second and third attacks, not reviling their foes, their reward would be greatly increased. These three testimonies would stand against the attackers. Then, if the wrongdoers escaped God’s vengeance and judgment, the Saints should warn them. After all this, should the Saints suffer another attack, their enemies would be in their hands. The Saints could spare their foes or reward them according to their evil works. (See D&C 98:25–31.)
The Lord compares this to the law given to Nephi, Abraham, and other ancients. (See D&C 98:32–36.) The disciples of old could go to battle only when the Lord commanded them. They were to lift a standard of peace to an enemy three times before bringing their case to the Lord, after which he would justify them in going to war. This law was not a law of first attack. It demanded that a righteous people do all they could to proclaim and preserve peace.
The Book of Mormon relates one time when a prophet-general refused to lead the Nephites into battle—a time when the Nephites did not follow the Lord’s law. (See Morm. 3–4.) During a ten-year period of relative peace, the Nephites prepared for war. When the Lamanites attacked, the greatly outnumbered Nephites made a stand at the city of Desolation. This time the Nephites won, “insomuch that [the Lamanites] did return to their own lands again.” (Morm. 3:7.) The following year, the Lamanites again came to battle, and the Nephites again defeated them.
At that point, the Nephites “began to boast in their own strength, and began to swear before the heavens that they would avenge themselves of the blood of their brethren who had been slain by their enemies.
“And they did swear by the heavens, and also by the throne of God, that they would go up to battle against their enemies, and would cut them off from the face of the land.” (Morm. 3:9–10.)
When revenge and destruction became the Nephites’ motivation for war against the Lamanites, Mormon “did utterly refuse … to be a commander and a leader of [the] people.” (Morm. 3:11.)
It wasn’t only their wickedness that kept Mormon from leading the Nephites, for he wrote, “Notwithstanding their wickedness I had led them many times to battle, and had loved them … with all my heart.” (Morm. 3:12.) It was because “they had sworn by all that had been forbidden them by our Lord and Savior Jesus Christ, that they would go up unto their enemies to battle, and avenge themselves of the blood of their brethren.” (Morm. 3:14.)
Earlier, Mormon had exhorted his people to “stand boldly before the Lamanites and fight for their wives, and their children, and their houses, and their homes.” (Morm. 2:23.) But now the Nephites were not going to war to defend anything. They had not issued a proclamation of peace, nor had they tried to gain peace by other means. Instead, they were going to war out of vengeance.
From that point, the Nephite nation began to lose its battles and was eventually destroyed. The Nephites entered into a vicious cycle of vengeance begetting vengeance and wickedness begetting wickedness. “Because the armies of the Nephites went up unto the Lamanites … they began to be smitten; for were it not for that, the Lamanites could have had no power over them.
“But, behold, the judgments of God will overtake the wicked; and it is by the wicked that the wicked are punished; for it is the wicked that stir up the hearts of the children of men unto bloodshed.” (Morm. 4:4–5; italics added.)
Divine Guidance Needed
In Alma 46, Mormon tells of internal strife among the Nephites, touching upon another problem. That is, what do you do to preserve a free government that has been set up by inspiration and divine guidance?
After Moroni and his armies defeated the Zerahemnah-led Lamanites, internal strife developed within the Nephite nation. Amalickiah, who desired to be king, tried to persuade his countrymen to accept him as their ruler. His flattery won over many in the Church, who broke away from their faith. “Thus were the affairs of the people of Nephi exceedingly precarious and dangerous, notwithstanding their great victory which they had had over the Lamanites.” Mormon noted how quickly people could leave the counsels of God and be led by Satan. (See Alma 46:3–8.)
When Moroni heard of these dissensions, he was angry with Amalickiah. Tearing his coat, he took a piece, wrote on it, “In memory of our God, our religion, and freedom, and our peace, our wives, and our children,” and fastened it to a pole. Those words became his rallying cry. He fastened on his armor and prayed mightily to God that his fellowmen might enjoy the blessings of liberty as long as a band of Christians remained in the land. (See Alma 46:11–13.) He clearly felt that the cause of freedom was worth fighting for.
Traveling through the land, he waved his rent coat, which he called the title of liberty, and challenged the people to covenant to maintain their rights and religion, that God might bless them. The people responded, running to him with their armor on, rending their garments in token that they would not forsake God. (See Alma 46:19–21.)
Moroni reminded his people that, as a remnant of Joseph, they ought to preserve their liberty and that, if they did not stand fast in the faith of Christ, they would be part of the remnant that would perish. With that exhortation, the faithful followed Moroni against Amalickiah and defeated him. (See Alma 46:24–33.)
In relating this narrative, Mormon demonstrated that liberty must be protected from within as well as without, that sometimes a righteous people must oppose the enemies of freedom when those enemies band together to overthrow the government. This episode was paralleled later when king-men internally threatened to destroy the Nephite nation, which was also under attack from the Lamanites. (See Alma 51.)
In the examples of war discussed so far, the faithfulness of a people and its leaders proved crucial to knowing when to go to war and what to do in that war. Righteousness is required for a people to know by revelation the answers to questions involving any specific war.
The following verses underscore this concept:
“The Nephites were taught to defend themselves against their enemies, even to the shedding of blood if it were necessary; yea, and they were also taught never to give an offense, yea, and never to raise the sword except it were against an enemy, except it were to preserve their lives.
“And this was their faith, that … God would prosper them in the land, … if they were faithful in keeping the commandments of God … ; yea, warn them to flee, or to prepare for war, according to their danger.
“And also, that God would make it known unto them whither they should go to defend themselves against their enemies, and by so doing, the Lord would deliver them.” (Alma 48:14–16.)
Obedience to Government
Many persons in modern times question their role in supporting a government that prepares for war. What should an individual’s role be in such a case?
Mormon gave us a key by which we can know what to do when we are faced with the prospect of war. When Captain Moroni was preparing the Nephites for battle, he sent messengers to the prophet Alma to ask what the Nephites should do. Alma informed them that the armies of the Lamanites were on the move, planning to attack the weaker part of the nation. (See Alma 43:24.)
The righteous Nephites kept their eyes on the prophets and followed their counsel. Because of faith in the prophets of God, the Nephites were blessed. In the latter days, this principle still applies—follow the counsel of the Lord’s prophet. Latter-day Saints have been given the standard through the Prophet Joseph Smith, who wrote, “We believe in being subject to kings, presidents, rulers, and magistrates, in obeying, honoring, and sustaining the law.” (A of F 1:12.)
Mormon and Moroni themselves are examples of righteous men who supported their country when it was attacked by a wicked nation, despite their own nation’s wickedness. In 1831, the Lord gave Joseph Smith clear instruction that came at a time when Church members were oppressed by others. Notwithstanding the persecution of the members of the Church, the Lord taught a significant principle: “Let no man think he is ruler; but let God rule him that judgeth, according to the counsel of his own will. …
“Let no man break the laws of the land, for he that keepeth the laws of God hath no need to break the laws of the land.” (D&C 58:20–21.)
Latter-day Saints were involved on both sides in World War I and World War II. Yet they were encouraged to obey the laws of the countries in which they lived. This same principle is what the Saints in the New Testament received from the Apostle Peter in his counsel regarding their obligations of citizenship:
“Submit yourselves to every ordinance of man for the Lord’s sake: whether it be to the king, as supreme;
“Or unto governors, as unto them that are sent by him for the punishment of evildoers, and for the praise of them that do well.
“For so is the will of God, that with well doing ye may put to silence the ignorance of foolish men.” (1 Pet. 2:13–15.)
The Individual during War
War poses some crucial problems for the individual. What happens when a peaceful, loving, and God-fearing person is trained to take the lives of others? What happens when that person enters an environment in which neither life is respected nor God revered? Can an individual in those circumstances survive spiritually?
Answers to those questions also lie in the Book of Mormon. Several examples show that people can live righteously under the most adverse conditions. Zeniff and Gideon, for instance, were two fine men who fought in mixed armies—good and bad—and survived civil war. (See Mosiah 9:1–3; Mosiah 19:1–8, 18–24; Mosiah 20:17–22.) And, as mentioned previously, Mormon and his son Moroni led a people that had abandoned God. Yet each remained committed and loyal to God and lived a righteous life.
One of the best examples is Captain Moroni. As we have learned, he did not desire to shed blood. Instead, he loved peace and sought to keep the commandments. Yet he had to spend much of his time in combat. As a military leader, he had to take good men to battle and see many of them die. Under those circumstances, what type of man was he?
Mormon’s description of him is enlightening: “If all men had been, and were, and ever would be, like unto Moroni, behold, the very powers of hell would have been shaken forever; yea, the devil would never have power over the hearts of the children of men.” (Alma 48:17.)
Moroni remained righteous, strong, and powerful, even though he lived in an environment of suffering, pain, hatred, and death. Can a person be righteous in a military environment? The answer is yes. Moroni was, and we can be, too.
Of course, Moroni was a rare leader, but the Book of Mormon also shows us other Nephites, some quite young, who were righteous despite their situations. Helaman and his stripling warriors are an excellent example.
We don’t know how old the warriors were, but Helaman says “they were all of them very young.” (Alma 56:46.) They had never fought before, but they were prepared spiritually. They prized liberty and were deeply faithful, trusting God. Mormon described them as being “exceedingly valiant” and “true at all times in whatsoever thing they were entrusted.” (Alma 53:20.) These qualities allowed them to succeed spiritually under very difficult circumstances.
Helaman writes of their success: “There had not one soul of them fallen to the earth; yea, and they had fought as if with the strength of God; yea, never were men known to have fought with such miraculous strength; and with such mighty power did they fall upon the Lamanites, that they did frighten them; and for this cause did the Lamanites deliver themselves up as prisoners of war.” (Alma 56:56.)
War in the Latter Days
Elder Marion G. Romney made this observation: “Latter-day Saints know that this earth will never again, during its telestial existence, be free from civil disturbance and war.” (Improvement Era, June 1967, p. 77.) That being the case, the Book of Mormon can help us to face the problems arising from such a situation.
In the days leading up to the coming of the resurrected Lord to the Americas and also in the period after the two hundred years of peace, the greatest destruction to the Nephites was not caused by outward Lamanite attacks, but by internal problems and internal wickedness. There came a time when they were so unrighteous that God could no longer stand by them.
Regarding our time, General Omar O. Bradley once stated: “We have grasped the mystery of the atom and rejected the Sermon on the Mount. … Ours is a world of nuclear giants, and ethical infants. We know more about war than we know about peace—more about killing than we know about living.” (As quoted in Louis Fischer, The Life of Mahatma Gandhi, New York: Harper & Brothers, 1950, p. 349.)
As Latter-day Saints, our duty is to proclaim peace. The First Presidency, under President Spencer W. Kimball’s direction, stated: “We are dismayed by the growing tensions among the nations, and the unrestricted building of arsenals of war, including huge and threatening nuclear weaponry. Nuclear war, when unleashed on a scale for which the nations are preparing, spares no living thing within the perimeter of its initial destructive force, and sears and maims and kills wherever its pervasive cloud reaches.
“While recognizing the need for strength to repel any aggressor, we are enjoined by the word of God to ’renounce war and proclaim peace.’ We call upon the heads of nations to sit down and reason together in good faith to resolve their differences. If men of good will can bring themselves to do so, they may save the world from a holocaust, the depth and breadth of which can scarcely be imagined. We are confident that when there is enough of a desire for peace and a will to bring it about, it is not beyond the possibility of attainment.” (Church News, Dec. 20, 1980, p. 3.)
The duty of all Latter-day Saints is to seek peace and to live righteously so that their peaceful influence can be felt. As we do so, it may be that, as often happened in the Book of Mormon, a small minority of disciples, through faith, righteous example, and effort, can be a significant influence on a larger body of people among whom they live, wherever that may be.
Scientists in Egypt say they may have discovered the mummy of Queen Nefertiti, one of the most famous figures of ancient Egypt.
A group of scientists believe that she is one of three mummies discovered in a secret chamber of a tomb known as KV35 in Egypt's Valley of the Kings in Luxor.
Nefertiti: One of the ancient world's most beautiful women
The tomb was originally located and catalogued in 1898, but the mummies were sealed up and apparently forgotten, until scientists drilled through to the room.
"There is a very, very strong possibility that... this in fact is the great female Pharaoh Nefertiti herself," said British mummification expert Dr Joann Fletcher, who led the expedition, which was sponsored by the Discovery Channel.
The whereabouts of the remains of Nefertiti, perhaps the most powerful woman in ancient Egypt, have for many years been one of archaeology's most enduring mysteries.
However, critics say that without DNA evidence to verify the claims, it is unlikely to be the remains of the queen.
Queen Nefertiti, along with her husband the pharaoh Akhenaten, ruled from 1353-1336 BC during the so-called 18th dynasty of ancient Egyptian rulers.
However, virtually all traces of the queen and her "heretic" husband were erased, after his unsuccessful attempt to overthrow the pantheon of Egyptian gods and replace worship of them with the sun god Aton, in one of the earliest known practices of monotheism.
Dr Fletcher said she became interested in the mummy after identifying a wig found near the three catalogued mummies as a Nubian-style wig favoured by royal women of the 18th dynasty.
Further examination of the mummy in the side room revealed that the remains of the younger woman had a double-pierced ear lobe, a shaved head, and the clear impression of the tight-fitting brow-band worn by royalty.
The mummy - which had been defaced and mutilated - also had an arm removed, which was found in its wrappings bent at the elbow, a possible sign that it had originally held a royal sceptre, Dr Fletcher said.
The other two mummies, a teenage boy and an older woman, have not yet been identified.
However, other scientists have expressed doubts that the remains could be that of the famous queen.
"Physical evidence known and published prior to this expedition indicates the unlikelihood of it being the mummy of Nefertiti," Egyptologist Susan James said.
"Without any comparative DNA studies, statements of certainty are merely wishful thinking."
The Liver and its Role in Detoxification of the Body
Once food is broken down in the stomach and small intestine, it is absorbed into the bloodstream, which travels first to the liver, where substances may be chemically changed. One of the main functions of the liver is to help the body modify toxic substances, so that they may be removed easily from the body through the urine via the kidneys or through the bowel via the feces. A failure of the liver to carry out this function properly will result in an accumulation of toxic substances that may be stored in the nervous system and in fatty tissues. This toxic accumulation may contribute to a wide variety of diseases and complaints.
For example, impaired liver function may contribute to Alzheimer’s or Parkinson’s disease, autoimmune diseases, chronic fatigue syndrome, food allergies, chemical sensitivities, headaches, hepatitis, premenstrual syndromes, the development and outcome of cancer, and many other conditions. Basically, when the liver detoxification mechanisms are not functioning properly, the body is poisoned with a buildup of toxins. These toxins may originate outside the body (pesticides, alcohol, drugs, paint fumes, exhaust fumes, and many others) or inside the body, from the gut or from metabolic products.
So, in evaluating any patient, one of the first steps we take is to evaluate the functioning of the stomach, intestines, and other aspects of the gastrointestinal system, and then treat any abnormality. A second step is to evaluate liver functioning, because a problem with this organ may contribute to so many disorders. It is important to realize that when a physician orders the blood tests called liver function tests or a liver profile, which include the measurement of SGOT, SGPT, bilirubin, and alkaline phosphatase, he is not really measuring how well the liver carries out its detoxification function. Rather, these tests generally measure damage to liver cells, which results in an elevation of one or more of these enzymes. All of these tests may be quite normal while the liver still fails to carry out detoxification properly. Measuring how well the liver is functioning requires a different kind of test.
How the Liver Carries Out its Detoxifying Functions
The liver helps in the removal of toxic and metabolic waste products from the body by converting them to a form which is soluble in water, so that they are easily eliminated in the urine formed by the kidneys. Other substances transformed by the liver are dissolved in the bile formed in the liver and eliminated in the feces after the bile passes into the intestines through the bile duct.
This detoxification process occurs in two phases, termed Phase I and Phase II. Phase I involves a system of enzymes known as the cytochrome P-450 mixed-function oxidase system. These enzymes react with toxins, drugs, alcohol, paint fumes, and many other substances to form compounds that Phase II reactions can then transform into water-soluble substances. The substances mentioned above may up-regulate the cytochrome P-450 system by inducing enzyme changes. Some of the products formed by Phase I reactions are actually more toxic than the original substances and can be harmful, even cancer-producing, if Phase II reactions do not take place properly. Also, during Phase I reactions, which often involve oxidation, free radicals may be formed and cause damage unless sufficient amounts of antioxidants, such as vitamins A, C, and E and glutathione, are present to neutralize them. With underlying liver disease, a shortage of the nutrients necessary for Phase I, or damage from drugs, alcohol, birth control pills, amphetamines, or Tagamet, Phase I is slowed down; this is called a slow-detoxifier situation.
When Spain ceded Florida to the United States after the Adams-Onis Treaty of 1819, the United States agreed to relinquish its claim to Texas. Unfortunately for Spain, their vast empire was about to crumble throughout the New World. It started with Texas.
Spain’s influence in Texas was minimal at best. After Mexico declared its independence from Spain in 1821, Texas was a forgotten land. The new nation of Mexico certainly lacked the authority or finances to manage the vast area. However, some opportunistic Americans saw potential for profit in Texas. Stephen F. Austin, the son of a Missouri man who had negotiated a large land-grant with the Mexican government in the hopes of building a local economy, set about colonizing Texas. By 1830, Austin had attracted 25,000 settlers and 2,000 slaves to Texas. Their plan was to grow cotton.
As Austin's colony grew, the new Mexican government attempted to exert more control over the region, claiming that the terms of the original land-grant had been violated (settlers refused to convert to Roman Catholicism, the national religion of Mexico). Furthermore, the Mexican government refused to allow any more slaves to be brought into Texas and placed taxes on goods imported from America. As expected, the colonists became disgruntled. The situation worsened when the Mexican government jailed Stephen F. Austin for urging Texas to self-govern.
In 1836, General Antonio Lopez de Santa Anna and 6,000 troops marched to Texas to subdue the Texans. On February 23, Santa Anna besieged the mission known as The Alamo in San Antonio. Santa Anna’s demand for surrender was answered with a defiant cannon blast authorized by Col. William Barret Travis. The siege lasted for two weeks. On March 6, Santa Anna and his army stormed the mission and killed every Texan who resisted. Just four days earlier, on March 2, the Texans declared independence, legalized slavery, and formed a provisional government. They named Sam Houston commander of their army. Because the stand at The Alamo lasted two weeks, Sam Houston had time to prepare his army and plans of attack. On April 21, Houston’s army of 800 Texans routed the Mexican army of 1,600 at San Jacinto, Texas. In the battle, General Santa Anna was captured, and Texas became independent. Nine years later, after much debate and deliberation, Texas became the 28th state. As a result, Mexico broke all diplomatic ties with the United States. The Mexican-American War would soon follow.
Diocese situated in New South Wales, Australia, in the ecclesiastical Province of Sydney, comprises the territory immediately west of the Dividing Range; it extends north to the Barwon River, is bounded on the west by the Macquarie River as far up as Warren and thence by a line to the Lachlan River twenty miles below Eauabolong.
Bathurst (population in 1901, 9,223) was founded in 1824. Owing to the hostility of the aboriginals and other causes, population filtered slowly into the rich Bathurst plains till the first paying goldfield was discovered in the district, in 1851. The first church in Bathurst, says Cardinal Moran, "was nothing better than a bark hut". It was superseded in 1861 by a fine new edifice (now the cathedral), which was erected at a cost of £12,000 by Dean Grant, pastor of Bathurst for nearly twenty years till his death in 1864. In 1865 Bathurst, then part of the Diocese of Sydney, was made the cathedral centre of a new diocese, which extended from the River Murray to Queensland, and from the Blue Mountains to the border of South Australia. That vast and sparsely populated territory was divided at the time into five missions, ministered to by six priests, with seven small churches and six state-aided Catholic schools, attended by 492 pupils. Its first bishop was the Right Rev. Matthew Quinn, who had taken an active part in organizing the Irish Brigade that fought for the defence of the Papal States in 1860. He was consecrated in Dublin, 14 November, 1865, and reached Bathurst 1 November, 1866, accompanied by five priests and seven pioneer Sisters of Mercy. Years of toilsome organization followed — laborious visitations; opening new missions and supplying them with clergy; church, school, and convent extension; the introduction of the (Australian) Sisters of St. Joseph and the Patrician Brothers; the founding of a Catholic newspaper, the "Record"; the erection of St. Stanislaus' College, in 1873, at a cost of £15,000, and of St. Charles' Ecclesiastical Seminary eight years later. Dr. Quinn was a man of great energy, deep piety, cultivated intellect, and, says Cardinal Moran, was one of the "foremost champions of religious education in Australia".
At his death 16 January, 1885, there were in the diocese 28 priests, 56 Catholic schools, 21 convents, 192 nuns, and 5 religious brothers. Dr. Quinn was succeeded by the Right Rev. Joseph Patrick Byrne (consecrated 9 August, 1885). In 1887 the new Diocese of Wilcannia was formed out of the Bathurst Diocese. At the same time some districts from the Maitland diocese were added to the Bathurst jurisdiction. Dr. Byrne, says Cardinal Moran, "strenuously and successfully carried on the great work of education and religion begun by his predecessor", and, like him, was "a model to his clergy in his unwearying and self-sacrificing toil". St. Stanislaus' College, which from its foundation had been under the control of secular priests, was in 1888 entrusted to the Vincentian Fathers. It is now (1907) one of the foremost educational institutions in Australia, and noted for the work done in its well-equipped physical and chemical laboratories. When pronounced to be stricken by an incurable malady, Dr. Byrne received from his priests and people, on the Epiphany, 1901, a pathetic demonstration of affection, accompanied by a money gift of £2,530. He passed away on the 12th of January, 1901. To him succeeded the Right Rev. John Dunne — builder, missioner, organizer — who was consecrated 8 September, 1901. He is to complete the architecturally fine college of St. Stanislaus, and under his administration the missionary and scholastic traditions of the diocese are well sustained. The efficiency of the Catholic schools is in no small measure due to the system of inspection inaugurated by the Rev. J. J. Brophy, D. D., LL. B. The principal lay benefactors of the diocese are Mr. James Dalton, K.S.G., and Mr. John Meagher, K.S.G.
In the diocese there are: 18 parochial districts; 89 churches; 29 secular priests; 7 regular priests; 7 religious brothers; 242 nuns; 1 college; 8 boarding schools for girls; 11 day high schools; 39 primary schools (with 3,496 pupils); 1 orphanage; 4,298 children in Catholic schools; and a Catholic population of about 27,000.
Moran, History of the Catholic Church in Australasia (Sydney, s. d.); Hutchinson, Australasian Encyclopaedia (London, 1892); The Australian Handbook (Sydney, 1906); Australasian Catholic Directory for 1907 (Sydney, 1907); Report of the Catholic Schools in the Diocese of Bathurst for the Year 1906 (Dubbo, 1907); Missiones Catholicae (Propaganda, Rome, 1907), 694.
APA citation. (1907). Bathurst. In The Catholic Encyclopedia. New York: Robert Appleton Company. http://www.newadvent.org/cathen/02349a.htm
MLA citation. "Bathurst." The Catholic Encyclopedia. Vol. 2. New York: Robert Appleton Company, 1907. <http://www.newadvent.org/cathen/02349a.htm>.
Transcription. This article was transcribed for New Advent by Susan Birkenseer.
Ecclesiastical approbation. Nihil Obstat. 1907. Remy Lafort, S.T.D., Censor. Imprimatur. +John M. Farley, Archbishop of New York.
Map and profile data of a section of the Pacific Ocean displayed by GeoMapApp.
This chapter is appropriate for grades 9 - 16.
After completing this chapter, students will be able to:
- navigate to hydrothermal vent locations around the world and access geospatial data from multiple sources;
- sort, select, edit, and view data in spreadsheet format;
- create a topographic profile of the ocean floor and measure size of various features;
- examine images from hydrothermal vent locations to make observations of species diversity and geologic features; and
- develop and test hypotheses using observations and metadata.
The excitement of scientific discovery is captured in the surprising story of hydrothermal vents and the diverse ecosystem they support in the absence of sunlight. In this chapter, students are guided through their own process of discovery, simulating the past forty years of research at underwater volcanoes along oceanic spreading centers. They use tools and data from the Ridge 2000 Data Portal along with other sources. Students explore the topography and biology of these regions with GeoMapApp, a program developed at the Lamont-Doherty Earth Observatory. They investigate, develop, and test hypotheses based upon the observations they collect.
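The profile tools students use here measure distance and relief along a transect across the ridge. For instructors who want a quick offline illustration of what such a measurement involves, the sketch below computes the length and relief of a profile from a handful of (longitude, latitude, depth) soundings. The transect points are hypothetical, and this is only an approximation of the kind of calculation the software performs, not GeoMapApp's actual implementation.

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometres between two lon/lat points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def profile_stats(points):
    """points: ordered (lon, lat, depth_m) soundings along a transect.
    Returns (length_km, relief_m): total along-track length and the
    difference between the deepest and shallowest soundings."""
    length = sum(
        haversine_km(a[0], a[1], b[0], b[1])
        for a, b in zip(points, points[1:])
    )
    depths = [p[2] for p in points]
    return length, max(depths) - min(depths)

# Hypothetical soundings crossing a ridge axis (depths in metres,
# shallowest point at the crest).
transect = [
    (-104.4, 9.8, 3100.0),
    (-104.3, 9.8, 2600.0),
    (-104.2, 9.8, 3000.0),
]
length_km, relief_m = profile_stats(transect)
```

With the hypothetical soundings above, the transect is roughly 22 km long with 500 m of relief; students making real measurements in GeoMapApp are doing the equivalent interactively along their own profile lines.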
Before beginning this chapter, students should be familiar with the basics of plate tectonic theory, and have some knowledge of divergent margins and spreading centers where new ocean crust is forming. As they work through the chapter activities, they will be guided to learn more specific information about the following topics using the Dive and Discover web pages sponsored by the Woods Hole Oceanographic Institution:
- Mid-ocean ridges - comparison between fast-spreading vs. slow-spreading ridges
- Hydrothermal vent circulation - diagram explaining how fluids migrate through bedrock and are released at a hydrothermal vent
- Life forms at hydrothermal vents - descriptions and images of the various life forms that make up the ecosystem around a hydrothermal vent
- Photosynthesis vs. Chemosynthesis - comparison between different pathways of producing energy to sustain life
As an introduction to the topic, users may view an 8.5 minute video clip provided by NBC's Today show, documenting a trip that journalist Ann Curry took to the East Pacific Rise in October 2008 aboard the submersible Alvin.
Instructors will want to work through the Case Study and each part of the chapter ahead of time to see what steps might require more guidance for their students. It will be useful for instructors to familiarize themselves with the menu options in GeoMapApp, especially the zoom and profile tools and the data table configuration menu. Since the instructions for GeoMapApp are complex, it is easy to make a mistake and it is easier for an instructor to troubleshoot possible user mistakes when they are familiar with the program themselves. It is also important for instructors to read through the supplemental webpages that provide background information on hydrothermal vents and the ecosystem they support.
The chapter is designed so students can easily repeat the setup steps if they need to quit GeoMapApp at the end of a class period or if there is a problem in loading or configuring data. At the end of each part, they are directed to save their work before they move on to the next part. At the beginning of each part, they are given an option to reload their data in case they are restarting the program at the beginning of a class period or if the GeoMapApp program needs to be restarted for some reason.
Students can work through the steps of the chapter individually or in small groups. It may be best to have them organized into groups of two or three, so they can share ideas and provide help with following the steps as needed. Instructors are encouraged to move between groups frequently, checking on student progress and providing assistance as needed to avoid long periods of downtime or misdirected work. If possible, organize the classroom so the instructor views most or all of the computer screens at the same time.
The Going Further section describes several opportunities for further exploration by students who are interested in an independent research project for a science fair or extra credit.
The material and activities explored in this chapter would be suitable for a high school earth science or college-level introductory geology or oceanography class. The chapter could be used in conjunction with a unit on plate tectonics, volcanoes, or marine ecosystems.
The following National Science Education Standards are supported by this chapter:
- Identify questions and concepts that guide scientific investigations. Students should form a testable hypothesis and demonstrate the logical connections between the scientific concepts guiding a hypothesis and the design of an experiment. They should demonstrate appropriate procedures, a knowledge base, and conceptual understanding of scientific investigations.
- Use technology and mathematics to improve investigations and communications. A variety of technologies, such as hand tools, measuring instruments, and calculators, should be an integral component of scientific investigations. The use of computers for the collection, analysis, and display of data is also a part of this standard. Mathematics plays an essential role in all aspects of an inquiry. For example, measurement is used for posing questions, formulas are used for developing explanations, and charts and graphs are used for communicating results.
- The great diversity of organisms is the result of more than 3.5 billion years of evolution that has filled every available niche with life forms. (standard 12CLS3.2)
- The distribution and abundance of organisms and populations in ecosystems are limited by the availability of matter and energy and the ability of the ecosystem to recycle materials. (standard 12CLS5.5)
- The outward transfer of Earth's internal heat drives convection circulation in the mantle that propels the plates comprising earth's surface across the face of the globe. (standard 12DESS1.2)
- Scientists in different disciplines ask different questions, use different methods of investigation, and accept different types of evidence to support their explanations. Many scientific investigations require the contributions of individuals from different disciplines, including engineering. New disciplines of science, such as geophysics and biochemistry often emerge at the interface of two older disciplines.
- Individuals and teams have contributed and will continue to contribute to the scientific enterprise. Doing science or engineering can be as simple as an individual conducting field studies or as complex as hundreds of people working on a major scientific question or technological problem. Pursuing science as a career or as a hobby can be both fascinating and intellectually rewarding.
- Science will never be finished. Although men and women using scientific inquiry have learned much about the objects, events, and phenomena in nature, much more remains to be understood.
- The historical perspective of scientific explanations demonstrates how scientific knowledge changes by evolving over time, almost always building on earlier knowledge. (standard 12GHNS3.4)
The following U.S. National Geography Standards are supported by this chapter:
- How to use maps and other geographic representations, tools, and technologies to acquire, process, and report information from a spatial perspective
- How to analyze the spatial organization of people, places, and environments on Earth's surface
- The physical processes that shape the patterns of Earth's surface
- The characteristics and spatial distribution of ecosystems on Earth's surface
This chapter will require three to four class periods of 45-60 minutes each.
Lesson plan outline:
1. Briefly introduce the topic beforehand by showing the NBC video of Ann Curry's trip in Alvin to the East Pacific Rise.
2. Assign the Case Study as reading homework, making printed copies available for those students who do not have computer access at home. Encourage students to explore the additional resources that describe the first discovery of life at hydrothermal vents in 1977 and 1979.
3. For the first day of the activity, students should be able to work through Parts 1 and 2, depending on the length of class period available.
4. On the second day of the activity, students will work through the observations and images in Part 3, and may have time to continue on to Part 4, depending on the length of class period available.
5. On the third day of the activity, students should be able to work through the example hypothesis in Part 4, as well as two or three of the working hypotheses. Instructors may wish to split up the working hypotheses between groups, so that each group tests one of the hypotheses and reports back to the class on their findings. Instructors are encouraged to assist students in developing their own hypotheses for testing to conclude the chapter.
Sample Results that can serve as an Answer Key
Students will generate a variety of products as they work through each part of the chapter.
Part 1. Getting Started with GeoMapApp
Part 2. Explore bathymetry data from the East Pacific Rise
Part 3. Observe life in the extreme environment of the East Pacific Rise
|
<urn:uuid:0ab4c8bf-1f37-48b8-af21-158a78ec5b05>
|
CC-MAIN-2016-26
|
http://serc.carleton.edu/eet/extreme_environments/teaching_notes.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00140-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.910869
| 1,810
| 3.828125
| 4
|
Small-hole or keyhole technology in the utility maintenance industry is the equivalent of microsurgery in the healthcare industry. Those who embrace this innovation are achieving benefits and cost-savings for themselves, their stakeholders, and the municipalities where they operate.
The fundamental approach consists of opening the street surface using pavement coring technology, performing repair tasks through the 18-inch diameter cored hole with specialized extension tools, and reinstating the core removed in the first step via a specifically formulated bonding material. This low-intrusion approach to performing routine repairs and infrastructure upgrades is packed with benefits for customers, the work-force, and the community.
Although the technology requires investments in capital equipment, specialized tooling, and retraining workers, the benefits for all of the stakeholders can provide substantial returns on those investments. A street coring machine, which costs from $30,000 to $90,000, saws an 18-inch diameter core or plug from the street surface. The earth over the top of the infrastructure is removed using a vacuum excavator, which typically costs from $60,000 to $80,000, resulting in a vertical tunnel directly down to the area of infrastructure to be repaired.
Workers, standing on the street surface and not inside of an excavation pit, “operate” on the infrastructure using specialized extension tools. Most first-time implementers initially invest about $20,000 to $35,000 in tooling to perform an average of three to four repair types. When the operation is complete, the earth is replaced and the street plug that had been set aside is reinstated using a specifically formulated bonding agent. The “healed” street surface is as strong as the original surface and the “scarring” is minimal.
Depending on how the street coring and vacuum excavation components are configured on the truck chassis, the fleet investment can run from $50,000 to $100,000. Some first-time investors invest in training and consulting services to shorten the startup time and begin gaining returns on the technology quickly. Those who use these services estimate spending around $30,000 for the advice and training.
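Taken together, the figures above imply a rough first-time investment envelope. As a minimal sketch (the line items and their low/high figures come from the article; treating them as independent, non-overlapping costs is my assumption, and the fleet-configuration figure is left out to avoid double counting the coring and vacuum units):

```python
# Rough first-time keyhole investment envelope, summing the article's
# low/high figures per line item (dollars). Assumes the items are
# independent, non-overlapping costs.
costs = {
    "street coring machine": (30_000, 90_000),
    "vacuum excavator":      (60_000, 80_000),
    "extension tooling":     (20_000, 35_000),
    "training/consulting":   (30_000, 30_000),
}

low = sum(lo for lo, _ in costs.values())
high = sum(hi for _, hi in costs.values())
print(f"Estimated first-time investment: ${low:,} to ${high:,}")
# Estimated first-time investment: $140,000 to $235,000
```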
Why should a public works department consider using keyhole technology? Because like microsurgery, smaller and less intrusive is better. If one makes the analogy that cutting a road is like cutting one's skin, then a smaller hole means a shorter recovery time and a less intrusive operation. Keyhole technology has been used primarily by the natural gas industry, but it has the potential to be used on drinking water pipelines and by government agencies for subsurface utility engineering on urban reconstruction projects.

Keyhole cut history
The roots of small-hole work can be traced to the early 1960s. Philadelphia Electric Co. and the Institute of Gas Technology co-authored a paper, “Repair of Bell and Spigot Joints Through Small Openings,” which was presented at the Leak Control Symposium in August 1963. This study examined the repair of cast iron joints using encapsulation methods through small-hole excavation using vacuum technology. For the next two decades, this process evolved, with mostly specialized contract forces locating and repairing multiple joints, day-lighting the entire joint, and using workers dipping head-first into the small hole to install a boot entirely around the joint.
In the 1990s, anaerobic sealants emerged on the scene. This repair operation consisted of drilling a small hole in the top of the joint and injecting a sealant to revitalize the joint material. This created a major milestone in the infrastructure surgery process, allowing the pipe to be repaired entirely from above the hole. The development of above-ground tools made the scope endless. Now the gas industry is not only making cast iron joint repairs, but working together with Des Plaines, Ill.-based Gas Technology Institute (GTI) to undertake steel main repairs, curb valve installations, anode and test station installations, new and replacement service installations, service cut-offs, plastic service cap repairs, and underground utility verification.
The long-term goal is that whatever can be done today in a 3x4-foot excavation can be accomplished through a small hole. Paramount to this is the increased development of sophisticated locating tools.
PECO Energy Co. (PECO), formerly Philadelphia Electric Co., helped develop small-hole work in the 1960s. The company's path followed that of the gas industry, with primarily cast iron work done in small holes with vacuum excavation. But in the mid-1990s, the vacuum truck helped provide great cost savings in the field of pre-engineering work. It was from this that a full-time underground utility verification team was formed that helped develop precision locating tools.
|
<urn:uuid:d393dcf6-ee29-4b88-b775-675a4ac2e57a>
|
CC-MAIN-2016-26
|
http://www.pwmag.com/admixtures/utility-microsurgery.aspx?dfpzone=general
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00015-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.942744
| 967
| 2.609375
| 3
|
development, the fly's two wings grow from a structure in the larva known
as the wing imaginal disk (top images). (An imago is an insect in its final,
adult state.) The haltere grows from the larval haltere imaginal disk (bottom
images). Remember the Ubx Hox gene? Using staining again, we can detect the gene product
of Ubx. This reveals that the Ubx gene is naturally "off" in the wing
disk—note the absence of the bright green stain in the upper right
image—and is "on" in the haltere disk (lower right image).
Now you'll see what happens when the Ubx gene—just one of a large number of Hox genes—is turned off in the haltere disk.
|
<urn:uuid:ea780180-b17a-46b7-b8bb-5f72132cdf9d>
|
CC-MAIN-2016-26
|
http://www.pbs.org/wgbh/nova/genes/fate-07.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00098-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.864684
| 165
| 3.015625
| 3
|
Neil Armstrong, the test pilot, aerospace engineer, university professor, United States Naval Aviator and American astronaut, has died at the age of 82 in Cincinnati, Ohio. His death was due to complications resulting from recent cardiovascular procedures carried out to relieve blocked arteries. He will forever be remembered by the history books as the first man to step foot on the Moon, the defining moment for a generation and inspiration to the generations that followed.
Early taste for flying

Born on August 5, 1930, in Wapakoneta, Ohio, he acquired a love for flying early, with his father taking him to the Cleveland Air Races when he was just two. He had his first airplane flight when he was six in a Ford Trimotor and, after his family moved back to Wapakoneta in 1944, he began taking flying lessons at the county airport. He earned his flight certificate when he was 15, before he even had a driver’s license.
Although he was accepted to MIT, he began attending Purdue University in 1947 where he studied aerospace engineering with his college tuition paid under the Holloway Plan. This required him to commit to two years of study, followed by three years of service in the U.S. Navy, before finishing the degree with another two years of study.
In January 1949, Armstrong received a call-up from the Navy to report to Naval Air Station Pensacola for flight training. After almost 18 months, he qualified for carrier landing and in August, 1950, he received notification that he was a fully qualified Naval Aviator.
After being assigned to Fleet Aircraft Service Squadron 7 at NAS San Diego and then the all-jet Fighter Squadron 51, he made his first jet flight in an F9F-2B Panther in January, 1951. This was followed by his first jet carrier landing in June. That month he was also promoted from Midshipman to Ensign and was bound for Korea.
Armstrong flew 78 missions over Korea, racking up a total of 121 flight hours with the majority of missions flying armed reconnaissance over the primary transportation and storage facilities south of the village of Majon-ni. He received the Air Medal for 20 combat missions, a Gold Star for the next 20, and the Korean Service Medal and Engagement Star, before leaving the Navy on August 23, 1952, and becoming a Lieutenant, Junior Grade in the U.S. Naval Reserve.
He then returned to Purdue, where he met Janet Elizabeth Shearon, who became his first wife on January 28, 1956. He received his Bachelor of Science degree in aeronautical engineering from Purdue in 1955, and later gained a Master of Science degree in aerospace engineering from the University of Southern California in 1970, along with honorary doctorates from numerous universities.
Testing times

After graduating from Purdue, Armstrong became an experimental research test pilot at the National Advisory Committee for Aeronautics High-Speed Flight Station at Edwards Air Force Base in July 1955, where he logged over 900 flights in more than 200 different aircraft, including the Bell X-1B, the North American X-15 and the Lockheed F-104 Starfighter. During this time, he was involved in a number of incidents, and while many test pilots praised his engineering ability, others felt that pilot-engineers like Armstrong flew in a way that was more mechanical than those with natural flying ability.
In 1958, he was selected for the U.S. Air Force’s Man in Space Soonest program, which was attempting to beat the Soviet Union in putting a man into outer space. The program was cancelled later that year and replaced with NASA’s Project Mercury in 1959. After his application to become part of the second group of NASA astronauts in June, 1962, he joined the NASA Astronaut Corps in September of that year as one of two civilian pilots selected for the group of nine.
On September 20, 1965, Armstrong was announced as Command Pilot for Gemini 8, which launched on March 16, 1966. This mission involved the first-ever docking between two spacecraft. Although the docking was successful, the docked spacecraft began to roll and the mission ultimately had to be cut short with most of the objectives – including a planned EVA by the pilot David Scott – had to be cancelled.
Armstrong then served as CAPCOM on Gemini 11, which launched on September 12, 1966, before serving as backup commander for Apollo 8. As Apollo 8 orbited the Moon on December 23, 1968, Armstrong was offered the post of commander of Apollo 11. A March 1969 meeting of NASA management confirmed that Armstrong would be the first man on the Moon. This was supposedly at least partly because he was seen as a person without a big ego, although the design of the Lunar Module (LM) cabin was given as the official reason.
That giant leap
At 20:17:39 UTC on July 20, 1969, the LM touched down on the surface of the Moon. A few hours later, at 2:56 UTC on July 21, 1969, Armstrong descended the ladder from the LM and set his left foot down on the lunar surface to say, “That's one small step for [a] man, one giant leap for mankind."
Although Armstrong claimed that the “a” was obscured by static in the broadcast, he eventually conceded he may have dropped it. He is quoted in Guidebook for the Scientific Traveler: Visiting Astronomy and Space as saying he “would hope that history would grant me leeway for dropping the syllable and understand that it was certainly intended, even if it was not said – although it might actually have been." More recent analysis suggests Armstrong did indeed say “a man,” although this has not been absolutely confirmed.
Whether he did or he didn’t, from that moment on Neil Armstrong became one of the most recognizable names in human history. Out of an estimated world population of 3.6 billion, it is estimated that around 600 million people watched the grainy TV images. He was joined some 20 minutes later by Buzz Aldrin and the two planted the flag of the United States, which – as Aldrin said – appears to have toppled over when the LM’s Ascent stage lifted off.
A household name
After returning to Earth, Armstrong and his crew-mates took part in a 45-day “Giant Leap” tour across the U.S. and around the world and he also took part in Bob Hope’s 1969 USO show. However, he largely shunned the spotlight and announced his intention not to fly in space again shortly after Apollo 11.
He then became Deputy Associate Administrator for aeronautics for the Office of Advanced Research and Technology, Advanced Research Projects Agency (ARPA), but served in this position for only a year before resigning from it and NASA as a whole in 1971.
He then went into teaching, taking a position in the Department of Aerospace Engineering at the University of Cincinnati. Despite having offers from bigger and better known schools, he made this decision as he was concerned that faculty members at other schools might have been annoyed that he came straight into a professorship with only the USC master’s degree. After eight years teaching, he resigned in 1979 with no explanation as to why.
During his years teaching, he was approached by numerous businesses wishing to employ him as a spokesman but he turned down all offers. This was until 1979, when he began appearing in advertisements for Chrysler, as he believed it had a strong engineering division and were also in financial difficulty. He also acted as a spokesman for General Time Corporation and the Banker’s Association of America. He also served on the board of directors of several companies and served on the Rogers Commission that investigated the Space Shuttle Challenger disaster.
In February, 1991, Armstrong suffered a mild heart attack while skiing at Aspen, Colorado. Following complications resulting from surgery on August 7, 2012, to relieve blocked coronary arteries, Armstrong passed away on August 25, in Cincinnati, Ohio, prompting accolades from friends, family and world leaders.
Armstrong’s family released a statement saying, “Neil Armstrong was also a reluctant American hero who always believed he was just doing his job. He served his Nation proudly, as a navy fighter pilot, test pilot, and astronaut. He also found success back home in his native Ohio in business and academia, and became a community leader in Cincinnati.
While we mourn the loss of a very good man, we also celebrate his remarkable life and hope that it serves as an example to young people around the world to work hard to make their dreams come true, to be willing to explore and push the limits, and to selflessly serve a cause greater than themselves.”
Apollo 11 crew-mate Buzz Aldrin said, “I am very saddened to learn of the passing of Neil Armstrong today. Neil and I trained together as technical partners but were also good friends who will always be connected through our participation in the Apollo 11 mission. Whenever I look at the moon it reminds me of the moment over four decades ago when I realized that even though we were farther away from earth than two humans had ever been, we were not alone."
NASA Administrator Charles Bolden said, “as long as there are history books, Neil Armstrong will be included in them, remembered for taking humankind's first small step on a world beyond our own,” adding, “besides being one of America’s greatest explorers, Neil carried himself with a grace and humility that was an example to us all."
President Barack Obama said via Twitter, “Neil Armstrong was a hero not just of his time, but of all time. Thank you, Neil, for showing us the power of one small step."
|
<urn:uuid:6e2d2ba0-2cae-49dd-aeaa-d7004d4c4d0a>
|
CC-MAIN-2016-26
|
http://www.gizmag.com/neil-armstrong-dies-82/23876/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00186-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.984308
| 2,000
| 2.78125
| 3
|
Human behavior is placing at least a third of Earth's species at risk of extinction.
Can we just roll with the punches? Adapting to climate change isn't that simple.
Exactly how much are we damaging the environment when we put up new buildings and expand our cities?
Just how vulnerable and sensitive are insects to climate change?
With all the concerns about global warming and climate change one might ask the question, "Why don't species just adapt?" Find out on this Moment of Science.
|
<urn:uuid:0829e94e-76e8-48b4-b853-6aee7f4de52a>
|
CC-MAIN-2016-26
|
http://indianapublicmedia.org/amomentofscience/tag/ecosystems/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00153-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.942008
| 101
| 2.59375
| 3
|
Exponential Functions-Word Problem: Predicting an Exponential Formula
I tried to solve this for over two hours without any successful results. (Shake)
If you could please provide the solution and explain how you arrived at it, it would be very much appreciated. I am perplexed at this point. (Nerd)
An experiment with coins is performed. The experimenter starts with 4 coins. Each time she tosses the coins into the air, after they land, she counts the number of heads that appear and adds that amount (the number of heads) of coins to what she previously had. (i.e she tosses 4 coins, 2 out of 4 are heads, 4+2=she now has six coins.)
The experimenter repeats this process and claims that one could make a rough prediction about how the coins will increase (given any number of tosses) based on the formula for compound interest: A = P(1 + i)^n
In this case, P represents the amount of coins she started with.
i = 0.5, since there is a 50% probability of each coin landing on heads.
n= the number of tosses
If her hypothesis is correct, create a formula that predicts the total number of coins if an unfair coin is used (a weighted coin included in her starting amount of 4) that only comes up heads 1 out of every 4 times.
Thanks in advance.
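For what it's worth, the claim is easy to check numerically. A minimal sketch (function names are mine; it assumes every coin in play shares the same heads probability, so the weighted case uses i = 0.25 for all coins):

```python
import random

def predicted(P, i, n):
    """Compound-interest-style prediction: A = P * (1 + i)**n."""
    return P * (1 + i) ** n

def simulate(P, i, n, trials=20_000):
    """Average coin count over many repeats of the toss-and-add experiment."""
    total = 0
    for _ in range(trials):
        coins = P
        for _ in range(n):
            # count heads among the current coins, add them to the pile
            coins += sum(1 for _ in range(coins) if random.random() < i)
        total += coins
    return total / trials

# Fair coins: 4 * 1.5**n.  Weighted coins (heads 1 in 4): 4 * 1.25**n.
print(predicted(4, 0.5, 3))   # 13.5
print(predicted(4, 0.25, 3))  # 7.8125
print(simulate(4, 0.25, 3))   # averages close to the 7.8125 prediction
```

The simulation's average tracks the formula because each toss multiplies the expected coin count by (1 + i), just as compound interest multiplies a balance by (1 + i) each period.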
|
<urn:uuid:33da508a-c019-4581-9b3b-402ef53d226d>
|
CC-MAIN-2016-26
|
http://mathhelpforum.com/algebra/182580-exponential-functions-word-problem-predicting-exponential-formula-print.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00129-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.955119
| 292
| 3.109375
| 3
|
The Jews, also known as the Jewish people, are an ethnoreligious group originating from the Israelites, or Hebrews, of the Ancient Near East. Jewish

This chapter conveys the history, religion, and culture of the Jewish people from its Biblical origins to the present. These characteristics of the Jews set them ...

The original name for the people we now call Jews was Hebrews. The word "Hebrew" (in Hebrew, "Ivri") is first used in the Torah to describe Abraham (Gen.

Jun 9, 2010 ... The work entailed taking DNA samples from 121 people living in 14 Jewish communities around the world, ranging from Israel to North Africa ...

Sep 24, 2013 ... TheTruthIsFullOfLies, when you find the truth, you will also find a lot of lies. I am going to do this until you either wake up or I die a martyr.

Jew, Hebrew Yĕhūdhī, or Yehudi, any person whose religion is Judaism. In the broader sense of the term, a Jew is any person belonging to the worldwide group

It has been said that the history of almost all of the Jewish holidays can be ... Historians have classified six explanations as to why people hate the Jews:

The Jews were an ancient people who had resided in Europe for more than two thousand years. The Jews were expelled from Israel by the Romans following ...

Representing Jewish Communities In 100 Countries Across Six Continents,

Did Jewish intelligence evolve in tandem with Jewish diseases as a result of discrimination in the ghettos of medieval Europe? That's the premise of a ...
|
<urn:uuid:9161ac52-f84b-435d-98d0-fb0caf708e17>
|
CC-MAIN-2016-26
|
http://www.ask.com/web?qsrc=6&q=Jewish+People&o=2852&l=dir
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00179-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.910314
| 346
| 3.453125
| 3
|
became the killing centre where the largest numbers of European Jews were killed. By mid-1942, mass gassing of Jews using Zyklon-B began at Auschwitz, where extermination was conducted on an industrial scale, with some estimates running as high as three million persons eventually killed through gassing, starvation, disease, shooting, and burning. Nine out of ten were Jews. In addition, Gypsies, Soviet POWs, and prisoners of all nationalities died in the gas chambers.

Auschwitz-Birkenau was located near the provincial Polish town of Oswiecim in Galicia, and was established by order of Heinrich Himmler on 27 April 1940.

Private diaries of Goebbels and Himmler unearthed from the secret Soviet archives show that Adolf Hitler personally ordered the mass extermination of the Jews; as Goebbels wrote, "With regards to the Jewish question, the Fuhrer decided to make a clean sweep ..."

Children Of Izieu

You find horrifying stories of Auschwitz-Birkenau and a clique of fanatical, ruthless SS men. And you find stories that bear witness to goodness: in Auschwitz the missionary Jane Haining refused to abandon her children and showed herself to be a saint. And Oscar Schindler came to Auschwitz to save 300 Schindler women from certain death. He managed to do it, the only shipment out of the Nazi death camp during WW2.
|
<urn:uuid:a50611f8-d2a1-4892-b47e-f119cd6ffd7f>
|
CC-MAIN-2016-26
|
http://www.deathcamps.info/new_page_1.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00098-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.946004
| 314
| 3.125
| 3
|
state transition diagram (computing dictionary)
A diagram consisting of circles to represent states and directed line segments to represent transitions between the states. One or more actions (outputs) may be associated with each transition. The diagram represents a finite state machine.
(03 Feb 2009)
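As an illustrative sketch of what such a diagram encodes (the turnstile states and inputs below are a hypothetical example, not part of the definition), a finite state machine can be written as a transition table mapping (state, input) pairs to a next state plus an action:

```python
# Transition table for a small finite state machine: each directed edge of
# the state transition diagram becomes a (state, input) -> (next state, action)
# entry. Actions (outputs) are associated with transitions, not states.
transitions = {
    ("locked",   "coin"): ("unlocked", "unlock arm"),
    ("locked",   "push"): ("locked",   "refuse entry"),
    ("unlocked", "push"): ("locked",   "admit one person"),
    ("unlocked", "coin"): ("unlocked", "return coin"),
}

def run(start, inputs):
    """Walk the diagram, collecting the action fired on each transition."""
    state, actions = start, []
    for symbol in inputs:
        state, action = transitions[(state, symbol)]
        actions.append(action)
    return state, actions

final, actions = run("locked", ["coin", "push", "push"])
print(final)    # locked
print(actions)  # ['unlock arm', 'admit one person', 'refuse entry']
```

Drawing the same machine as circles and directed line segments, with each edge labelled "input / action", reproduces the diagram the entry describes.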
|
<urn:uuid:0edbde0d-7097-4c81-81ec-5fcd905dc874>
|
CC-MAIN-2016-26
|
http://www.mondofacto.com/facts/dictionary?state+transition+diagram
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00090-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.866802
| 75
| 2.84375
| 3
|
There are three types of standards in this section. Professional Standards have been established for North Carolina Media Coordinators and Instructional Technology Facilitators and are used in the evaluation of these professionals. The Information and Technology Essential Standards, implemented during the 2011-2012 school year, replaced the Technology/Computer Skills and the Information Skills Standard Course of Study. The American Association of School Librarians and the International Society for Technology in Education standards are voluntary standards published by the respective professional organizations, including standards for students, professionals, and administrators. Follow the links in the left navigation to pages with more information.
|
<urn:uuid:9d2aecf5-3c99-4b1c-992b-ac10d54184fc>
|
CC-MAIN-2016-26
|
http://www.dpi.state.nc.us/dtl/standards/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00090-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.925986
| 116
| 2.8125
| 3
|
Madness is an ongoing theme throughout Hamlet. It emerges primarily in two characters, Hamlet and Ophelia, but other characters comment on it, speculate about it, etc. This makes it seem like madness is a common possibility, and something that happens to people, like a storm.
In Hamlet's case, the main suggested causes of his madness are his love for Ophelia and the loss of his father combined with the remarriage of his mother. However, he is only pretending to be mad so that he can investigate Claudius.
In Ophelia's case, she actually loses it. She ends up dying as a result, unbalanced by the loss of Hamlet's love, his actions towards her, and, of course, his killing of her father.
|
<urn:uuid:13adfc7e-d3aa-4db1-8d52-472290a2a8f9>
|
CC-MAIN-2016-26
|
http://www.enotes.com/homework-help/themes-madness-hamlet-by-shakespear-2652
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00009-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.979926
| 188
| 2.515625
| 3
|
The Resurrection and Genesis
First published: 10 April 2009 (GMT+10)
Re-featured on homepage: 5 April 2015 (GMT+10)
On Easter, many Christians around the world celebrate the Resurrection of our “great God and Saviour, Jesus Christ” (Titus 2:13). For them, this is the most important holiday of the Christian calendar. The doctrine of the Resurrection of Christ is one of the most important doctrines of Christianity; without the Resurrection, we have no hope of salvation from our sins (1 Corinthians 15:12–18).
The pagan culture of the first century did not accept resurrection even as a possibility, and non-Christians today are just as resistant to the idea, even coming up with ludicrous theories where Jesus was not really dead when He went to the tomb to try to explain His appearance three days later. Or they claim that His appearance was spiritual; that perhaps it was a hallucination or a vision, but certainly not a physical manifestation.
But ancient people had language to speak about spirits and ghosts, and indeed, that would have gone over much better with people in the Greco-Roman culture. But when they say that Jesus was resurrected, they mean precisely that He was brought back to life in a physical body.
Jesus’ Resurrection as a Historical Event
When we call Jesus’ resurrection a historical event, we must define ‘historical’, because unbelieving scholars use different definitions of ‘historical’ to deem Christ’s resurrection unhistorical. So we must explore the different uses of historical to determine in which senses we mean when we speak of Christ’s resurrection as a historical event.1
The simplest definition of ‘historical event’ is simply something that happened, whether or not it is important in terms of world history, whether or not there is a record or even a witness of it. So by this definition, anything that happens is historical. So Jesus’ resurrection is historical in this sense. New Testament scholar N.T. Wright calls this definition “history as event”.1 Another definition is “history as significant event;” hardly anyone who believes that the Resurrection of Christ is historical in the first sense would argue that it is not in the second sense.
What is usually contested is whether or not Jesus’ Resurrection is historical in the sense of being a provable event. Skeptics of the Resurrection accounts sometimes argue that all we have are the accounts in the Gospels which were written decades later, and even those do not depict the actual moment of the Resurrection. They argue that in the intervening decades mythology took over and they explained the missing body of Jesus with a Resurrection story. But this view is flawed on several points.
The earliest evidence
First, the accounts in the Gospels are neither the only nor the earliest evidence we have of Christian writing about the Resurrection. That honor goes to 1 Thessalonians, one of the earliest of Paul’s letters, which will be examined below and which was written around AD 50.2 So we have evidence that about two decades after Christ’s death, there was a group of people who insisted He was raised from the dead and had built a decent portion of their theology around that fact, which doesn’t happen overnight. But the Gospel accounts, while penned decades after the events they describe (c. AD 30–33), go back to early oral tradition, which seems remarkably untainted by ‘theologizing’ on the part of the authors.
The Gospel accounts
The Resurrection accounts in the four canonical Gospels (penned from AD 55–853) are often criticized for being contradictory, but many of the alleged contradictions are no more than we would expect from any four different accounts of an event several decades after the fact. They include things such as who precisely made up the group of women who went to the tomb, whether there was one angel or two, and so on. Most of these are not even contradictory, and the critics clearly don’t understand logic, since they are not mutually exclusive; for instance, one account may mention only the angel who spoke, while the other account mentioned both angels. It would be a contradiction if one account specified only one angel.
It makes sense that the men who wrote the accounts might recall different details, even seemingly conflicting details, in their retelling of the event. What does not make sense is to say that since the authors include different women in the group that went to the tomb, the Resurrection obviously did not occur, and the same goes with all the other alleged contradictions.4
The Early Church
One of the strongest evidences for the historical nature of the Resurrection is somewhat indirect, in that it is required to explain a series of historical events, which make absolutely no sense unless the Resurrection actually happened. First, the disciples of Jesus went from cowering in an upper room (Peter had apparently already gone back to fishing), afraid for their lives, to proclaiming in the streets a little over a month later that Jesus was the Jewish Messiah and had risen from the dead. Ten of the apostles were martyred in various ways; only John died of old age, and Christians underwent many different periods of persecution, both at the local and state level. One could argue that many Christians were deluded, but to say that the apostles would die for what they knew to be a lie stretches credulity.5
Claiming that He was resurrected was about the most unlikely way a first-century Jew would have explained an empty tomb. First-century Jews had diverse beliefs about the afterlife, from the Sadducees, who did not believe in the Resurrection at all, to the Pharisees, who believed in the Resurrection (though even among them there was diversity of opinion as to whether the unrighteous would be resurrected). But no type of Judaism believed that one person was going to be resurrected before everyone else; this is likely why the disciples had no idea what Jesus was talking about when He predicted His death and resurrection. The belief that the resurrection would happen all at once at the end of time, whether to everyone or to the righteous only, rendered His words incomprehensible to them until they actually saw the risen Christ.
Implications of Christ’s Resurrection for His Followers
There is evidence that, almost from the beginning of the Christian movement, Christ’s resurrection was used to explain what His believers would experience in the Resurrection. In fact, one thing that marks the Gospel accounts out as going back to a very early oral tradition that was not tampered with by the Gospel authors is the distinct lack of such extrapolation from Christ’s resurrection to our own.6 In 1 Thessalonians 1:10, Paul calls Jesus “[God’s] Son from Heaven, whom He raised from the dead.” He does not return to the resurrection until near the end of the letter, in 4:13–18, but that short passage is very important for reconstructing early Christian belief in the Resurrection, because it is the earliest example of Resurrection theology: “We believe that Jesus died and rose again and so we believe that God will bring with Jesus those who have fallen asleep in him.” So the Resurrection of Jesus becomes the basis for the Christian’s resurrection when He returns. In Philippians 3:20–21, we find the explicit statement that our resurrection bodies will be just like Jesus’.
Christ as the Firstfruits of the Resurrection and the Last Adam
The most important developments of Paul’s theology regarding the resurrection of believers are his statements in 1 Corinthians 15 and Romans 5:12–21 (penned in 53–54 AD7 and 57–58 AD,8 respectively). In the former, we find for the first time the reason why Christians can expect to be resurrected because of Jesus’ resurrection: Jesus is “the firstfruits” of the Resurrection, a guarantee that those who are under Him will also be raised when He returns (1 Corinthians 15:23).
Paul made a clear contrast between the sin of the first man, Adam, and the obedience of the Last Adam, Christ. Adam’s sin makes us all sinners by nature, but Jesus’ sacrifice enabled our sin to be imputed (credited) to Him (Isaiah 53:6), and His perfect life enabled His righteousness to be imputed to believers in Him (2 Corinthians 5:21). This is hard for modern Westerners to understand, because Western culture is very individualistic. But in New Testament times, as in most cultures today, people thought in collective terms and would readily understand this: the actions of one person necessarily affected the whole, especially the actions of the head of a group. And if corporate punishment is ‘unjust’, whatever that might mean in an atheistic framework, then so is corporate redemption.
Paul essentially makes the argument that there are two ultimate ‘heads’ of two types of humanity; Adam and Christ.9 All people are under either one or the other, and the action of one’s ‘head’ determines their standing before God:
‘Paul is insisting that people were really ‘made’ sinners through Adam’s act of disobedience just as they were really ‘made righteous’ through Christ’s obedience. … To be righteous does not mean to be morally upright, but to be judged acquitted, cleared of all charges, in the heavenly judgment. Through Christ’s obedient act, people became really righteous; but ‘righteousness’ itself is a legal, not a moral, term in this context.’10
Adam was the firstfruits of death, in a manner of speaking; the first sentence in history to capital punishment (Genesis 3:19) showed that all who were under him would also die. Paul calls Jesus the ‘Last Adam’, because humanity’s relationship to Adam is the only one that remotely resembles the relationship of Christians to Christ. Even so, most of the time Paul talks about them in terms of contrasting the two; the only similarity he ever brings out between the two is that both were heads of humanity whose actions had far-reaching consequences for those under them.11 This similarity is the foundation for the contrasts he goes on to point out.12
There are several important points of contrast that Paul brings out in the two key passages:
- The effects of Adam’s sin are universal; Christ’s obedience and sacrifice are only effective for those who believe (i.e. ‘those who receive’—Romans 5:17).
- Christ’s action itself is infinitely better than Adam’s action, as are the results of the action. Adam’s disobedience occurred when men were morally ‘neutral’ and it made them morally evil, and resulted in both the physical death and spiritual estrangement from God of every person descended from him. Christ’s life of obedience and selfless sacrifice, on the other hand, occurred when we were morally evil and makes us morally ‘good’ (Romans 5:16).
- Christ Himself is infinitely better than Adam was, even before the first man fell, in that while Adam received life as a gift from God, Christ has the power and authority to bring His new humanity into being (1 Corinthians 15:45).13
The first man, Adam: a historical figure
This comparison between Adam and Christ is absolutely essential to Paul’s argumentation, and his theology of the Resurrection in general. This requires that both Adam and Christ be historical figures who both have a kind of headship over the humanity that is under them, whose actions had widespread consequences for those under them.
More specifically, it requires that Adam be the literal ancestor of all humans whose sin really caused the introduction of death and the estrangement of humanity from God, just as Christ is a historical human being whose life of obedience to God and sacrificial death reconcile us to God and pay the sin-debt in a way that no one else could.
Some argue that it is not necessary for Adam to be historical. C.K. Barrett is typical of this view:
“Sin and death, traced back by Paul to Adam, are a description of humanity as it empirically is. For this reason the historicity of Adam is unimportant. It is impossible to draw the parallel conclusion that the historicity of Christ is equally unimportant. The significance of Christ is that of impingement upon a historical sequence of sin and death. Sin and death (to change the metaphor) are in possession of the field, and if they are to be driven from it this must be by the arrival of new forces which turn the scale of the battle, that is, by a new event. As Paul knew, this event had happened very recently, and its character as historical event raised no doubt or problem in his mind. This observation is not intended as a defence of the Gospel narratives as historical documents; they are entirely open to question and must stand their own ground. But so far as the ‘Second Adam’ or ‘Heavenly man’ figure is mythological, the myth has been historicized by Paul, and that not only because he was aware of Jesus as a historical person, but because a historical person was needed by the theological argument.”14
His argument fails, however, because it requires sinfulness and mortality to be the original state of humanity. The whole point is that sin and death themselves intruded on human history when Adam disobeyed God’s command. This is the reason why Christ’s obedience and sacrificial death were needed to overturn the rule of sin and death.15 If Jesus has to be a historical person, so does Adam. The historicity of the person of Jesus and His sacrifice means that we will be free from sin and death in the Resurrection. But without the historicity of Adam, we do not know why the world was under the rule of sin and death in the first place. If death had always been a part of the created order, part of what God called “very good”, then there is no way that death could be called the ‘last enemy’. Even Barrett has to admit that Paul treats Adam as a historical figure.16
Conclusion: without a historical Adam and Fall, the Gospel dangles rootlessly
As CMI has explained before, it is possible to be a Christian while not believing that the first chapters of Genesis relate historical events. However, this leaves such Christians with little foundation to resist the attacks and ridicule of sceptics, atheists, liberal religious leaders, fellow students, work-mates, etc. That’s because those few chapters set the stage for everything to come, in both the Old and New Testaments. Genesis is the foundation of the Gospel; without it we are left without an explanation for the origin of everything Christ came to remedy (see also Biblical creation impedes evangelism?). The Resurrection of Christ marks the dawning of what can quite literally be called a ‘new humanity’ under Christ, but if our sinfulness does not come from being under a sinful head of humanity, the first Adam, then we cannot be made righteous under a new head of humanity, the Last Adam, Jesus Christ. They logically stand or fall together, as Paul realized.
- See N.T. Wright, The Resurrection of the Son of God (Minneapolis: Fortress Press, 2003) for a detailed discussion of historicity, especially pp. 12–22. Return to text.
- F.F. Bruce 1 & 2 Thessalonians. Word Biblical Commentary (Grand Rapids: Eerdmans, 1982), p. xxi. Return to text.
- See Robert Guelich, Mark 1–8:26. Word Biblical Commentary (Nashville: Thomas Nelson, 1989), p. xxxii and D.A. Carson, The Gospel According to John. The Pillar New Testament Commentary (Grand Rapids: Eerdmans, 1991), p. 86. Return to text.
- See J.P. Holding, “Can’t We All Just Get Along?” Tekton Apologetics Ministries. Return to text.
- See J.P. Holding, “The Impossible Faith” Tekton Apologetics Ministries. Return to text.
- N.T. Wright, Surprised by Hope: Rethinking Heaven, the Resurrection, and the Mission of the Church. (New York: HarperOne, 2008), p. 56. Return to text.
- Ben Witherington III, Conflict and Community in Corinth: A Socio-Rhetorical Commentary on 1 and 2 Corinthians. (Grand Rapids: Eerdmans, 1995), p. 73. Return to text.
- Grant Osborne, Romans. IVP New Testament Commentary Series (Downers Grove: Intervarsity Press, 2004), p. 14. Return to text.
- L. Cosner, Romans 5:12–21: Paul’s view of a literal Adam, Journal of Creation 22(2):105–107, 2008. Return to text.
- Douglas Moo, The Epistle to the Romans: New International Commentary on the New Testament, (Grand Rapids: Eerdmans, 1996), p. 345. Return to text.
- Ben Witherington III, Paul’s Letter to the Romans: A Socio-Rhetorical Commentary (Grand Rapids: Eerdmans, 2004), pp. 146–147. Return to text.
- John Murray, The Epistle to the Romans: The English Text with Introduction, Exposition, and Notes (Grand Rapids: Eerdmans, 1965), vol 1, p. 192. Return to text.
- Anthony Thiselton, The First Epistle to the Corinthians: A Commentary on the Greek Text. NIGTC (Grand Rapids: Eerdmans, 2000), p. 1283. Return to text.
- C.K. Barrett, The First Epistle to the Corinthians. Black’s New Testament Commentary (Peabody: Hendrickson Publishers, 1968), p. 353. Return to text.
- Gordon Fee, The First Epistle to the Corinthians. NICNT (Grand Rapids: Eerdmans, 1987), p. 752. Return to text.
- Barrett, Ref. 14, p. 352. Return to text.
Thank you Lita for your brief and effective dissertation on the historicity of Jesus' resurrection. I would say also that the reason for the unbelievers' disbelief on the matter lies mainly in a lack of knowledge of the whole story of salvation as depicted from Genesis (3:15) to Revelation (21:3–7) and taken in its entirety in an unbroken panoramic view: Adam's disobedience requires his death as the consequence of his being cut off from God, who is Life. God's love toward him and the whole of humanity provides the means to realize his forgiveness and salvation. These means are accomplished by the second Person of the Godhead, Jesus - only God can do it! Isaiah 53 describes the scenario. God reveals his plan to humanity via the lambs' sacrifices, which reached their apex in the living parable of the Hebrews' Sanctuary ceremonies. The whole scenario DEMANDS that the story narrated in the Gospels find in Jesus' DEATH and RESURRECTION its fulfilment. Why should the story be considered unreal since Jesus' times? The pagan world was indeed permeated with stories of the underworld and figures of people who came back from Hades. Think of Orpheus and Eurydice, Hercules, Theseus, and perhaps others. They were well accepted without discussion. All the fuss is over the clear and wonderful story at the core of the Christian creed: Jesus' death and resurrection. Why? Doesn't it look suspicious? I can see just Satan - behind the scenes - as he tries every possible means to destroy it! Go forth CMI, you do a great job.
He is risen.
Thanks Lita for your detailed explanation of the importance and relevance of the Resurrection, and the linked articles which clarify the importance of believing the Genesis record from the very first verse to be true history. These articles equip me to speak boldly to compromising fellow "believers" about the historicity of the whole bible, and their need to reckon with it and come to saving faith in it. Thank you very much.
As always a great article! However, I think the range of dates given for the penning of the 4 canonical gospels should be 55 to 70 A.D., not 55 to 85. If we know anything, we know that the OT Scriptures were used by the apostles as an apologetic to prove Christ was the promised Messiah. I cannot imagine them missing the Roman destruction of Jerusalem in 70 A.D. as yet another fulfilled prophecy, this one having been made by Christ himself. I think liberal scholar J. A. T. Robinson also supports this view.
Details about The Elementary Teacher's Digital Toolbox:
The Elementary Teacher's Digital Toolbox is a book and CD set that welcomes new teachers to the field and offers guidance on topics such as classroom management, lesson planning, and standards. It is most appropriate for student teachers and certification candidates in induction programs. The digital format enables users to customize and print materials such as lesson plan outlines, checklists for classroom management, reading logs, and homework forms. The Elementary Teacher's Digital Toolbox puts the experience of a veteran teacher into the hands of a novice.

FEATURES:
- Companion CD-ROM: enables teachers to easily customize and print the many forms they will need during their teaching experience (e.g., lesson plan forms, reading logs, homework sheets, letters to parents and guardians).
- Field-tested vignettes of real classroom situations: at the conclusion of every chapter, these vignettes can spark discussion in professional development workshops for new and experienced teachers.
- Standards and benchmarks: explains their use and contains a comprehensive standards directory, introducing nationally accepted multi-subject standards as well as subject-area standards.
- Full lesson plans: eases the transition for a student teacher or a novice teacher.
- Professional development section: offers guidance on starting a teachers' book club, employment opportunities outside the classroom, and membership in professional organizations.
Rent The Elementary Teacher's Digital Toolbox 1st edition today, or search our site for other textbooks by Helen Hoffner. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Pearson.
We surveyed the genetic diversity among avian influenza virus (AIV) in wild birds, comprising 167 complete viral genomes from 14 bird species sampled in four locations across the United States. These isolates represented 29 type A influenza virus hemagglutinin (HA) and neuraminidase (NA) subtype combinations, with up to 26% of isolates showing evidence of mixed subtype infection. Through a phylogenetic analysis of the largest data set of AIV genomes compiled to date, we were able to document a remarkably high rate of genome reassortment, with no clear pattern of gene segment association and occasional inter-hemisphere gene segment migration and reassortment. From this, we propose that AIV in wild birds forms transient “genome constellations,” continually reshuffled by reassortment, in contrast to the spread of a limited number of stable genome constellations that characterizes the evolution of mammalian-adapted influenza A viruses.
Influenza A viruses are an extremely divergent group of RNA viruses that infect a variety of warm-blooded animals, including birds, horses, pigs, and humans. Because they contain a segmented RNA genome, mixed infection can lead to genetic reassortment. The wild bird population is thought to be the natural reservoir of influenza A viruses. Influenza A viruses can switch hosts and cause novel outbreaks in new species; influenza viruses containing genes derived from bird influenza viruses caused the last three influenza pandemics in humans. In this study, we surveyed the genetic diversity among influenza A viruses in wild birds. Through a phylogenetic analysis of the largest data set of wild bird influenza genomes compiled to date, we were able to document a remarkably high rate of genome reassortment, with no clear pattern of gene segment association and occasional inter-hemisphere gene segment migration and reassortment. From this, we propose that influenza viruses in wild birds form transient “genome constellations,” continually reshuffled by reassortment, in contrast to the spread of a limited number of stable genome constellations that characterizes the evolution of mammalian-adapted influenza A viruses.
Citation: Dugan VG, Chen R, Spiro DJ, Sengamalay N, Zaborsky J, Ghedin E, et al. (2008) The Evolutionary Genetics and Emergence of Avian Influenza Viruses in Wild Birds. PLoS Pathog 4(5): e1000076. doi:10.1371/journal.ppat.1000076
Editor: Daniel R. Perez, University of Maryland, United States of America
Received: January 14, 2008; Accepted: April 24, 2008; Published: May 30, 2008
This is an open-access article distributed under the terms of the Creative Commons Public Domain declaration which stipulates that, once placed in the public domain, this work may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose.
Funding: This research was supported in part by the Intramural Research Program of the NIH, and the NIAID.
Competing interests: The authors have declared that no competing interests exist.
Low pathogenic (LP), antigenically diverse influenza A viruses are widely distributed in wild avian species around the world. They are maintained by asymptomatic infections, most frequently documented in aquatic birds of the orders Anseriformes and Charadriiformes. As such, wild birds represent major natural reservoirs for influenza A viruses, and at least 105 of the more than 9000 species of wild birds have been identified as harboring them. These influenza A viruses, commonly referred to as avian influenza viruses (AIV), possess antigenically and genetically diverse hemagglutinin (HA) and neuraminidase (NA) subtypes, encompassing all known influenza A virus HA (H1–H16) and NA (N1–N9) subtypes. At least 103 of the possible 144 influenza A virus HA–NA combinations have been found in wild birds.
AIV maintained in wild birds have been associated with stable host switch events to novel hosts, including domestic gallinaceous poultry, horses, swine, and humans, leading to the emergence of influenza A lineages transmissible in the new host. Adaptation to domestic poultry species is the most frequent. Sporadically, strains of poultry-adapted H5 or H7 AIV evolve into highly pathogenic (HP) AIV, usually through acquisition of an insertional mutation resulting in a polybasic amino acid cleavage site within the HA. The current panzootic of Asian-lineage HP H5N1 AIV appears to be unique in the era of modern influenza virology, resulting in the deaths of millions of poultry in 64 countries on three continents, either from infection or culling. There are also significant zoonotic implications of this panzootic, with 379 documented human cases, resulting in 239 deaths in 14 countries since 2003 (as of April 2008). The Asian lineages of HP H5N1 AIV have also caused symptomatic, even lethal, infections of wild birds in Asia and Europe, suggesting that migratory wild birds could be involved in the spread of this avian panzootic. Concern is heightened since wild birds are also likely to be the reservoir of influenza A viruses that switch hosts and stably adapt to mammals, including horses, swine, and humans. The last three human influenza pandemic viruses all contained two or more novel genes that were very similar to those found in wild birds.
Despite the recent expansion of AIV surveillance and genomic data, fundamental questions remain concerning the ecology and evolution of these viruses. Prominent among these are: (i) the structure of genetic diversity of AIV in wild birds, including the role played by inter-hemispheric migration; (ii) the frequency and pattern of segment reassortment; and (iii) the evolutionary processes that determine the antigenic structure of AIV, maintained as discrete HA and NA subtypes. Herein, we address these questions using the largest data set of complete AIV genomes compiled to date.
Global Genome Diversity of AIV
The complete genomes of 167 influenza A viruses isolated from 14 species of wild Anseriformes in 4 locations in the U.S. (Alaska, Maryland, Missouri, and Ohio) were sequenced; viral isolates comprised 29 HA and NA combinations, including 11 HA subtypes (H1–H8, H10–H12) and all 9 neuraminidase subtypes (N1–N9). These sequences were collected as part of an ongoing AIV surveillance project at The Ohio State University and collaborators in other states (1986–2005) using previously described protocols, and more than double the number of complete U.S.-origin avian influenza virus genomes available in GenBank. In total, 1340 viral gene segment sequences (2,226,085 nucleotides) were determined (Table S1); they are listed on the Influenza Virus Resource website (http://www.ncbi.nlm.nih.gov/genomes/FLU/Database/shipment.cgi).
Cloacal samples from wild birds frequently show evidence of mixed infections with influenza viruses of different subtypes by serologic analysis. Therefore, the isolates chosen for sequence analysis were subjected to sequential limiting dilutions (SLD). The amplification and sequencing pipeline employed a ‘universal’ molecular subtyping strategy in which every sample was amplified with sets of overlapping primers representing all HA and NA subtypes. In this manner, samples without clear prior subtype information, and/or mixed samples, could be accurately analyzed. Despite the SLD procedure, 4 samples were shown by sequence analysis to represent a mixed infection (yielding sequence with more than one HA and/or NA subtype). In addition, 40 samples had mismatches between the initial antigenic subtyping results (determined on first- or second-egg-passage isolates prior to SLD) and the subtype determined by sequence analysis of cDNA (following one SLD of low-egg-passage isolates). This suggests either that minor populations of antigenically distinct viruses in the low-passage isolate outgrew the dominant antigenic population in a foreign host system during the SLD, or that mixed infections in the first-egg-passage stock caused difficulty in initial subtyping and a dominant strain emerged during SLD (see the table of viral isolates at http://www.ncbi.nlm.nih.gov/genomes/FLU/Database/shipment.cgi for the discordant results observed). Thus, up to 44 of 167 (26%) isolates potentially represent mixed infections in the initial cloacal sample. Given the SLD procedure, the true rate of mixed infection, as defined by the presence of >1 HA and/or NA subtype, was likely even higher, although mis-serotyping cannot be ruled out. Sequencing viral genomes directly from primary cloacal material would be the only way to assess the mixed infection frequency in a manner unbiased by culture, but to our knowledge no such studies have yet been attempted.
For a more comprehensive analysis of AIV diversity, the AIV genomes from this study were compared to other AIV genomes available in GenBank. In total, 452 HA sequences and 473 NA sequences, representative of the global diversity of AIV, were used in phylogenetic analyses. For the internal protein genes (PB2, PB1, PA, NP, M, NS), a subset of 407 complete globally-sampled AIV genomes was used to assess the degree of linkage among gene segments. Phylogenetic trees for the HA alignment (Figures 1a and S1) and NA alignment (Figures 1b and S2) are shown here. Phylogenetic trees for the six other gene segments are presented in Figures S3, S4, S5, S6, S7 and S8.
(a) Maximum likelihood tree of the HA gene segment of 452 isolates of avian influenza A virus, including representatives of all 16 subtypes. For clarity, all branches within individual subtypes have been collapsed and color-coded to signify individual subtypes. Bootstrap values above 70% are shown next to relevant branches. Branch lengths are scaled according to the number of nucleotide substitutions per site. See Figure S1 for an expanded form of this tree. (b) Maximum likelihood tree of the NA gene segment of 473 isolates of avian influenza A virus, including representatives of all 9 subtypes. The mix of HA subtypes (color-coded according to Figure 1a) observed within each NA type is shown, highlighting the frequency of reassortment. For clarity, all branches within individual subtypes have been collapsed. Bootstrap values above 65% are shown next to the relevant branches. Branch lengths are scaled according to the number of nucleotide substitutions per site. See Figure S2 for an expanded form of this tree, in which individual viral isolates are marked.
The topology of the HA phylogeny reflects the antigenically defined subtypes, with some higher-order clustering among them (e.g., H1, H2, H5 and H6; H7, H10 and H15; Figures 1a and S1), as seen previously in smaller studies. Although most subtypes are found in numerous avian species and occupy wide global distributions, this phylogenetic structure indicates that HA subtypes did not originate in a single radiation. More striking was the high level of genetic diversity between different subtypes; the average amino acid identity of 120 inter-subtype comparisons of full-length HA was 45.5%. As expected, inter-subtype comparisons of the HA1 domain exhibited more diversity, with an average inter-subtype identity of 38.5%. In contrast, intra-subtype identity is high (averaging >92%), even when comparing sequences from different hemispheres. Hence, the genetic structure of the AIV HA is characterized by highly divergent subtypes that harbor relatively little internal genetic diversity. However, 4 subtype comparisons show considerably less divergence (76–79% identity): H4 vs. H14, H7 vs. H15, H13 vs. H16, and H2 vs. H5, indicating that these pairs separated more recently (Figure 1; see below).
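The identity figures above come from averaging all pairwise comparisons across an amino acid alignment. A minimal sketch of that calculation, using toy aligned fragments rather than the study's actual HA alignment (the sequences and the gap-handling convention here are illustrative assumptions, not the paper's method in detail):

```python
# Sketch of an average pairwise percent-identity calculation.
# The sequences below are toy placeholders, not real HA1 data.
from itertools import combinations

def percent_identity(a: str, b: str) -> float:
    """Percent identity between two aligned, equal-length sequences
    (columns where both sequences have a gap are excluded)."""
    pairs = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(1 for x, y in pairs if x == y and x != "-")
    return 100.0 * matches / len(pairs)

def mean_pairwise_identity(seqs) -> float:
    """Average percent identity over all unordered pairs of sequences."""
    vals = [percent_identity(a, b) for a, b in combinations(seqs, 2)]
    return sum(vals) / len(vals)

# Toy aligned amino acid fragments standing in for one HA subtype
h_like = ["MKTIIALSYI", "MKTIIALSYV", "MKTLIALSYI"]
print(round(mean_pairwise_identity(h_like), 1))  # -> 86.7
```

Repeating the same averaging within a subtype and then between subtypes is what yields the intra- vs. inter-subtype contrast described above.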
A similar phylogenetic structure was seen in the NA (Figures 1b and S2), again with evidence for higher-order clustering (e.g., N6 and N9; N1 and N4). In contrast to the HA, however, levels of genetic divergence among the NA types are more uniform, with the 9 subtypes exhibiting an average inter-subtype identity of 43.6% (and an average intra-subtype identity of >89%) and no clear outliers. Hence, no new (detected) NA types have been created in the recent evolutionary past. This correlates with the more uniform distribution of NA than of HA subtypes in wild bird AIV isolates.
The topology of the NS segment phylogeny was also of note, being characterized by the deep divergence between the A and B alleles, as previously described (Figure S8). Almost every HA and NA subtype of AIV contains both the A and B NS alleles, without evidence of the ‘intermediate’ lineages expected under random genetic drift, strongly suggesting that the two alleles are subject to some form of balancing selection. The NS1 protein has pleiotropic functions during infection in mammalian cells, and plays an important role in down-regulating the type I interferon response. Supporting these results are the elevated ratios of nonsynonymous to synonymous substitutions per site (dN/dS) observed for the NS1 gene in both avian and human influenza viruses, suggesting that the NS1 protein has experienced adaptive evolution in both host types. Whether this selection relates to the role the NS1 protein plays in the type I interferon pathway is currently unclear.
Far less genetic diversity is observed in the 5 remaining AIV gene segments (PB2, PB1, PA, NP, and M - Figures S3, S4, S5, S6 and S7). Indeed, the extent of diversity in these genes is less than that within a single HA or NA subtype, with average pairwise identities ranging from 95–99%. Our phylogenetic analysis also revealed a clear separation of AIV sequences sampled from the Eastern and Western Hemispheres, as previously noted (3,19), indicating that there is relatively little gene flow between overlapping Eastern and Western Hemisphere flyways. However, despite this strong biogeographic split, mixing of hemispheric AIV gene pools clearly occurs at a low level (see below).
Abundant reassortment in AIV
To assess the frequency and pattern of reassortment in AIV, we compared the extent of topological similarity (congruence) among phylogenetic trees of each internal segment. This analysis revealed a remarkably frequent occurrence of reassortment, supporting previous studies on smaller data sets. For example, 5 H4N6 AIV isolates were recovered from mallards sampled at the same location in Ohio on the same morning and in the same trap (Figure 2). For the internal genes, these viruses contained 4 different genome ‘constellations’, with only 1 pair of viruses sharing the same constellation. In the data set as a whole, the large number of different subtype combinations recovered highlights the frequency of reassortment (Figures 1b and S2), and provides little evidence for the elevated fitness of specific HA/NA combinations in AIV isolates from wild birds. That the majority of HA/NA combinations have been recovered also strongly supports a high frequency of reassortment involving these surface protein genes.
The different colors reflect segments whose sequences fall into different major clades – defined by strong bootstrap support (>80%) – in each internal gene segment tree. For example, all 6 internal gene segments from isolates A/Mall/OH/655/2002 and A/Mall/OH/657/2002 have the same, shared phylogenetic position (shaded red), but exhibit a significantly different phylogenetic pattern, indicative of reassortment, with A/Mall/OH/667/2002 in the PB1 and PA gene segments (individual trees presented in Figures S4 and S5). Similarly, isolate A/Mall/OH/668/2002 shows phylogenetic evidence of reassortment in 5 of 6 internal gene segments compared to A/Mall/OH/655/2002.
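The constellation counting behind Figure 2 can be illustrated by treating each isolate as a tuple of per-segment clade assignments; distinct tuples are distinct constellations. The clade labels and the fifth isolate name below are hypothetical stand-ins (the paper's actual clade assignments are in Figure 2), though the pattern mirrors the one described: two isolates sharing a constellation, one differing in PB1 and PA, one in five of six segments.

```python
# Counting distinct internal-gene "genome constellations": each isolate is
# represented by the major clade its sequence falls into for each of the
# six internal segments. Labels here are illustrative, not real data.
SEGMENTS = ["PB2", "PB1", "PA", "NP", "M", "NS"]

isolates = {
    "Mall/OH/655": {"PB2": "A", "PB1": "A", "PA": "A", "NP": "A", "M": "A", "NS": "A"},
    "Mall/OH/657": {"PB2": "A", "PB1": "A", "PA": "A", "NP": "A", "M": "A", "NS": "A"},
    "Mall/OH/667": {"PB2": "A", "PB1": "B", "PA": "B", "NP": "A", "M": "A", "NS": "A"},
    "Mall/OH/668": {"PB2": "B", "PB1": "C", "PA": "C", "NP": "B", "M": "B", "NS": "A"},
    "Mall/OH/671": {"PB2": "A", "PB1": "A", "PA": "B", "NP": "A", "M": "A", "NS": "B"},
}

def constellation(clades: dict) -> tuple:
    """Order the per-segment clade labels into a hashable constellation."""
    return tuple(clades[s] for s in SEGMENTS)

distinct = {constellation(c) for c in isolates.values()}
print(len(distinct))  # -> 4 distinct constellations among 5 isolates
```

With frequent reassortment, the number of distinct constellations approaches the number of isolates, which is exactly the pattern reported for these co-sampled mallard viruses.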
Thus, while there is strong evidence of frequent reassortment between HA and NA, we also sought to assess the extent of reassortment among the less commonly studied internal gene segments. A maximum likelihood test of phylogenetic congruence revealed that, although the topologies of the internal segment trees are more similar to each other than expected by chance (so the segments are not in complete linkage equilibrium, in which case they would be no more similar in topology than two random trees), the difference among them is extensive, indicative of extremely frequent reassortment with little clear linkage among specific segments (Figure 3). Of the 6 internal segments, NS exhibited the least linkage to other genes, falling closest to the random distribution (i.e., it possessed the greatest phylogenetic incongruence). This is compatible with the deep A and B allelic polymorphism in this segment. In contrast, the M segment showed the greatest phylogenetic similarity, albeit slight, to the other segments. Overall, however, the relationships between segments are better described by their dissimilarity than by their congruence.
Each column represents the difference in log likelihood (Δ-lnL) between the ML trees of each gene (shown by colored dots). In every case, the ML tree estimated for the reference gene has the highest likelihood, while lower likelihoods (greater Δ-lnL values) are observed when the ML trees for the other genes are fitted to the sequence data from the reference gene and branch lengths re-optimized. To assess the extent of similarity in topology among genes, 500 random trees were created for each data set and their likelihoods assessed for each gene in turn using the same procedure (indicated by horizontal bars). In every case, and most notably for NS, the trees inferred for each gene have likelihoods closer to the random set than to the ML tree for the reference gene, indicative of extensive incongruence.
Occasional AIV isolates demonstrated hemispheric mixing with reassortment. As reported previously, the majority of such mixing occurs in shorebirds and gulls (with the exception of Eurasian-lineage H6 HA genes distributed widely in North American Anseriformes, as also revealed in this study). Interestingly, no completely Eurasian-lineage AIV genome has been reported in North America, or vice versa. This suggests that birds initially carrying AIV between the hemispheric flyways have not been identified in surveillance efforts. Most mixed isolates possess only one gene segment derived from the other hemisphere, indicating that there is little or no survival advantage for such hemispheric crossovers in the new gene pool. Since Asian-lineage HP H5N1 AIV have been isolated from wild birds in Eurasia, concern has been raised over the importation of the virus into North America via migratory birds. Our analyses suggest that enhanced surveillance in gulls and other shorebirds may be warranted, and that, given frequent reassortment (see below), entire Asian HP H5N1 AIV genome constellations may not be detected in these surveys.
Overall, 25 of 407 (6%) AIV genomes show evidence of hemispheric mixing, with the phylogenies suggesting a general pattern of viral gene flow from Eurasia to North America: 5 North American isolates possessed two Eurasian-lineage internal gene segments, and 20 carried a single segment. North American isolates possessing a Eurasian-lineage M segment were the most common, seen in 18 isolates (Figure S7), followed by 8 with a Eurasian PB2 segment (Figure S3), 4 with a Eurasian PB1 segment (Figure S4), and 1 with a Eurasian PA segment (Figure S5). The 18 Eurasian M segments and the 8 Eurasian PB2 segments each form monophyletic groups, suggesting single introductions to North America. In each case, sequences from domestic ducks in China and turkeys in Europe were the closest relatives. It is therefore theoretically possible that some of these introductions were derived from imported poultry rather than migratory birds. In contrast, 3 of the 4 Eurasian PB1 segments and the single Eurasian PA segment in North American AIV contained genes whose closest relatives were found in viruses from red-necked stints from Australia. These small waders are widely migratory, with a range from Siberia to Australasia, occasionally reaching Europe and North America. Interestingly, 23 of 25 such mixed genomes were observed in shorebirds along the U.S. Atlantic coast. Unfortunately, no complete AIV genomes are available from shorebirds on the U.S. Pacific coast for comparison.
The Evolutionary Genetics of AIV
In theory, two evolutionary models can explain the global pattern of AIV diversity, analogous to the allopatric and sympatric models of speciation. Under the allopatric model, the HA and NA subtypes correspond to viral lineages that became geographically isolated, resulting in a gradual accumulation of amino acid changes among them. Because of physical separation through geographical divergence, there is no requirement for natural selection to reinforce the partition of HA and NA diversity into discrete subtypes by preferentially favoring mutations at antigenic sites. In contrast, under the sympatric model, the discrete HA and NA subtypes originate within the same spatial population, such that natural selection must have reinforced speciation; subtypes that were too antigenically similar would be selected against because of cross-protective immune responses. Therefore, mutations would accumulate first at key antigenic sites, allowing subtypes to quickly diversify in the absence of herd immunity.
The AIV genomic data available here suggest a complex interplay of evolutionary processes. That discrete HA and NA subtypes, as well as the 2 divergent NS alleles, are maintained in the face of frequent reassortment strongly suggests that each represents a peak on a fitness landscape shaped by cross-immunity (Figure 4a). Under this hypothesis, ‘intermediate’ HA/NA/NS alleles would be selected against because they generate more widespread herd immunity, corresponding to fitness valleys. Indeed, it is the likely lack of immunological cross-protection at the subtype level that allows the frequent mixed infections described here (although mixed infections may also occur in young, immunologically naïve birds). Further, in most cases these divergent HA, NA and NS alleles circulate in the same bird species in the same geographical regions, compatible with their divergence under sympatry. In addition, 3 of the most closely related pairs of HA subtypes contain one HA that is rarely isolated, geographically limited, or restricted by host species, implying that their dispersion is inhibited by existing immunity; H14 has been isolated only rarely, in southern Russia; H15 only in Australia; and H16 has been described only in gulls. The possible exception is H2–H5, where both subtypes have been isolated from a variety of bird species in a global distribution. Although these may represent more recent occurrences of allopatric speciation, antigenic cross-reactivity between the H2–H5, H7–H15, and H4–H14 pairs was recently demonstrated, again compatible with the sympatric model. Further support for possible cross-immunity between these subtypes would require experimental challenge studies.
(a) The fitness landscapes observed in HA, NA and NS, represented here by NA. Each colored cone represents an individual subtype. These subtypes are connected by a bifurcating tree. The lack of ‘intermediate’ subtypes – those falling below the pink disc – reflects major valleys in fitness, such that any virus falling in this area will experience a major reduction in fitness, most likely due to an elevated cross-protective immune response. Occasionally, individual subtypes jump species barriers and spread in new hosts (such as humans), where they experience a continued selection pressure and hence accumulate amino acid substitutions in a progressive manner, as shown. (b) The fitness landscapes observed in the remaining internal protein segments of avian influenza virus – PB2, PB1, PA, NP and M (represented by different colors). In this case, there is little functional difference among the genetic variants of each segment, so the fitness landscape is flat. This equivalence in fitness among genome constellations also means that reassortment is frequent among them (as reassortants suffer no fitness cost), represented by the horizontal lines connecting each internal gene segment.
In contrast to the extensive genetic diversity seen in HA, NA and NS, the 5 remaining internal gene segments encode proteins that are highly conserved at the amino acid level, indicating that they are subject to widespread purifying selection. The fitness landscape for these genes is therefore not determined by cross-immunity, but by functional viability, with less selective pressure to fix advantageous mutations (Figure 4b). Further, given such strong conservation of amino acid sequence, large-scale reassortment is permitted as it will normally involve the exchange of functionally equivalent segments, with little impact on overall fitness. These data also suggest that the cross-immunity provided by these proteins is minimal.
Together, these global genomic data provide new insight into the different evolutionary dynamics exhibited by influenza A viruses in their natural wild bird hosts and in those viruses stably adapted to novel species (e.g., domestic gallinaceous poultry, horses, swine, and humans). Based on these analyses, we hypothesize that AIV in wild birds exists as a large pool of functionally equivalent, and so often inter-changeable, gene segments that form transient genome constellations, without the strong selective pressure to be maintained as linked genomes. Rather than favoring successive changes in single subtypes, geographic and ecologic partitioning within birds, particularly within the different flyways, coupled with complex patterns of herd immunity, has resulted in an intricate fitness landscape comprising multiple fitness peaks of HA, NA and NS alleles, interspersed by valleys of low fitness which prevent the generation of intermediate forms (Figure 4a).
In contrast, stable host switching involves the acquisition of a number of (as yet) poorly characterized mutations that serve to separate an individual, clonally derived influenza virus strain from the large wild bird AIV gene pool. Because adaptation to a new host likely limits the ability of these viruses to return to the wild bird AIV gene pool, these emergent viruses must evolve as distinct eight-segment genome configurations within the new host. The ability of recent HP H5N1 AIV to cause spillover infections in wild birds is an unprecedented exception. Further, because humans represent a large and spatially mixed population, natural selection is able to act efficiently on individual subtypes. Hence, a limited number of subtypes circulate within humans and evolve by antigenic drift to escape population immunity.
Notably, the recent Asian-lineage HP H5N1 AIV strains are intermediate between these two contrasting influenza ecobiologies: a combination of large poultry populations allows natural selection to effectively drive rapid antigenic and genetic change within a single subtype, while reassortment with the wild bird AIV gene pool facilitates the generation of new genome constellations. Similar patterns have also been observed with the widely circulating H9N2 and H6N1 viruses in gallinaceous poultry in Eurasia. Previous analyses have also shown that recent HP H5N1 viruses had the highest evolutionary rates and selection pressures (dN/dS ratios) compared to other AIV lineages. Consequently, these results underscore the importance of determining the mechanistic basis of how H5N1 has spread so successfully among a diverse range of both domestic and wild bird species.
Materials and Methods
Sample collection and virus isolation
The genomes of 167 influenza A virus isolates recovered from 14 species of wild Anseriformes located in four U.S. states (Alaska, Maryland, Missouri, Ohio) were sequenced for this study; viral isolates consisted of 29 hemagglutinin (HA) and neuraminidase (NA) combinations, including H1N1, H1N6, H1N9, H2N1, H3N1, H3N2, H3N6, H3N8, H4N2, H4N6, H4N8, H5N2, H6N1, H6N2, H6N5, H6N6, H6N8, H7N3, H7N8, H8N4, H10N7, H10N8, H11N1, H11N2, H11N3, H11N6, H11N8, H11N9, and H12N5. Cloacal swabs were collected as previously described from 1986–2005 as part of The Ohio State University's ongoing influenza A virus surveillance activities and in collaboration with many researchers in other states since 2001. A table listing the details of each isolate is available from the Influenza Virus Resource page (http://www.ncbi.nlm.nih.gov/genomes/FLU/Database/shipment.cgi). Avian influenza viruses were originally isolated using standard viral isolation procedures after 1–2 passages in 10-day-old embryonated chicken eggs (ECEs). Type A influenza virus was confirmed using commercially available diagnostic assays (Directigen Flu A Assay, Becton Dickinson Microbiology Systems, Cockeysville, MD) and isolates were subtyped at the National Veterinary Services Laboratories (NVSL), Animal and Plant Health Inspection Service, United States Department of Agriculture, Ames, Iowa, using standard hemagglutinin inhibition and neuraminidase inhibition testing procedures.
Sequential Limiting Dilutions
Isolates for this investigation were generally selected from the viral archives based on antigenic diversity, clustering of recoveries, no evidence of antigenically mixed subtypes, and distribution over time. First- or second-egg-passage isolates in chorioallantoic fluid (CAF) were rapidly thawed from −80°C to room temperature, vortexed for 30 seconds and centrifuged at 1500 rpm for 10 minutes. Approximately 0.5 ml of CAF was drawn from the vial using a 26-gauge needle and subsequently passed through a 25 mm, 0.2 µm filter. Following filtration, a 10−1 CAF stock dilution was obtained by adding 0.2 ml filtered CAF to 1.8 ml Brain Heart Infusion Broth containing penicillin and streptomycin and vortexed for 30 seconds. Serial dilutions (10−6 maximum) were performed and 0.1 ml of each dilution was inoculated into each of four 10-day-old ECEs. After approximately 48 hours of incubation at 35°C/60% humidity, the inoculated eggs were chilled overnight and CAF was harvested from each egg and tested for hemagglutinating activity. The CAF from the last dilution positive for hemagglutinating activity was tested for the presence of type A influenza virus using the Directigen Flu A or Synbiotics Flu Detect Antigen Capture Test Strips™ (Synbiotics Corp., San Diego, CA). Hemagglutination titer assays were performed and CAF aliquots from the most dilute influenza A positive samples were stored at −80°C. If no endpoint titer was determined, the 10−6 CAF dilution was stored at −80°C and the procedure repeated utilizing 10−4 to 10−9 sequential dilutions.
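The dilution bookkeeping in this protocol can be sketched as follows. The function names and the boolean encoding of hemagglutination results are illustrative assumptions, not part of the laboratory procedure itself.

```python
def dilution_series(stock_exponent=-1, max_exponent=-6):
    """Exponents of the ten-fold dilution series, from the 10^-1 CAF stock
    dilution down to the most dilute tube prepared (10^-6 by default)."""
    return list(range(stock_exponent, max_exponent - 1, -1))

def endpoint_dilution(ha_positive, stock_exponent=-1, max_exponent=-6):
    """Return the exponent of the most dilute HA-positive dilution, or None
    if even the last tube was positive (no endpoint reached; the protocol
    is then repeated over a more dilute range, e.g. 10^-4 to 10^-9)."""
    exponents = dilution_series(stock_exponent, max_exponent)
    last_positive = None
    for exp, positive in zip(exponents, ha_positive):
        if positive:
            last_positive = exp
    if last_positive == exponents[-1]:
        return None  # all tubes positive: endpoint not determined
    return last_positive
```

For example, a series positive through the 10^-4 tube yields an endpoint exponent of -4, and that tube's CAF would be the aliquot stored at -80°C.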
Preliminary molecular testing
Viral RNA was isolated from allantoic fluid using Trizol® Reagent (Invitrogen Corp., Carlsbad, CA) and transcribed into 20 µl of cDNA for a subset of samples . Segment-specific universal primers designed to amplify partial and/or full-segments were initially used in RT-PCR assays to assess vRNA quality and RT-PCR primer specificity and sensitivity. Additionally, M13 sequencing tags (F primer: GTAAAACGACGGCCAG; R primer: CAGGAAACAGCTATGAC) were added to each primer set for ease of sequencing RT-PCR products in both forward and reverse directions.
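Adding the M13 tags to a segment-specific primer pair is a simple string operation, sketched below; the example primer sequences are placeholders for illustration, not the study's actual RT-PCR primers.

```python
# M13 sequencing tags as given in the text
M13_F = "GTAAAACGACGGCCAG"
M13_R = "CAGGAAACAGCTATGAC"

def tag_primer_pair(forward, reverse):
    """Prepend the M13 tags to the 5' ends of a segment-specific primer pair
    so RT-PCR products can be sequenced in both directions with universal
    M13 primers."""
    return M13_F + forward, M13_R + reverse

# Placeholder segment-specific primer sequences, for illustration only
fwd, rev = tag_primer_pair("AGCAAAAGCAGG", "AGTAGAAACAAGG")
```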
For initiation of a high-throughput sequencing pipeline, a universal strategy for primer design was employed to ensure detection of multiple viral infections within a single sample. Primers were designed to semi-conserved areas of the six internal segments. For the segments encoding the external proteins, primers were designed from alignments of subsets of the 16 HA and 9 NA avian subtypes. Alignments were generated with MUSCLE and visualized with BioEdit . An M13 sequence tag was added to the 5′ end of each primer to be used for sequencing. Four sequencing reactions per run were analyzed on an agarose gel for quality control purposes. The sequence success rate of each primer pair was analyzed relative to the HA and NA subtype. Primers that did not perform well were altered or replaced. All primers and RT-PCR assay cycling conditions are available upon request.
cDNA Synthesis and Sequencing
Influenza A virus isolates were amplified with the OneStep RT-PCR kit (Qiagen, Inc., Valencia, CA). Amplicons were sequenced in both the forward and reverse directions. Each amplicon was sequenced from each end using M13 primers (F primer: TGTAAAACGACGGCCAGT; R primer: CAGGAAACAGCTATGACC). Sequencing reactions were performed using Big Dye Terminator chemistry (Applied Biosystems, Foster City, CA) with 2 µl of template cDNA. Additional RT-PCR and sequencing was performed to close gaps and to increase coverage in low coverage or ambiguous regions. Sequencing reactions were analyzed on a 3730 ABI sequencer and sequences were assembled in a software pipeline developed specifically for this project.
Sequence trimming and assembly
Once genomic sequence was obtained for an individual sample, reads for each segment were downloaded, trimmed to remove amplicon primer-linker sequence and low-quality sequence, and assembled. A small genome assembly suite called Elvira (http://elvira.sourceforge.net/), based on the open-source Minimus assembler, was developed to automate these tasks. The Elvira software flags exceptions, including failed reads, failed amplicons, insufficient coverage relative to a reference sequence (obtained from GenBank), ambiguous consensus sequence calls, and low-coverage areas. The avian influenza A sequences (with GenBank accession numbers) produced from this ongoing study are available at http://www.ncbi.nlm.nih.gov/genomes/FLU/Database/shipment.cgi. The first 167 avian influenza genomes from this collection were submitted to GenBank and included in this study.
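The read-trimming step automated by Elvira can be sketched as below; this is an illustrative simplification (linker removal plus a trailing quality trim), not the Elvira implementation itself.

```python
def trim_read(seq, quals, linker, min_q=20):
    """Strip a 5' primer-linker if present, then trim trailing bases whose
    Phred quality falls below min_q. A simplified stand-in for Elvira's
    trimming of primer-linker and low-quality sequence."""
    if seq.startswith(linker):
        seq, quals = seq[len(linker):], quals[len(linker):]
    end = len(seq)
    while end > 0 and quals[end - 1] < min_q:
        end -= 1
    return seq[:end]
```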
The avian influenza virus genomes newly determined here were combined with those already available on GenBank, particularly from recent large-scale surveys of viral biodiversity. Sequences from viruses isolated before 1970, which may have been subjected to extensive laboratory passage, were excluded, as were the large numbers of H5N1 sequences collected in recent years (a sample of H5N1 genomes, 1997–2005, was included for analysis). In total, 452 HA sequences and 473 NA sequences were used in analyses. For the internal protein-encoding segments (PB2, PB1, PA, NP, M, NS), a total of 407 genomes were analyzed (by considering a common data set we were able to investigate patterns of segment linkage, see below). For each data set, sequence alignments of the coding regions were created using MUSCLE and adjusted manually using Se-Al according to their amino acid sequence. In the case of HA and NA, some regions of the inter-subtype sequence alignment were so divergent that they could not be aligned with certainty (the HA signal peptide and cleavage site insertions in HPAI H5 or H7, and variable small stalk deletions in NA). Because of their potential to generate phylogenetic error, these small regions of ambiguity were deleted. This resulted in the following sequence alignments used for evolutionary analysis: PB2 = 2277 nt; PB1 = 2271 nt; PA = 2148 nt; HA = 1683 nt; NP = 1494 nt; NA = 1257 nt; M = 979 nt; NS = 835 nt. All sequence alignments are available from the authors on request. Nucleotide and amino acid identity was calculated using Megalign (Lasergene 7.2, DNAStar, Madison, WI).
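The deletion of ambiguously aligned regions can be sketched as a column-mask operation applied uniformly across the alignment; the region coordinates in the example are arbitrary, not the actual HA or NA coordinates.

```python
def delete_regions(alignment, regions):
    """Remove the given 0-based, half-open [start, end) column ranges from
    every sequence (e.g. an ambiguously aligned stalk or cleavage-site
    region), keeping the alignment rectangular."""
    length = len(next(iter(alignment.values())))
    keep = [True] * length
    for start, end in regions:
        for i in range(start, end):
            keep[i] = False
    return {name: "".join(c for c, k in zip(seq, keep) if k)
            for name, seq in alignment.items()}
```

Applying the same mask to all sequences ensures the trimmed columns are removed consistently, so downstream phylogenetic analysis never sees the ambiguous region.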
Using these alignments, maximum likelihood (ML) trees were inferred using PAUP*, based on the best-fit models of nucleotide substitution determined by MODELTEST. In most cases, the preferred model of nucleotide substitution was GTR+I+Γ4, or a close relative. For each of these trees, the reliability of all phylogenetic groupings was determined through a bootstrap resampling analysis (1000 pseudo-replicates of neighbor-joining trees estimated under the ML substitution model).
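Bootstrap resampling of alignment columns, as used to generate the pseudo-replicates, can be sketched as follows; each resampled alignment would then be used to re-estimate a tree (the tree estimation itself was done in PAUP* and is not shown here).

```python
import random

def bootstrap_columns(alignment, seed=0):
    """One bootstrap pseudo-replicate: sample alignment columns with
    replacement, applying the same column choices to every sequence so
    that site patterns are preserved across taxa."""
    rng = random.Random(seed)
    length = len(next(iter(alignment.values())))
    cols = [rng.randrange(length) for _ in range(length)]
    return {name: "".join(seq[i] for i in cols)
            for name, seq in alignment.items()}
```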
We employed a maximum likelihood method to assess the extent of phylogenetic congruence, indicative of reassortment. To reduce any bias in phylogenetic structure caused by geographic segregation, only isolates from North American flyways were used in analyses of the internal gene segments. Briefly, ML trees for each internal gene segment were estimated as described above. Next, the log likelihood (-lnL) of each of the ML trees was estimated on each gene segment data set in turn, optimizing branch lengths under the ML substitution model in every case. The topological similarity between each gene segment tree on each data set was then determined by comparing the difference in likelihood among them (Δ-lnL). Clearly, the greater the similarity in topology (congruence) among the trees for each segment, the closer their likelihood scores and hence the more likely they are to be linked. To put the distribution of Δ-lnL values in context, we constructed 500 random trees for each data set and optimized their branch lengths in the same manner. If any of the Δ-lnL values among the ML trees falls within the random distribution, then we can conclude that the gene segments in question are in complete linkage equilibrium. All these analyses were conducted using the PAUP* package.
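The decision rule of this congruence test can be illustrated with hypothetical likelihood values; the actual likelihood optimization was performed in PAUP*, so the sketch below only shows the Δ-lnL comparison against the random-tree null distribution.

```python
def congruence_test(ml_lnL, alt_tree_lnLs, random_tree_lnLs):
    """For one reference gene: compare the Delta-lnL of each alternative
    gene's ML topology (fitted to the reference gene's data, branch lengths
    re-optimized) with the Delta-lnL of the best random tree. A Delta-lnL
    within the random distribution implies complete linkage equilibrium
    with the reference gene."""
    delta_random_min = ml_lnL - max(random_tree_lnLs)  # best random tree
    return {gene: (ml_lnL - lnL) >= delta_random_min
            for gene, lnL in alt_tree_lnLs.items()}

# Hypothetical values (log likelihoods are negative; the ML tree scores best)
result = congruence_test(
    ml_lnL=-10000.0,
    alt_tree_lnLs={"PB1": -10100.0, "NS": -10470.0},
    random_tree_lnLs=[-10500.0, -10460.0, -10480.0],
)
```

Here the hypothetical NS topology scores within the random range (no detectable linkage to the reference gene), while PB1 does not, mirroring the qualitative pattern reported for NS in Figure 3.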
Maximum likelihood tree of the HA gene of 452 isolates of avian influenza A virus, including representatives of all 16 subtypes. Sequences are color-coded according to HA subtype (see Figure 1). Internal branches are color-coded to reflect the flyway from which the viruses were sampled; North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to the relevant branches. Branch lengths are scaled according to the number of nucleotide substitutions per site.
(1.16 MB EPS)
Maximum likelihood tree of the NA gene of 473 isolates of avian influenza A virus, including representatives of all 9 subtypes. Sequences are color-coded according to HA subtype (see Figure 1), with the mix of colors highlighting the frequency of reassortment. Internal branches are color-coded to reflect the flyway from which the viruses were sampled; North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to the relevant branches. Branch lengths are scaled according to the number of nucleotide substitutions per site.
(1.02 MB EPS)
Maximum likelihood tree of the PB2 gene of avian influenza A viruses. Sequences are color-coded according to HA subtype. Internal branches are color-coded to reflect the flyway from which the viruses were sampled: North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to relevant branches.
(0.84 MB EPS)
Maximum likelihood tree of the PB1 gene of avian influenza A viruses. Sequences are color-coded according to HA subtype. Internal branches are color-coded to reflect the flyway from which the viruses were sampled: North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to relevant branches.
(0.84 MB EPS)
Maximum likelihood tree of the PA gene of avian influenza A viruses. Sequences are color-coded according to HA subtype. Internal branches are color-coded to reflect the flyway from which the viruses were sampled: North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to relevant branches.
(0.83 MB EPS)
Maximum likelihood tree of the NP gene of avian influenza A viruses. Sequences are color-coded according to HA subtype. Internal branches are color-coded to reflect the flyway from which the viruses were sampled: North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to relevant branches.
(0.79 MB EPS)
Maximum likelihood tree of the M genes of avian influenza A viruses. Sequences are color-coded according to HA subtype. Internal branches are color-coded to reflect the flyway from which the viruses were sampled: North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to relevant branches.
(0.79 MB EPS)
Maximum likelihood tree of the NS genes of avian influenza A viruses. Sequences are color-coded according to HA subtype. Internal branches are color-coded to reflect the flyway from which the viruses were sampled: North American flyway in red, Eurasian flyway in blue. Bootstrap values above 70% are shown next to relevant branches.
(0.83 MB EPS)
Sequencing results for 167 complete genomes of 29 subtypes of avian influenza A viruses.
(0.06 MB DOC)
Over the last 22 years, the virus repository used in this study received financial support from the USDA CSREES NRI, US Poultry and Egg Association, and the USDA ARS. Also, many individuals associated with the Ohio and Maryland Departments of Natural Resources, the Missouri Avian Influenza Task Force, and several Ohio and Maryland shooting clubs provided expertise, time, and physical support for this project.
Conceived and designed the experiments: J. Taubenberger. Performed the experiments: V. Dugan, R. Chen, D. Spiro, N. Sengamalay, J. Zaborsky, E. Ghedin, J. Nolting, D. Senne, R. Wang, R. Slemons, J. Taubenberger. Analyzed the data: V. Dugan, R. Chen, D. Senne, R. Slemons, E. Holmes, J. Taubenberger. Contributed reagents/materials/analysis tools: D. Swayne, J. Runstadler, G. Happ, R. Slemons. Wrote the paper: V. Dugan, R. Chen, R. Slemons, E. Holmes, J. Taubenberger.
- 1. Easterday BC, Trainer DO, Tumova B, Pereira HG (1968) Evidence of infection with influenza viruses in migratory waterfowl. Nature 219: 523–524.
- 2. Slemons RD, Johnson DC, Osborn JS, Hayes F (1974) Type-A influenza viruses isolated from wild free-flying ducks in California. Avian Dis 18: 119–124.
- 3. Webster RG, Bean WJ, Gorman OT, Chambers TM, Kawaoka Y (1992) Evolution and ecology of influenza A viruses. Microbiol Rev 56: 152–179.
- 4. Krauss S, Walker D, Pryor SP, Niles L, Chenghong L, et al. (2004) Influenza A viruses of migrating wild aquatic birds in North America. Vector Borne Zoonotic Dis 4: 177–189.
- 5. Spackman E, Stallknecht DE, Slemons RD, Winker K, Suarez DL, et al. (2005) Phylogenetic analyses of type A influenza genes in natural reservoir species in North America reveals genetic variation. Virus Res 114: 89–100.
- 6. Munster VJ, Veen J, Olsen B, Vogel R, Osterhaus AD, et al. (2006) Towards improved influenza A virus surveillance in migrating birds. Vaccine 24: 6729–6733.
- 7. Olsen B, Munster VJ, Wallensten A, Waldenstrom J, Osterhaus AD, et al. (2006) Global patterns of influenza a virus in wild birds. Science 312: 384–388.
- 8. Munster VJ, Baas C, Lexmond P, Waldenstrom J, Wallensten A, et al. (2007) Spatial, temporal, and species variation in prevalence of influenza A viruses in wild migratory birds. PLoS Pathog 3: e61.
- 9. Krauss S, Obert CA, Franks J, Walker D, Jones K, et al. (2007) Influenza in migratory birds and evidence of limited intercontinental virus exchange. PLoS Pathog 3: e167.
- 10. Runstadler JA, Happ GM, Slemons RD, Sheng ZM, Gundlach N, et al. (2007) Using RRT-PCR analysis and virus isolation to determine the prevalence of avian influenza virus infections in ducks at Minto Flats State Game Refuge, Alaska, during August 2005. Arch Virol.
- 11. Webby RJ, Webster RG, Richt JA (2007) Influenza viruses in animal wildlife populations. Curr Top Microbiol Immunol 315: 67–83.
- 12. Stallknecht DE, Shane SM (1988) Host range of avian influenza virus in free-living birds. Vet Res Commun 12: 125–141.
- 13. Hanson BA, Stallknecht DE, Swayne DE, Lewis LA, Senne DA (2003) Avian influenza viruses in Minnesota ducks during 1998–2000. Avian Dis 47: 867–871.
- 14. Air GM (1981) Sequence relationships among the hemagglutinin genes of 12 subtypes of influenza A virus. Proc Natl Acad Sci U S A 78: 7639–7643.
- 15. Alexander DJ (2006) An overview of the epidemiology of avian influenza. Vaccine.
- 16. Scholtissek C, Rohde W, Von Hoyningen V, Rott R (1978) On the origin of the human influenza virus subtypes H2N2 and H3N2. Virology 87: 13–20.
- 17. Webster RG, Shortridge KF, Kawaoka Y (1997) Influenza: interspecies transmission and emergence of new pandemics. FEMS Immunol Med Microbiol 18: 275–279.
- 18. Alexander DJ, Brown IH (2000) Recent zoonoses caused by influenza A viruses. Rev Sci Tech 19: 197–225.
- 19. Alexander DJ (2000) A review of avian influenza in different bird species. Vet Microbiol 74: 3–13.
- 20. Reid AH, Taubenberger JK, Fanning TG (2004) Evidence of an absence: the genetic origins of the 1918 pandemic influenza virus. Nat Rev Microbiol 2: 909–914.
- 21. Alexander DJ (2006) Avian influenza viruses and human health. Dev Biol (Basel) 124: 77–84.
- 22. Boon AC, Sandbulte MR, Seiler P, Webby RJ, Songserm T, et al. (2007) Role of terrestrial wild birds in ecology of influenza A virus (H5N1). Emerg Infect Dis 13: 1720–1724.
- 23. Peiris JS, de Jong MD, Guan Y (2007) Avian influenza virus (H5N1): a threat to human health. Clin Microbiol Rev 20: 243–267.
- 24. Swayne DE (2007) Understanding the complex pathobiology of high pathogenicity avian influenza viruses in birds. Avian Dis 51: 242–249.
- 25. Swayne DE, Suarez DL (2000) Highly pathogenic avian influenza. Rev Sci Tech 19: 463–482.
- 26. Taubenberger JK, Morens DM, Fauci AS (2007) The next influenza pandemic: can it be predicted? JAMA 297: 2025–2027.
- 27. WHO (2008) Cumulative Number of Confirmed Human Cases of Avian Influenza A/(H5N1) Reported to WHO. Geneva.
- 28. Liu J, Xiao H, Lei F, Zhu Q, Qin K, et al. (2005) Highly pathogenic H5N1 influenza virus infection in migratory birds. Science 309: 1206.
- 29. Chen H, Li Y, Li Z, Shi J, Shinya K, et al. (2006) Properties and dissemination of H5N1 viruses isolated during an influenza outbreak in migratory waterfowl in western China. J Virol 80: 5976–5983.
- 30. Flint PL (2007) Applying the scientific method when assessing the influence of migratory birds on the dispersal of H5N1. Virol J 4: 132.
- 31. Needham H (2007) H5N1 in wild and domestic birds in Europe - remaining vigilant in response to an ongoing public health threat. Euro Surveill 12: E071206 071201.
- 32. Kawaoka Y, Krauss S, Webster RG (1989) Avian-to-human transmission of the PB1 gene of influenza A viruses in the 1957 and 1968 pandemics. J Virol 63: 4603–4608.
- 33. Taubenberger JK, Reid AH, Lourens RM, Wang R, Jin G, et al. (2005) Characterization of the 1918 influenza virus polymerase genes. Nature 437: 889–893.
- 34. Jonassen CM, Handeland K (2007) Avian influenza virus screening in wild waterfowl in Norway, 2005. Avian Dis 51: 425–428.
- 35. Wallensten A, Munster VJ, Latorre-Margalef N, Brytting M, Elmberg J, et al. (2007) Surveillance of influenza A virus in migratory waterfowl in northern Europe. Emerg Infect Dis 13: 404–411.
- 36. Widjaja L, Krauss SL, Webby RJ, Xie T, Webster RG (2004) Matrix gene of influenza a viruses isolated from wild aquatic birds: ecology and emergence of influenza a viruses. J Virol 78: 8771–8779.
- 37. Macken CA, Webby RJ, Bruno WJ (2006) Genotype turnover by reassortment of replication complex genes from avian influenza A virus. J Gen Virol 87: 2803–2815.
- 38. Obenauer JC, Denson J, Mehta PK, Su X, Mukatira S, et al. (2006) Large-scale sequence analysis of avian influenza isolates. Science 311: 1576–1580.
- 39. Slemons RD, Hansen WR, Converse KA, Senne DA (2003) Type A influenza virus surveillance in free-flying, nonmigratory ducks residing on the eastern shore of Maryland. Avian Dis 47: 1107–1110.
- 40. Sharp GB, Kawaoka Y, Jones DJ, Bean WJ, Pryor SP, et al. (1997) Coinfection of wild ducks by influenza A viruses: distribution patterns and biological significance. J Virol 71: 6128–6135.
- 41. Wang R, Soll L, Dugan V, Runstadler JA, Happ GM, et al. (2008) Examining the hemagglutinin subtype diversity among wild duck-origin influenza A viruses using ethanol-fixed cloacal swabs and a novel RT-PCR method. Virology, in press. E-pub February 27, 2008.
- 42. Fouchier RA, Munster V, Wallensten A, Bestebroer TM, Herfst S, et al. (2005) Characterization of a novel influenza A virus hemagglutinin subtype (H16) obtained from black-headed gulls. J Virol 79: 2814–2822.
- 43. Kaleta EF, Hergarten G, Yilmaz A (2005) Avian influenza A viruses in birds –an ecological, ornithological and virological view. Dtsch Tierarztl Wochenschr 112: 448–456.
- 44. Treanor JJ, Snyder MH, London WT, Murphy BR (1989) The B allele of the NS gene of avian influenza viruses, but not the A allele, attenuates a human influenza A virus for squirrel monkeys. Virology 171: 1–9.
- 45. Krug RM, Yuan W, Noah DL, Latham AG (2003) Intracellular warfare between human influenza viruses and human cells: the roles of the viral NS1 protein. Virology 309: 181–189.
- 46. Chen R, Holmes EC (2006) Avian influenza virus exhibits rapid evolutionary dynamics. Mol Biol Evol 23: 2336–2341.
- 47. Hatchette TF, Walker D, Johnson C, Baker A, Pryor SP, et al. (2004) Influenza A viruses in feral Canadian ducks: extensive reassortment in nature. J Gen Virol 85: 2327–2337.
- 48. Holmes EC, Urwin R, Maiden MC (1999) The influence of recombination on the population structure and evolution of the human pathogen Neisseria meningitidis. Mol Biol Evol 16: 741–749.
- 49. Li C, Yu K, Tian G, Yu D, Liu L, et al. (2005) Evolution of H9N2 influenza viruses from domestic poultry in Mainland China. Virology 340: 70–83.
- 50. Bragstad K, Jorgensen PH, Handberg K, Hammer AS, Kabell S, et al. (2007) First introduction of highly pathogenic H5N1 avian influenza A viruses in wild and domestic birds in Denmark, Northern Europe. Virol J 4: 43.
- 51. Lee CW, Senne DA, Suarez DL (2006) Development and application of reference antisera against 15 hemagglutinin subtypes of influenza virus by DNA vaccination of chickens. Clin Vaccine Immunol 13: 395–402.
- 52. Subbarao EK, London W, Murphy BR (1993) A single amino acid in the PB2 gene of influenza A virus is a determinant of host range. J Virol 67: 1761–1764.
- 53. Sorrell EM, Perez DR (2007) Adaptation of influenza A/Mallard/Potsdam/178-4/83 H2N2 virus in Japanese quail leads to infection and transmission in chickens. Avian Dis 51: 264–268.
- 54. Campitelli L, Mogavero E, De Marco MA, Delogu M, Puzelli S, et al. (2004) Interspecies transmission of an H7N3 influenza virus from wild birds to intensively reared domestic poultry in Italy. Virology 323: 24–36.
- 55. Ferguson NM, Galvani AP, Bush RM (2003) Ecological and immunological determinants of influenza evolution. Nature 422: 428–433.
- 56. Webster RG, Govorkova EA (2006) H5N1 influenza–continuing evolution and spread. N Engl J Med 355: 2174–2177.
- 57. Guan Y, Peiris JS, Lipatov AS, Ellis TM, Dyrting KC, et al. (2002) Emergence of multiple genotypes of H5N1 avian influenza viruses in Hong Kong SAR. Proc Natl Acad Sci U S A 99: 8950–8955.
- 58. Chen H, Smith GJ, Li KS, Wang J, Fan XH, et al. (2006) Establishment of multiple sublineages of H5N1 influenza virus in Asia: implications for pandemic control. Proc Natl Acad Sci U S A 103: 2845–2850.
- 59. Smith GJ, Naipospos TS, Nguyen TD, de Jong MD, Vijaykrishna D, et al. (2006) Evolution and adaptation of H5N1 influenza virus in avian and human hosts in Indonesia and Vietnam. Virology 350: 258–268.
- 60. Cheung CL, Vijaykrishna D, Smith GJ, Fan XH, Zhang JX, et al. (2007) Establishment of influenza A virus (H6N1) in minor poultry species in southern China. J Virol 81: 10402–10412.
- 61. Xu KM, Li KS, Smith GJ, Li JW, Tai H, et al. (2007) Evolution and Molecular Epidemiology of H9N2 Influenza A Viruses from Quail in Southern China, 2000 to 2005. J Virol 81: 2635–2645.
- 62. Beard CW, Hitchner SB, Domermuth C, Purchase HG, Williams JE (1980) Avian Influenza. College Station, Texas: American Association of Avian Pathologists.
- 63. Hoffmann E, Stech J, Guan Y, Webster RG, Perez DR (2001) Universal primer set for the full-length amplification of all influenza A viruses. Arch Virol 146: 2275–2289.
- 64. Edgar RC (2004) MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res 32: 1792–1797.
- 65. Hall TA (1999) BioEdit: a user friendly biological sequence alignment editor and analysis program for Windows 95/98/NT. Nucl Acids Symp Ser 41: 95–98.
- 66. Rambaut A, Grassly NC, Nee S, Harvey PH (1996) Bi-De: an application for simulating phylogenetic processes. Comput Appl Biosci 12: 469–471.
- 67. Swofford DL (2003) PAUP*. Phylogenetic Analysis Using Parsimony (*and other methods). Version 4. ed. Sunderland, MA: Sinauer Associates.
- 68. Posada D, Crandall KA (1998) MODELTEST: testing the model of DNA substitution. Bioinformatics 14: 817–818.
|
<urn:uuid:1fb0f954-60bc-4d77-9376-d0cc58a7ffb1>
|
CC-MAIN-2016-26
|
http://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1000076
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00181-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.88904
| 12,497
| 3.203125
| 3
|
Right Now | A Work in Progress
The Teen Brain
Your teenage daughter gets top marks in school, captains the debate team, and volunteers at a shelter for homeless people. But while driving the family car, she text-messages her best friend and rear-ends another vehicle.
How can teens be so clever, accomplished, and responsible—and reckless at the same time? Easily, according to two physicians at Children’s Hospital Boston and Harvard Medical School (HMS) who have been exploring the unique structure and chemistry of the adolescent brain. “The teenage brain is not just an adult brain with fewer miles on it,” says Frances E. Jensen, a professor of neurology. “It’s a paradoxical time of development. These are people with very sharp brains, but they’re not quite sure what to do with them.”
Research during the past 10 years, powered by technology such as functional magnetic resonance imaging, has revealed that young brains have both fast-growing synapses and sections that remain unconnected. This leaves teens easily influenced by their environment and more prone to impulsive behavior, even without the impact of souped-up hormones and any genetic or family predispositions.
Most teenagers don’t understand their mental hardwiring, so Jensen, whose laboratory research focuses on newborn-brain injury, and David K. Urion, an associate professor of neurology who treats children with cognitive impairments like autism and attention deficit disorder, are giving lectures at secondary schools and other likely places. They hope to inform students, parents, educators, and even fellow scientists about these new data, which have wide-ranging implications for how we teach, punish, and medically treat this age group. As Jensen told some 50 workshop attendees at Boston’s Museum of Science in April, “This is the first generation of teenagers that has access to this information, and they need to understand some of their vulnerabilities.”
Human and animal studies, Jensen and Urion note, have shown that the brain grows and changes continually in young people—and that it is only about 80 percent developed in adolescents. The largest part, the cortex, is divided into lobes that mature from back to front. The last section to connect is the frontal lobe, responsible for cognitive processes such as reasoning, planning, and judgment. Normally this mental merger is not completed until somewhere between ages 25 and 30—much later than these two neurologists were taught in medical school.
There are also gender differences in brain development. As Urion and Jensen explain, the part of our brain that processes information expands during childhood and then begins to thin, peaking in girls at roughly 12 to 14 years old and in boys about two years later. This suggests that girls and boys may be ready to absorb challenging material at different stages, and that schools may be missing opportunities to reach them.
Meanwhile, the neural networks that help brain cells (neurons) communicate through chemical signals are enlarging in teen brains. Learning takes place at the synapses between neurons, as cells excite or inhibit one another and develop more robust synapses with repeated stimulation. This cellular excitement, or “long-term potentiation,” enables children and teenagers to learn languages or musical instruments more easily than adults.
On the flip side, this plasticity also makes adolescent brains more vulnerable to external stressors, as Jensen and Urion point out.
Teen brains, for example, are more susceptible than their adult counterparts to alcohol-induced toxicity. Jensen highlights an experiment in which rat brain cells were exposed to alcohol, which blocks certain synaptic activity. When the alcohol was washed out, the adult cells recovered while the adolescent cells remained “disabled.” And because studies show that marijuana (cannabinoid) use blocks cell signaling in the brain, according to Jensen, “We make the point that what you did on the weekend is still with you during that test on Thursday. You’ve been trying to study with a self-induced learning disability.”
Similarly, even though there is evidence that sleep is important for learning and memory, teenagers are notoriously sleep-deprived. Studying right before bedtime can help cement the information under review, Jensen notes. So can aerobic exercise, says Urion, bemoaning the current lack of physical-education opportunities for many American youths.
Teens are also bombarded by information in this electronic age, and multitasking is as routine as chatting with friends on line. But Jensen highlights a recent study showing how sensory overload can hinder undergraduates’ ability to recall words. “It’s truly a brave new world. Our brains, evolutionarily, have never been subjected to the amount of cognitive input that’s coming at us,” she says. “You can’t close down the world. All you can do is educate kids to help them manage this.” For his part, Urion believes programs aimed at preventing risky adolescent behaviors would be more effective if they offered practical strategies for making in-the-moment decisions, rather than merely lecturing teens about the behaviors themselves. (“I have yet to meet a pregnant teenager who didn’t know biologically how this transpired,” he says.)
By raising awareness of this paradoxical period in brain development, the neurologists hope to help young people cope with their challenges, as well as recognize their considerable strengths.
|
<urn:uuid:d595c210-8e57-49c1-a48d-5c987f7f4c69>
|
CC-MAIN-2016-26
|
http://harvardmagazine.com/2008/09/the-teen-brain.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00088-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.958487
| 1,117
| 3.625
| 4
|
Classification of flowering plants, gymnosperms, and ferns. Emphasis on
collection, identification, and preparation of herbarium specimens.
Prerequisite: BIOL 149. You must check with the instructor if you do not satisfy the prerequisite; otherwise, you will be dropped from the course.
The course will prepare you for practical plant identification in any environment by teaching you to use taxonomic keys and exposing you to terminology, family characteristics, and plant systematics. The course will require you to learn the common plants of western Maryland. You will also gain a sense of place in the Appalachian Mountains, where you attend college.
You may forget much of the detail of this material; however, there are some aspects that I hope will serve you well throughout your life: 1) an appreciation of the beauty and intricacy of plants and the enjoyment of discovering things about nature; 2) improved skills in memory, observation, writing, and critical thinking; 3) a base knowledge of the structure, function, and evolutionary history of plants.
|
<urn:uuid:031c84bc-2d36-4bb9-911c-63cfed577a0e>
|
CC-MAIN-2016-26
|
http://www.frostburg.edu/ethnobotany/classes/plant-taxonomy/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00096-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.908538
| 253
| 2.953125
| 3
|
As part of the Internet Plasma Physics Education eXperience (IPPEX) project, this Java applet is designed to illustrate the basic principles of magnetically confined fusion.
Well, essentially, they are glass globes with 'lightning bolts' coming from a central electrode to the surface of the sphere. If you touch the globe, the lightning will normally follow your hand.
Take control of a plasma experiment over the internet and see how pressure, electric field strength and magnetic field all affect it.
An introduction to nuclear fusion power, explaining plasma confinement and the use of tokamaks. From JET – Europe's largest nuclear fusion research facility.
Interactive physics tutorials using applets to explain the background for fusion energy research and plasma physics, plus performing virtual tokamak experiments.
This is what The Internet Plasma Physics Education Experience (IPPEX) is all about! The following pages are full of information on Fusion and Fusion Power, and will help you dive into the basic ...
Hannes Olof Gösta Alfvén (1908 - 1995) received the Nobel Prize for Physics in 1970 for his work and discoveries in magneto-hydrodynamics (MHD) which has applications in different parts of plasma ...
The mission of the U.S. Fusion Energy Sciences Program is to advance plasma science, fusion science, and fusion technology − the knowledge base needed for an economically and environmentally ...
|
<urn:uuid:5408174c-fe90-40f2-8403-8339f9f6322f>
|
CC-MAIN-2016-26
|
http://www.physics.org/explore-results-all.asp?currentpage=2&age=0&knowledge=0&q=plasma
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00115-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.878577
| 394
| 2.90625
| 3
|
PUPILS were transported back to 1914 as part of a multi-sensory commemoration of the beginning of the First World War.
Military sergeants attempted to enlist new recruits and show them how to march to the backdrop of propaganda and live music.
Wycombe High School pupils were able to walk through the foul-smelling trenches and a field hospital with surgeons operating on the wounded.
And in a poignant tribute, the school’s playing field was converted into a field of poppies, with a cross in place for each of the heroes from High Wycombe who gave their lives during The Great War.
Hilary Brash, Assistant Headteacher, said: “The pupils have been working on this project since the end of their early GCSEs and have devised the entire experience themselves.
“Their creativity, good humour and energy have been matched by their sensitivity and compassion, and the end result was very moving.”
The Trench Experience event was organised by the school’s Year Tens and was attended by RAF Group Captain Frank Clifford.
Pupil Katie Parry said: “It's wonderful our school has gone to so much effort to commemorate the beginning of the First World War.
“The project has taught us so much and really highlighted the extent that the soldiers went to, to fight for their countries; my respect for them has substantially intensified.”
|
<urn:uuid:b4d66036-3f51-41eb-bd47-44686de53491>
|
CC-MAIN-2016-26
|
http://www.bucksfreepress.co.uk/NEWS/11356745.Pupils_transported_back_to_1914_in__moving__WWI_commemoration/?ref=rss
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00066-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.981294
| 295
| 2.53125
| 3
|
All our eyes are now set in Copenhagen, in what’s in my view one of the most important meetings ever held. Following the overhyped data fraud scandal, which is being targeted by many skeptics as the “Climategate”, the UK Met Office decided to make available the data for more than 1,000 weather stations from across the world, in order to hush divergent voices. The dataset, to be released this week, is the subset of stations evenly distributed across the globe and provides a “fair representation of changes in mean temperature on a global scale over land”, said the Met Office in a statement. “We are confident this subset will show that global average land temperatures have risen over the last 150 years.”
The data have not yet been made public, but once they are I will update this post. In case you cannot wait for this dataset, the group of scientists at RealClimate.org has recently put together a cohesive list of data sources, from innumerable satellites and stations, covering sea levels, sea temperature, surface temperature, aerosols, greenhouse gases, and much more. In a blog post announcing the list, the group states:
Much of the discussion in recent days has been motivated by the idea that climate science is somehow unfairly restricting access to raw data upon which scientific conclusions are based. This is a powerful meme and one that has clear resonance far beyond the people who are actually interested in analysing data themselves. However, many of the people raising this issue are not aware of what and how much data is actually available.
This is a moment of great momentum for all of us involved in Visualization at large to be part of the solution and deliver a clear, unequivocal view of what is happening to our planet. Regardless of how you label your practice, Information Visualization, Data Visualization, Information Design, Visual Analytics, or Information Graphics, this is ultimately a call for everyone dealing with the communication of information for human reasoning. Let's roll up our sleeves!
|
<urn:uuid:f8ea8e44-ad49-4058-b197-860effbb7014>
|
CC-MAIN-2016-26
|
http://www.visualcomplexity.com/vc/blog/?m=200912
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00020-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.93903
| 413
| 2.703125
| 3
|
Since at least 2004, sudden aspen decline, or SAD for short, has killed trees in five Western states in sweeping fashion. By 2008, in Colorado alone, more than a half million acres were afflicted. But after a few wet, cool years, the fatal phenomenon is finally relenting. "We're pretty sure that the drought in 2002 was the major inciting event," says Jim Worrall, a forest pathologist with the U.S. Forest Service. That year -- one of the driest on record in Colorado -- weakened trees, leaving them vulnerable to insects and disease. Aspen stands at lower elevations and on the upper reaches of south- and southwest-facing slopes were most affected, suggesting that warm, dry conditions were the primary triggers of decline. Trees are still dying, but new research shows that SAD is no longer spreading, at least not to large new areas, and is not affecting new growth. Even so, aspen's glory days may be past. So far, regeneration has been too slight to replace dieback in the most hard-hit areas. And with hot, dry spells expected to become more frequent, the next epidemic is likely not far off. Shown here, an aspen tree with Cytospora canker, which often kills trees affected by SAD.
|
<urn:uuid:e7f7f617-4f8b-4610-81a4-2641aa1147fe>
|
CC-MAIN-2016-26
|
http://www.hcn.org/issues/42.20/not-quite-so-sad
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00156-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.977635
| 266
| 3.140625
| 3
|
Byzantium and Islam
Age of Transition
March 14–July 8, 2012
As the seventh century began, vast territories extending from Syria to Egypt and across North Africa were ruled by the Byzantine Empire from its capital, Constantinople (modern Istanbul). Critical to the wealth and power of the empire, these southern provinces, long influenced by Greco-Roman traditions, were home to Orthodox, Coptic, and Syriac Christians, Jewish communities, and others. Great pilgrimage centers attracted the faithful from as far away as Yemen in the east and Scandinavia in the west. Major trade routes reached eastward down the Red Sea past Jordan to India in the south, bringing silks and ivories to the imperial territories. Major cities made wealthy by commerce extended along inland trade routes north to Constantinople and along the Mediterranean coastline. Commerce carried images and ideas freely throughout the region.
In the same century, the newly established faith of Islam emerged from Mecca and Medina along the Red Sea trade route and reached westward into the empire's southern provinces. Political and religious authority was transferred from the long established Christian Byzantine Empire to the newly established Umayyad and later Abbasid Muslim dynasties. The new powers took advantage of existing traditions of the region in developing their compelling secular and religious visual identities. This exhibition follows the artistic traditions of the southern provinces of the Byzantine Empire from the seventh century to the ninth, as they were transformed from being central to the Byzantine tradition to being a critical part of the Islamic world.
|
<urn:uuid:a5c82d9c-9b42-4b47-9a1b-da654496d038>
|
CC-MAIN-2016-26
|
http://metmuseum.org/exhibitions/listings/2012/byzantium-and-islam
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00035-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.95777
| 302
| 3.359375
| 3
|
IT'S MORE THAN A PIECE OF WIRE
You may think a radio transmitter's antenna is just a
length of wire running from the foremast to the mainmast, and that any dumb-bell can rig one. A receiver's
antenna may be that simple, but that is not quite true
for a transmitter antenna.
An ANTENNA IS a piece of wire. It is cut to the PROPER
LENGTH and CORRECTLY installed so that it will RADIATE
EFFICIENTLY the energy delivered to it from the transmitter. The word "EFFICIENTLY" is the word you want
to note well. ANY WIRE carrying an a.c. radiates electromagnetic energy-remember the HUM that your receiver
picked up from a 60-cycle power line? And the static
from a neon sign driven by an induction coil?
The power line and neon sign are not EFFICIENT RADIATORS because they were not designed to radiate energy.
The power line carries energy from the power plant to
your motor or light bulb, while a neon sign is built to produce light.
But an ANTENNA is designed to RADIATE, in the form of
ELECTROMAGNETIC WAVES, the energy delivered to it by the transmitter.
The BASIC ANTENNA is a DIPOLE-a WIRE with a length
equal to HALF A WAVE LENGTH. If a station is operating
on a wave length of 100 meters, the dipole to be used at
that wave length will be-
100 / 2 = 50 meters, or about 164 feet.
A transmitter operating on a wave length of one meter
(300 mc.) will require a dipole 1/2 meter long-about 20 inches.
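The dipole arithmetic above can be sketched in a few lines of Python (the language and the helper names are my choice, not the manual's). This uses the free-space length and ignores the roughly 5 percent end-effect correction introduced later in the chapter.

```python
METERS_PER_FOOT = 0.3048

def half_wave_meters(wavelength_m: float) -> float:
    """A basic dipole is one half wavelength long."""
    return wavelength_m / 2.0

# 100-meter station: a 50-meter dipole, about 164 feet.
print(half_wave_meters(100.0))                              # 50.0
print(round(half_wave_meters(100.0) / METERS_PER_FOOT))     # 164

# 1-meter (300 mc.) station: half a meter, about 20 inches.
print(round(half_wave_meters(1.0) / METERS_PER_FOOT * 12))  # 20
```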
IMPEDANCE OF A DIPOLE
First of all, you must remember that an antenna carries a.c. Therefore the antenna will have inductive reactance as well as RESISTANCE. In a dipole, the impedance is MAXIMUM at BOTH ENDS, and MINIMUM at the
Figure 137.-Impedance of a dipole.
CENTER. In figure 137 the impedance is illustrated as
being greatest at each end, gradually diminishing until it
reaches minimum at the center.
Now this information is just for your convenience-the impedance of a DIPOLE at its CENTER is approximately
73.2 ohms, REGARDLESS of what frequency you use.
CURRENT AND VOLTAGE IN A HALF-WAVE ANTENNA
If a feeder line from the transmitter is connected to the
center of a DIPOLE, the antenna will operate as if you set
an a.c. generator between TWO QUARTER-WAVE antennas,
as in figure 138.
Figure 138.-Development of an antenna.
During one half of the alternation, the electrons will
flow from right to left, figure 138B. On the next half-alternation, the generator will make the electrons flow in
the opposite direction, figure 138C.
In an antenna, as in any other circuit, the flow of electrons is the GREATEST where the IMPEDANCE is LEAST.
Therefore, more electrons will be moving at the CENTER
of the dipole than at the ENDS.
What's the voltage along an antenna? Voltage is always GREATEST where the IMPEDANCE is the HIGHEST.
Thus you will find the HIGHEST VOLTAGE at the ENDS of
the dipole, figure 138D. During one half of an alternation, the left end of the dipole will be MAXIMUM NEGATIVE,
and the right end will be POSITIVE. On the next half
alternation, the POLARITY of voltages is reversed.
If the antenna extends EXACTLY one-quarter wave
length on each side of the generator, the REBOUNDING or
reflected ELECTRONS from the negative end of the dipole
will return at the proper instant to reinforce the movement of other electrons already moving in that direction.
But if the antenna is GREATER or LESS than one-quarter
wave length on each side of the generator, much of the
energy will be lost in the collision of electrons trying to
flow in TWO directions at the same time.
Figure 139.-Relationship of current and voltage in a dipole.
From the CURRENT-VOLTAGE diagrams of figure 139, you
can see the CHARACTERISTICS of an antenna. The current is MAXIMUM at the CENTER. The VOLTAGE is maximum POSITIVE at ONE END and MAXIMUM NEGATIVE at the OTHER END.
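A minimal numerical sketch of these distributions, assuming the usual sinusoidal standing-wave shapes along a half-wave dipole (normalized amplitudes; Python and the function names are mine, not the manual's):

```python
import math

def current_amplitude(x: float) -> float:
    """Relative current along a half-wave dipole. x runs from -0.5
    (one end) to +0.5 (the other) in units of the dipole length:
    maximum at the center, zero at the ends."""
    return math.cos(math.pi * x)

def voltage_amplitude(x: float) -> float:
    """Relative voltage: zero at the center, maximum positive at
    one end and maximum negative at the other."""
    return math.sin(math.pi * x)

assert current_amplitude(0.0) == 1.0       # current loop at the center
assert abs(current_amplitude(0.5)) < 1e-9  # no current off the ends
assert voltage_amplitude(0.5) == 1.0       # maximum positive at one end
assert voltage_amplitude(-0.5) == -1.0     # maximum negative at the other
```

On the next half alternation the generator reverses, which in this sketch simply flips the sign of both amplitudes.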
ELECTROMAGNETIC FIELD SURROUNDING A DIPOLE
A dipole suspended out in space away from the influence of the earth would be surrounded by an ELECTROMAGNETIC FIELD the shape of a DOUGHNUT, as shown in
figure 140. You see that no radiation takes place at the
ENDS of the dipole. If the antenna is mounted vertically,
the field will have the shape of a doughnut lying on
the ground. All areas surrounding the dipole will receive a magnetic field of equal strength, as in figure 140B.
Figure 140.-Electromagnetic field surrounding a dipole.
Set the dipole PARALLEL TO the surface of the earth-the
field is the shape of a doughnut standing on edge.
The GREATEST FIELD STRENGTH is along a vertical line
PERPENDICULAR to the dipole.
ELECTROSTATIC FIELD SURROUNDING A DIPOLE
The high voltages at each end of the dipole produce an
ELECTROSTATIC FIELD which is at maximum strength at
the ends of the dipole. But if the antenna is shorter or
longer than a half-wave length, the electrostatic field
strength will be greatest at the point where the voltage is highest.
The electrostatic field is always present with an electromagnetic field. One cannot exist without the other.
In most cases, only the electromagnetic will be discussed,
but remember, the electrostatic is always there too.
The electrostatic and electromagnetic fields surrounding an antenna each form STANDING WAVES. The two
types of standing waves are as dissimilar as current and
voltage. The electrostatic field is 90° out of phase with
the electromagnetic field. The presence of an ELECTROMAGNETIC field can be shown by the glowing of a MAZDA
lamp-loop in the presence of the field, while a NEON lamp
will glow in the presence of an electrostatic field. The
points along an antenna where the magnetic fields are
MAXIMUM are called CURRENT LOOPS. The points where
the electrostatic fields are maximum are called VOLTAGE LOOPS.
Figure 141.-Standing waves along full-wave antenna.
Figure 141 shows the location of the loop points along a
full-wave antenna. The CURRENT LOOPS appear every
half wavelength, and a VOLTAGE LOOP appears every other quarter-wave point, midway between the current loops.
If you move a NEON bulb along an r.f. transmission line,
the bulb will glow each time a voltage loop is reached.
If the transmission line is several wavelengths long,
several voltage loops will be spotted.
You can determine the wavelength of your transmitter
approximately if you measure the distance between the
loop points, since each loop is exactly one-half wavelength
from the other.
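That measurement translates directly into code; a sketch, where the 5-meter loop spacing is a made-up example and the helper names are mine:

```python
C = 299.8e6  # speed of light in meters per second (approximate)

def wavelength_from_loops(loop_spacing_m: float) -> float:
    """Adjacent voltage loops sit one-half wavelength apart."""
    return 2.0 * loop_spacing_m

def frequency_mc(loop_spacing_m: float) -> float:
    """Operating frequency in megacycles from the loop spacing."""
    return C / wavelength_from_loops(loop_spacing_m) / 1e6

# Neon-bulb glows found 5 meters apart: a 10-meter wave, about 30 mc.
print(wavelength_from_loops(5.0))  # 10.0
print(round(frequency_mc(5.0)))    # 30
```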
ELECTRICAL LENGTHS AND ACTUAL LENGTHS OF ANTENNAS
An ideal antenna, one completely free from the influence of the earth, would have an ACTUAL LENGTH exactly
equal to its ELECTRICAL LENGTH. For instance-an ideal
half-wave antenna for use with a 100-meter wavelength
would be 50 meters long.
Since no antenna is completely free from the influence
of the earth, the PHYSICAL length of an antenna is approximately 5 percent shorter than its ELECTRICAL length.
A half-wave antenna for a 100-meter station will be 50
meters minus 5 percent or 47½ meters long.
The physical length of a half-wave antenna for frequencies above 30 mc. can be calculated from the frequency
by using the following equation-
LENGTH (feet) = (492 x 0.95) / frequency, in megacycles
The number 492 is a factor for converting meters to
feet. The correction factor, 0.95, is 100 percent minus
the 5 percent loss due to the effect of the earth.
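The formula drops straight into Python; note that with the 5 percent correction applied, a 30 mc. half-wave antenna comes out just under 16 feet (the function name is mine, not the manual's):

```python
def half_wave_physical_feet(freq_mc: float, correction: float = 0.95) -> float:
    """Physical length of a half-wave antenna in feet, using the
    chapter's formula: (492 x 0.95) / frequency in megacycles.
    492 converts meters to feet; 0.95 accounts for the earth."""
    return 492.0 * correction / freq_mc

# 30 mc. (the 10-meter band):
print(round(half_wave_physical_feet(30.0), 1))  # 15.6
```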
THE HERTZ ANTENNA
Any antenna that is one-half wavelength long is a
HERTZ ANTENNA, and may be mounted either vertically or
horizontally. The great length of HERTZ antennas makes
them difficult and costly to build to handle low frequencies.
Consider the problem of constructing a half-wave antenna
for a wavelength of 545 meters-550 kc. The antenna
would have to be about 851 feet long! You can imagine
the weight of a horizontal cable 850 feet long. And a
vertical half-wave antenna would be as tall as the RCA
building in New York's Radio City.
Because of the construction difficulties and costs, you
will find that half wave antennas are seldom used with
broadcasting transmitters operating at frequencies below
1,000 kc. But half-wave antennas are widely used with
high-frequency communication transmitters. A half-wave antenna for a 30 mc.-10 meters-transmitter will
be only a little over 16 feet long.
THE MARCONI ANTENNA
The MARCONI ANTENNA is also known as the QUARTER-WAVE ANTENNA, and the GROUNDED ANTENNA. Figure
142 illustrates the principle of a Marconi antenna
mounted ON the surface of the earth. The transmitter is
connected between the BOTTOM of the antenna and the
earth. Although the antenna is only ONE-QUARTER WAVELENGTH, the REFLECTION or IMAGE in the earth is EQUIVALENT to ANOTHER quarter-wave antenna. By this arrangement, HALF-WAVE operation can be obtained from an
antenna only a QUARTER wavelength long.
Figure 142.-Quarter-wave Marconi antenna, showing antenna images.
Figure 143.-Current and voltage relationships in antennas of various lengths.
The relationship of impedance, current, and voltage
in a quarter-wave ground antenna are similar to those
in a half-wave Hertz antenna. IMPEDANCE and VOLTAGE
are MAXIMUM at the TOP of the antenna and MINIMUM at
the BOTTOM. The flow of CURRENT IS GREATEST at the
BOTTOM and LEAST at the TOP.
The advantage of using a Marconi antenna can be seen
when you compare a length of 426 feet for a Marconi to
851 feet for a Hertz antenna at 550 kcs.
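A quick check of that comparison, using the common shortcut constant 468 (that is, 492 x 0.95 rounded); the quarter-wave figure lands within a foot of the 426 quoted above. The helper names are mine:

```python
def hertz_feet(freq_mc: float) -> float:
    """Half-wave (Hertz) physical length in feet: 468 / f(mc.)."""
    return 468.0 / freq_mc

def marconi_feet(freq_mc: float) -> float:
    """Quarter-wave (Marconi) antenna: half the Hertz length."""
    return hertz_feet(freq_mc) / 2.0

# 550 kc. is 0.55 mc.
print(round(hertz_feet(0.55)))    # 851
print(round(marconi_feet(0.55)))  # 425 (the chapter rounds this to 426)
```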
The quarter-wave antenna is used extensively with
portable transmitters. On an airplane, a quarter wave
mast or a trailing wire will be the ANTENNA, and the
FUSELAGE will produce the IMAGE. Similar installations
are made on ships. A quarter-wave mast or horizontal
wire will be the antenna, the hull and superstructure will
provide the image.
ANTENNAS OF OTHER LENGTHS
Occasionally you'll need an antenna of some other
length than one-quarter or one-half wavelength. You'll
see some of the usual lengths in figure 143.
Figures 143A and 143C are examples of CURRENT FED
antennas, while figures 143B and 143D are VOLTAGE-FED.
The expressions VOLTAGE-FED and CURRENT-FED refer to
the points along the antenna where the power is applied.
In the CURRENT-FED antenna of figure 143A, the power is
delivered to the antenna at the point of HIGHEST CURRENT.
The antenna of figure 143B is VOLTAGE-FED, the power being applied to the point of HIGHEST VOLTAGE.
CORRECT THE ELECTRICAL LENGTH
After the antenna has been erected, you may find that
its physical length is greater or less than its electrical
length. If a grounded antenna is less than one-quarter
wavelength, there will be a CAPACITIVE effect at the base,
and an INDUCTANCE must be added in series to increase
the ELECTRICAL LENGTH, as in figure 144A.
When the physical length of an antenna is GREATER
than its correct electrical length, the antenna will have
excess INDUCTANCE. In this case it will be necessary for
you to add a CONDENSER in series with the antenna to
SHORTEN its electrical length, as in figure 144B.
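The correction rule can be written as a small decision sketch. The quarter-wave target reuses the chapter's (492 x 0.95) / f formula, and the 1-foot tolerance is an illustrative assumption of mine, not from the manual:

```python
def quarter_wave_feet(freq_mc: float) -> float:
    """Quarter-wave physical length: half of (492 x 0.95) / f feet."""
    return 492.0 * 0.95 / freq_mc / 2.0

def series_element(physical_len_ft: float, freq_mc: float) -> str:
    """Too short looks capacitive, so add series inductance to
    lengthen the antenna electrically; too long looks inductive,
    so add a series condenser to shorten it."""
    target = quarter_wave_feet(freq_mc)
    if physical_len_ft < target - 1.0:
        return "add series inductance"
    if physical_len_ft > target + 1.0:
        return "add series condenser"
    return "no loading needed"

# A 20-foot whip at 10 mc. (target about 23.4 ft) is electrically short:
print(series_element(20.0, 10.0))  # add series inductance
```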
ANTENNA TUNING CIRCUITS
You will have to change the ELECTRICAL LENGTH of the
antenna each time you change the FREQUENCY of the
transmitter. Since you can't climb up the superstructure
and chop off a piece of the antenna each time you
increase the frequency, you will use a combination of
VARIABLE INDUCTANCES and CONDENSERS to adjust the
ELECTRICAL LENGTH. Condensers and inductances used
for this purpose make up the ANTENNA LOADING or ANTENNA TUNING circuits.
Figure 144.-Methods of correcting the electrical length.
The construction of a transmission line to carry LOW-FREQUENCY
a.c. is relatively simple, but the building of a
line that will EFFICIENTLY transmit the energy of a HIGH-FREQUENCY radio transmitter to the antenna is something else again.
Figure 145.-Open two-wire transmission line.
Transmission lines used with frequencies below 300 mc.
are of four general types-the OPEN TWO-WIRE system,
the COAXIAL CABLE or CONCENTRIC LINE, the TWISTED PAIR,
and the SHIELDED PAIR.
Figure 145 shows an open two-wire transmission line.
Wires are held rigidly in a parallel position by INSULATED
SPACERS. For 20 mc. and lower, a spacing of at least six
inches is desirable. For frequencies higher than 20 mc. a
spacing of four inches is best.
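The spacing also sets the line's characteristic impedance. The standard air-dielectric formula Z0 ≈ 276 log10(2D/d) — textbook transmission-line theory, not from this manual — gives an idea of the impedances such spacings produce; the 0.1-inch wire diameter below is an assumed figure:

```python
import math

# Characteristic impedance of an open two-wire line (air dielectric):
# Z0 ~ 276 * log10(2*D/d), D = center-to-center spacing, d = wire diameter.

def z0_two_wire(spacing, diameter):
    return 276.0 * math.log10(2.0 * spacing / diameter)

# 6-inch spacing with assumed 0.1-inch wire, as for the sub-20-mc case:
print(round(z0_two_wire(6.0, 0.1)))  # roughly 574 ohms
```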
Figure 146 is a drawing of COAXIAL CABLE or a CONCENTRIC LINE.
It consists of a copper tube with a copper
wire extending down the length of the tube. The wire
is held centered in position in the tube by INSULATED SPACERS.
Higher operating efficiency is obtained by filling the
tube of the CONCENTRIC LINE with NITROGEN under several
pounds of pressure. But a pressurized line is often a
source of trouble. Vibrations caused by gunfire or rough
sea may cause leaks which allow the pressure to drop.
If this happens, the efficiency of the line will drop.
Figure 146.-Concentric line.
The concentric line has several advantages. The tube
is GROUNDED. This allows you to install the line in any
convenient position. Because the open two-wire system
lacks insulation, it must be carefully located. It is subject to stray capacitative and inductive coupling.
The TWISTED PAIR and the SHIELDED PAIR are not commonly used as transmission lines. Both types are shown
in figure 147. The twisted pair is the least efficient. The
Figure 147.-Twisted and shielded pair transmission lines.
shielded pair possesses an advantage in having a
GROUNDED OUTER SHIELD surrounding the two lines. This
shield prevents stray capacitative and inductive couplings.
RESONANT AND NON-RESONANT TRANSMISSION LINES
Transmission lines are either RESONANT or NON-RESONANT. A RESONANT line has characteristic STANDING
WAVES, while a NON-RESONANT line does not.
Remember the STANDING WAVE is the result of a certain
amount of energy being REFLECTED BACK along the transmission line. Imagine a transmission line so long that
NONE of the energy sent out by the transmitter ever
reaches the end of the line. Naturally, since none
reaches the end, none can be reflected back.
But no line is that long, so why not string up a line
of convenient length and connect a device to the far end
that will ABSORB ALL the energy traveling down the line?
Since all the energy is absorbed, none is left to be reflected
back. This gives you a NON-RESONANT line. To do this,
the IMPEDANCE of the ABSORBER must match the IMPEDANCE of
the ANTENNA. The absorber will collect all the energy
fed into the line and feed that energy into the antenna
to be radiated as an electromagnetic field.
A RESONANT LINE does NOT have its impedance matched
to the impedance of the antenna. This type of line is
actually an ANTENNA whose length is some multiple-1, 2, 3, etc.-of a QUARTER wavelength. You fasten one
end of the line to the antenna, the other end to the transmitter.
RESONANT lines are usually OPEN TWO-WIRE SYSTEMS,
while the NON-RESONANT line may be TWO-WIRE, a CONCENTRIC, a SHIELDED, or TWISTED PAIR.
YOUR JOB AND ANTENNAS
You may never be called upon to rig an antenna, or
even change an installation you are using, but a knowledge
of what an antenna is and what it does will help you
in the tuning of your transmitters.
Remember the antenna's job is to radiate, in the form
of electromagnetic energy, as much as possible of the
energy delivered by the transmission lines from the transmitter. To do this, the antenna must be correctly built
and correctly installed. But more important as far as
you are concerned-the transmitter must be correctly
tuned and coupled to the antenna. That is your job.
The text() function allows us to put text on the plot where we want it. An obvious use is to label a line or group of points.
This would give us two labels in the upper right corner of the plot, which with appropriate arrows could be used in place of a legend. Note that vectors were supplied as the coordinates. The first label would be centered at (2,37), and the second at (2,35), as the default alignment is to center the labels.
Text labels can also be placed in the margins of a plot using the mtext() function.
This would place the words "Low" and "High" on the second line below the X axis
centered at 5 and 7 units. Note that the documentation
declares that user coordinates passed to mtext() do not match those on the plot, but it
appears that they do at the moment (R-1.6.2).
For more information, see An Introduction to R: Low-level plotting commands.
It’s Blasphemy Day today.
Why do we celebrate it?
Blasphemy Day International is a campaign seeking to establish September 30th as a day to promote free speech and stand up in a show of solidarity for the freedom to challenge, criticize, and satirize religion without fear of murder, litigation, and reprisal. Blasphemy Day takes place September 30th to commemorate the publishing of the Jyllands-Posten Muhammad cartoons. The purpose of Blasphemy Day is not to promote hate or violence; it is to support free speech, support the right to criticize and satirize religion, and to oppose any resolutions or laws, binding or otherwise, that discourage or inhibit free speech of any kind.
The Center For Inquiry is running a blasphemy contest through next week to mark the occasion. All you have to do is “create a phrase, poem, or statement that would be or would have been considered blasphemous.”
Paul Kurtz is the founder and former Chairman of CFI and he thinks the contest is in the wrong spirit. I have to say I agree with him.
It is one thing to examine the claims of religion in a responsible way by calling attention to Biblical, Koranic or scientific criticisms, it is quite another to violate the key humanistic principle of tolerance. One may disagree with contending religious beliefs, but to denigrate them by rude caricatures borders on hate speech. What would humanists and skeptics say if religious believers insulted them in the same way?
It’s not just the contest that’s the problem. It’s the idea of blasphemy, used in the wrong way.
If you’re doing something blasphemous today, ask yourself these questions:
- Are you doing it to make some larger point?
- Are you doing it to begin a conversation?
- Are you doing it just to piss off religious people?
- What are you trying to accomplish?
Blasphemy isn’t always bad, of course. When Jyllands-Posten published “blasphemous” cartoons about Muhammad — and a few other newspapers and magazines followed — they were doing it to support freedom of the press. A very important point was made: just because you find something or someone sacred doesn’t mean the rest of us have to follow suit. More power to those institutions which stood up to the violent protestors.
Yes, you could hold a sign that reads, “FUCK JESUS.” You could wear a shirt like this. You could paint a picture of religious deities doing all sorts of disgusting things with barnyard animals. But what would be the point?
Without a good reason, you’re not showing the general public that we ought to take advantage of our right to free speech. You’re only showing them you’re a jerk.
I’m not trying to act holier-than-thou or anything, either.
Several affiliates of the Secular Student Alliance are celebrating their own way and I support what they’re doing.
For example, at LSU, Matthew Shepherd‘s campus atheist group has a plan of action:
Shepherd said the group will hand out fliers about the Jyllands-Posten cartoons from 10 a.m. until at least 12:30 p.m., and AHA might set up a system for students to exchange Bibles for materials “promoting free thinking.”
That’s what the day should be about: opening peoples’ minds, not simply offending them for the sake of it.
Just because you can blaspheme doesn’t mean you have to blaspheme.
***Update***: Current CFI CEO Ron Lindsay has responded to Kurtz’s piece.
Paul Kurtz does offer to the readers of Free Thinking a choice between two starkly different views of CFI. There is the CFI that stands with those who believe we should be free to criticize religion just as we criticize other beliefs; then there is the neo-Kurtzian vision of a CFI that would tiptoe around criticism of religion for fear of giving offense. There is a CFI that believes that art, even when it might be considered crude or offensive to some, may have symbolic value, and, in any event, deserves protection; and then there is the neo-Kurtzian CFI that advocates censorship of art. There is the CFI that honors those who have risked everything to express their views about religion; and then there is the neo-Kurtzian CFI that equates critique of religion with hate speech.
Kurtz wasn’t arguing against critiquing religion, though. He was against attacks on religion that had no point other than to offend. There’s a difference between criticizing what the Bible says and drawing a picture called “Jesus Does His Nails”:
No one’s trying to censor the artist or anyone else for that matter.
Again, I think Kurtz is asking what I’m asking: If you’re “blaspheming,” are you doing it for the right reasons?
(On a side note, like Ron Lindsay, I would like to know why Kurtz is using the phrase “fundamentalist atheist”…)
1. In general, contiguous refers to an object that is adjacent to another object.
2. When referring to a computer hard drive, contiguous or continuous refers to sectors on a disk that are next to each other. When information is written to a disk, it is written contiguously if there is enough free space. However, if there are not enough adjacent available sectors, the data is written to multiple places on the disk, causing it to become fragmented.
3. When referring to computer memory, contiguous refers to sections of memory that are next to one another.
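Definition 3 can be illustrated with Python's standard-library array module, which stores its elements in one contiguous block of memory. Reading each element back from its computed address (base plus index times item size) only works because the block is contiguous; the values and layout here are just an illustrative sketch:

```python
import array
import ctypes

# array.array stores elements in one contiguous block: each element
# sits exactly `itemsize` bytes after the previous one.
a = array.array('i', [10, 20, 30, 40])
base_addr, length = a.buffer_info()  # (address of first element, count)

# Recover every element directly from adjacent memory addresses.
values = [
    ctypes.c_int.from_address(base_addr + i * a.itemsize).value
    for i in range(length)
]
print(values)  # [10, 20, 30, 40] -- only possible because storage is contiguous
```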
It’s a brisk afternoon in Paris, and sunlight streams through the glass and steel ceiling of the Grand Palais. Once a flagship of the 1900 World’s Fair, this Beaux-Arts palace is now the temporary home of Solutions COP21 — one of the largest exhibitions of scientific and educational innovations taking place alongside the UN Climate Conference, also known as COP21. Entering through the ornate golden gates into the main gallery, one can’t help but notice two enormous silver spheres. Floating above corporate exhibitors, visitors, and the occasional protestor, they aren’t Christmas decorations, but climate-conscious sculptures from the mind of MIT Visiting Artist Tomás Saraceno.
Saraceno is known for interactive sculptures that combine art with engineering, architecture, and natural science to explore sustainable ways of living. This latest exhibition, "Aerocene," dares to imagine and define a new epoch beyond our current proposed epoch, the Anthropocene. Born from an ongoing collaboration between Saraceno, MIT’s Department of Earth, Atmospheric, and Planetary Sciences and the MIT Center for Art, Science and Technology (CAST), the project is based on atmospheric physics principles and expanding our "thermodynamic imagination," Saraceno says. He hopes his sculptures will achieve the longest emission-free journey around the world.
Made of silver and transparent Mylar, the air-filled sculptures are kept afloat in Earth’s stratosphere by solar heat during the day and infrared radiation at night. Saraceno modeled them after Montgolfière Infrarouge (MIR) — hot-air balloons developed by France’s National Center for Space Studies (CNES) in the 1970s. He is repurposing the zero-energy "aerosolar" technology to inspire the public and scientists alike.
“We like to think of ourselves as living on the Earth’s surface, but we are living at the bottom of an ocean of air. We are told that a shift of 2 degrees will destroy us,” he says, referring to the warming threshold set by world leaders at COP21. “But that feels intangible to us. I think this sculpture visually manifests how much subtle changes can do.” In fact, Saraceno’s assistants monitored the spheres’ pressure throughout the exhibition because solar heat energy coming through the venue’s glass ceiling threatened to overinflate them.
From Boston to Paris
"Aerocene" is more than just a work of environmental art. Ideally, future iterations will float high in the atmosphere while carrying cutting-edge sensors capable of measuring atmospheric ozone, particles, and wind currents, among other things. But a successful flight requires intimate knowledge of large-scale atmospheric dynamics. That’s where MIT meteorologist Lodovica Illari and her team come in.
“Their contribution is taking on the most unpredictable nature of aerosolar balloon flight,” says Leila Kinney, executive director of MIT CAST. Kinney, who collaborated with Saraceno and Illari, was a featured panelist at a Paris symposium that explored Aerocene's socio-political, civic, and scientific implications.
Using historical MIR flight data and associated atmospheric data from CNES, Illari and research associate Bill McKenna examined past trajectories. “Studying the solar balloon flights was very interesting,” Illari says. “We were very impressed by how well the technology was working.” CNES used the MIR balloons to perform several tropical and trans-polar stratospheric flights as late as 2000 with great success. The longest MIR balloon flight, which launched from Bauru, Brazil, in February 2001, traveled around the world in a record 72 days.
Traveling mainly in the lower stratosphere, MIR balloons can rise 30 kilometers during the day and descend to 20 km at night. As they cycle along a gentle atmospheric sine wave, they measure a variety of things including greenhouse gas concentrations. “The lower stratosphere is a critical layer where the chemistry of ozone, methane and other chemicals has a fundamental impact on our climate,” Illari says. “Concentrations of these chemicals are not well known and there is a clear need to better monitor these constituents.”
Illari and McKenna also speculated on the best places and seasons for future launches based on various factors including temperature, pressure, wind strength and direction, and cloud cover. “The project requires critical optimism,” McKenna says. “You have to find a balance between the best atmospheric conditions and maximizing solar and infrared radiation.”
For example, the best time to launch a balloon traveling from Boston to Paris via the tropospheric jet stream is sometime between January and March, they found. Although the mid- and higher latitudes are not the best locations for aerosolar flight — dense cloud formations in the region’s troposphere shield the balloon from infrared radiation — air flow from west to east is strongest during this period and would transport the balloon in only 24 hours. Comparatively, floating a balloon from Boston to Paris in October, when the jet stream is weaker and not very zonal, could take five days.
“Tomás’ vision of flying solar balloons between cities might be rather futuristic, but it opened our eyes,” Illari says. “It made us think outside the box and imagine having a large array of solar balloons, moving with the flow and measuring constituents all over the stratosphere at almost zero energy cost. ‘Becoming aerosolar’ acquired a completely different meaning!”
All of this information comes in handy when planning field research, but Saraceno has an additional goal in mind. “Tomás is not satisfied just sending up sensors,” Kinney says. “He wants us to go.”
Traditional air travel has been a boon for mankind, but at a cost to Earth’s atmosphere. Take COP21, for example. Based on the track records of previous conferences, this year’s meeting is all but guaranteed to emit tens of thousands of tons of carbon dioxide into the atmosphere — of which 85 percent can be linked to the air travel of 22,000 delegates, according to The New York Times. And that figure doesn’t account for the travel of approximately 18,000 journalists, activists, and others monitoring the talks. The United Nations says it will offset all carbon emissions from its staff’s travel. As for delegations from the 195 participating countries, offsetting programs are voluntary.
“Mobility is responsible for a lot of carbon in the air,” Saraceno says. “We need to rethink how we fly. Imagine if different politicians flew to the conference not on an airplane, but by aerosolar flight. The earth is our biggest battery.”
During their collaboration, Saraceno, Illari, and McKenna imagined how emission-free air travel might be possible through aerosolar flight. The resulting video shows tropospheric jet stream trajectories seeded from all world airports in the days leading up to COP21. Each singular path represents 12 hours of travel time, and colors highlight hypothetical balloon launches from the New England area on five departure dates.
Although aerosolar flight won’t replace jet travel anytime soon, Saraceno and his collaborators around the world are one step closer to its realization. In New Mexico’s White Sands National Monument last November, his team took the world’s first and longest fully solar-powered, lighter-than-air vehicle tether flight. “The visceral feeling of launching Aerocene [at White Sands] felt like skydiving, but away from Earth’s surface,” he says. Eventually, Saraceno will attempt to set the longest human aerosolar flight in Bolivia.
This research was funded, in part, by the National Science Foundation FESD project.