12-Year-Old Iceberg's Death Caught on Camera
Iceberg B-15J, in the upper-middle portion of the image, is a remnant of the colossal B-15 iceberg.
In mid-December, a NASA satellite snapped an image of the disintegration of a large iceberg that first broke away from Antarctica nearly 12 years ago, and has been wandering the Southern Ocean ever since.
It's not unusual for icebergs to survive for up to a quarter century if they stay near their birthplace, in the frigid waters surrounding Antarctica; yet if they stray too far north, the massive chunks of ice can quickly disappear, said Ted Scambos, a glaciologist at the National Snow and Ice Data Center at the University of Colorado, Boulder.
In March 2000, the largest iceberg ever measured broke away from the Ross Ice Shelf, a massive plain of floating ice that clings to the coastline of Antarctica. The iceberg, dubbed B-15, was a whopping 170 miles (270 kilometers) long and 25 miles (40 km) wide – nearly the size of Connecticut. Over time, the iceberg broke into smaller pieces.
One of those pieces, known as B-15J, was spotted about 1,700 miles (2,700 km) southeast of New Zealand on Dec. 19.
At the beginning of November, B-15J was roughly 12 miles (20 km) across, but it has now crumbled to just a few miles in breadth as it sails through warmer waters, Scambos told OurAmazingPlanet.
How icebergs are born
Icebergs are birthed by ice shelves, which are the outlets of glaciers and regularly send enormous icebergs floating out to sea in a natural process known as calving.
NASA researchers recently spotted a huge rift in Western Antarctica's Pine Island Glacier ice shelf, and they expect the shelf to calve in the coming months.
Although calving is a natural process, some Antarctic ice shelves are undergoing rapid, sometimes catastrophic change, and Scambos said that tracking icebergs as they melt is a good proxy for scientists trying to understand what is happening to ice shelves in general in a warming world.
Ice shelves essentially act as doorstops for glaciers. When ice shelves disappear or weaken, glaciers speed up, dumping ever more ice into the ocean and raising global sea levels.
Scambos said B-15J is rapidly disintegrating. "It's similar to what happens to ice shelves when they go through a rapid warming," he said.
Smaller and smaller pieces
"Icebergs do a sort of fast-forward through climate change as they drift northward," Scambos said. "All the things that go on in terms of melting on the underside and melting on the surface, they all happen at high speed and dramatically with these large, tabular bergs." [Images: Antarctica, Iceberg Maker]
Scambos said researchers are trying to get some high-resolution satellite images of B-15J as it melts away, but it's difficult. The iceberg moves between 10 and 15 miles (16 and 24 kilometers) a day, and it's a challenge to aim the right satellite required for such detailed images – one operated by Taiwan – at such a small target.
The major pieces of iceberg B-15 have run through most of the alphabet, Scambos said. "It goes to at least B-15X, but the letter pieces are starting to break off into more pieces — and below 10 kilometers [6 mi] nobody tracks them, so there are literally thousands of pieces of B-15 floating around," he said.
This story was provided by OurAmazingPlanet, a sister site to SPACE.com. Reach Andrea Mustain at firstname.lastname@example.org. Follow her on Twitter @AndreaMustain. Follow OurAmazingPlanet for the latest in Earth science and exploration news on Twitter @OAPlanet and on Facebook.
Despite the negative consequences of drug use, some people who take drugs are unable to stop. Drugs change the way the brain works. Some of these changes are short term, while other changes can last a very long time.
In some people drug use can change the brain and its neurotransmitters so profoundly that addiction results. Addiction is characterized by the following:
- Compulsive use: A strong compulsion or drive to use drugs despite negative consequences. In other words, a person persists in using drugs even if he or she is having serious problems.
- Tolerance: Loss of control over the amount of the drug used—the person needs more of the drug to produce the same effect as before.
- Withdrawal: Intense craving for the drug when it is not available. The craving results from changes in the brain. Once a person is addicted, he or she must have the drug just to keep from feeling bad. This is because drugs can cause changes in the normal functioning of neurotransmitters in the brain.
Addiction is considered a disease because the drugs have changed the way the brain functions. Different drugs cause different changes in the brain, some more severe than others. Research in animals and humans suggests that some drugs may cause changes that last long after the individual has stopped taking drugs or even permanently.
Addiction affects men and women of all ages and ethnicities. Because of the severity of the problem, scientists have been studying how drugs act in the brain to produce addiction using a range of methods, from brain imaging to psychological testing. These researchers are trying to identify causes and methods of effective treatment and prevention of drug abuse. As a result of this international attention and research, scientists and physicians now have a greater understanding of how drugs act in the brain. This has led to the development of new treatments for drug addiction.
When a person becomes addicted to a drug, neurological, physiological, psychological, and social changes take place. These biopsychosocial changes must be addressed for the person to get better. The appropriate treatment is dependent on the individual, drug of abuse, and severity of addiction.
Often, detoxification is the first step in addiction treatment. Detoxification is the medically controlled withdrawal of the abused drug. However, this is only the first step in successful treatment, and many drugs, such as cocaine, do not cause the typical detoxification symptoms when their use is discontinued. After a person has gotten off of a drug, he or she still must deal with any changes that have occurred in his or her brain as a result of drug use. Often these changes are much harder to deal with than the initial detoxification from the drug use, and research has shown that some drugs can cause changes in the brain that last for a long time and may even be permanent.
For some abused drugs, medications are available that can be used in conjunction with psychological and social treatments. For other drugs, however, medications are not yet available, so successful treatment relies on psychological and social treatments. These treatments can help a person recovering from addiction deal with a range of emotions, including shame, denial, emotional distress, and neglect of family, friends, work, and school. They can also help them deal with a variety of social problems, such as trouble at school and hurt family members and friends. The person recovering from addiction must work to mend relationships with family and friends, reestablish a responsible role in school, and avoid situations that might provoke a relapse. During treatment and recovery, addicted people and their families often have to learn how to communicate in new and healthy ways. This is typically accomplished during family therapy.
These treatments are offered in a variety of settings, such as hospitals and clinics, and recovery continues through the assistance of self-help and individual and group therapy. Addiction is a serious disease and, in some cases, drug abusers start using drugs again after treatment and need to go back into treatment. Although addiction can be treated successfully, the best way to avoid addiction is to never start using drugs in the first place.
As a result of scientific research, we know that addiction is a disease that affects both brain and behavior.
Let’s say you’ve just arrived from another planet, with a mastery of English, but little exposure to the popular sport known as golf. So you don’t understand why one golfer would hit a “banana ball” and end up with a “bogey,” while another used a “chicken stick” and ended up with an “eagle.”
Like most sports, golf has a lexicon all its own. Many terms never make it off the course—calling a sand bunker a “cat box” or “kitty litter” seems wrong in polite company—while words like “bogey,” “par,” and “eagle” are common shorthand.
In the U.S., “par” is the number of strokes a good player is expected to need to complete a hole, while “bogey” is one stroke more than “par.” But if you said “I bogeyed that hole” to someone in England, the response might be “Good show!” In England, “bogey” is the same as “par.” Or it was, before televised golf tournaments forced commentators to switch to the American view regardless of local parlance.
The current view of the source of “bogey,” The Oxford English Dictionary says, is “The Bogey Man,” a song popular in the late nineteenth century, which taunted people to try to catch an evil spirit, the bogey man.
A golfer in 1890 was new to the idea of a set number of strokes for each hole (called a “ground score” then). It was so difficult for him to reach the ground score, he said, that he called it his “bogey-man.” For a number of years, a “Bogey score” was a desired goal, albeit elusive. But a 1946 U.S. book, Golf Simplified, defined “bogey” as one stroke over par, and the term stuck.
“Eagle,” the term for two under par, is an obvious descendant of “birdie,” one under par. “Birdie” itself began in England in 1911 as “bird,” from a slang term for an exceptional person, the OED says. But its use was primarily American, as witness this 1923 quote from The Daily Mail in London: “Then he went all out to ‘shoot birdies’ the American colloquialism for aiming at doing holes in a stroke under the par scores.”
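The score-relative-to-par terms described above (eagle, birdie, par, bogey) amount to a simple lookup on the difference between strokes taken and par. A minimal sketch in Python, using the modern American meanings given in the column (the function name and the fallback wording for unlisted scores are my own):

```python
# Common (American) terms for a hole score relative to par,
# as described above: eagle, birdie, par, bogey.
TERMS = {-2: "eagle", -1: "birdie", 0: "par", 1: "bogey"}

def score_term(strokes, par):
    """Return the term for a hole played in `strokes` strokes on a par-`par` hole."""
    diff = strokes - par
    # Fall back to a signed number for scores without a listed term.
    return TERMS.get(diff, f"{diff:+d} to par")

print(score_term(3, 4))  # birdie (one under par)
print(score_term(5, 4))  # bogey (one over par)
```

Note that under the older English usage the column describes, "bogey" would map to 0 rather than +1.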
A “banana ball,” according to the U.S. Golf Association, is a ball that curves away from the player, in the shape of a banana. In other words, a “slice.” And a “chicken stick” is the safer club used for a difficult shot when the choice is between the obvious club and a more heroic, but riskier, club. The player is “chicken” and takes the easy way out.
“Par” itself, of course, is from the Latin meaning “equality,” and has meanings far beyond the world of golf. “Par value” of a financial instrument is face value, as opposed to market value; “on a par with” means at the same level as something else; and “she’s feeling under par” means she’s not as well as she could be.
In that case, chicken soup is in order.
The peony (Paeonia) was adopted as the state flower by the 1957 Indiana General Assembly. From 1931 to 1957, the zinnia was the state flower. The peony blooms the last of May and early June in various shades of red and pink and also in white; it occurs in single and double forms. No particular variety or color was designated by the General Assembly. The flower is cultivated widely throughout the state and is extremely popular for decorating gravesites for Memorial Day.
Scientific name: Prunella modularis
Size: Up to 145mm
Distribution: Found throughout the UK
Months seen: All year round
Habitat: Parks, gardens, open woodland and areas with hedges and scrub
Food: Invertebrates and seeds
Special features: The dunnock's plumage is similar to the house sparrow's, being mostly brown with black streaks, although the head and neck have a lot of blue-grey colouring and the underparts are generally darker. The beak is dark coloured and very thin, and the legs are an orange-brown colour. The males and females are similar, but the male frequently has more of the blue-grey colouring on the head.
Dunnocks are shy birds and like to be close to cover. Their flight is very jerky, and they spend a lot of time on the ground.
Dunnocks are sometimes called 'Hedge Sparrows' or 'Hedge Accentors'.
and Frontier Children
Between 1841 and 1865, some 40,000 children moved westward on the overland trails, headed to California, Oregon, Utah, and other destinations. Their accounts of their journeys and of their childhood experiences in the Far West bring the hardships and joys of pioneering to life. Their words allow us to appreciate its difficulties, but also the opportunities it gave even very young children to exhibit their resilience and inner strength.
What were some of the hardships and difficulties children faced as they and their families traveled the overland trails?
Draw upon the following accounts to describe how a pioneer or frontier childhood differs from childhood today.
History of Aviation
On December 17, 1903, Orville and Wilbur Wright capped four years of research and design efforts with a 120-foot, 12-second flight at Kitty Hawk, North Carolina - the first powered flight in a heavier-than-air machine. Prior to that, people had flown only in balloons and gliders.
The first person to fly as a passenger was Leon Delagrange, who rode with French pilot Henri Farman from a meadow outside of Paris in 1908. Charles Furnas became the first American airplane passenger when he flew with Orville Wright at Kitty Hawk later that year.
The first scheduled air service began in Florida on January 1, 1914. Glenn Curtiss had designed a plane that could take off and land on water and thus could be built larger than any plane to date, because it did not need the heavy undercarriage required for landing on hard ground. Thomas Benoist, an auto parts maker, decided to build such a flying boat, or seaplane, for a service across Tampa Bay called the St. Petersburg - Tampa Air Boat Line. His first passenger was ex-St. Petersburg Mayor A.C. Pheil, who made the 18-mile trip in 23 minutes, a considerable improvement over the two-hour trip by boat. The single-plane service accommodated one passenger at a time, and the company charged a one-way fare of $5. After operating two flights a day for four months, the company folded with the end of the winter tourist season.
World War I
These and other early flights were headline events, but commercial aviation was very slow to catch on with the general public, most of whom were afraid to ride in the new flying machines. Improvements in aircraft design also were slow. However, with the advent of World War I, the military value of aircraft was quickly recognized and production increased significantly to meet the soaring demand for planes from governments on both sides of the Atlantic. Most significant was the development of more powerful motors, enabling aircraft to reach speeds of up to 130 miles per hour, more than twice the speed of pre-war aircraft. Increased power also made larger aircraft possible.
At the same time, the war was bad for commercial aviation in several respects. It focused all design and production efforts on building military aircraft. In the public's mind, flying became associated with bombing runs, surveillance and aerial dogfights. In addition, there was such a large surplus of planes at the end of the war that the demand for new production was almost nonexistent for several years - and many aircraft builders went bankrupt. Some European countries, such as Great Britain and France, nurtured commercial aviation by starting air service over the English Channel. However, nothing similar occurred in the United States, where there were no such natural obstacles isolating major cities and where railroads could transport people almost as fast as an airplane, and in considerably more comfort. The salvation of the U.S. commercial aviation industry following World War I was a government program, but one that had nothing to do with the transportation of people.
By 1917, the U.S. government felt enough progress had been made in the development of planes to warrant something totally new - the transport of mail by air. That year, Congress appropriated $100,000 for an experimental airmail service to be conducted jointly by the Army and the Post Office between Washington and New York, with an intermediate stop in Philadelphia. The first flight left Belmont Park, Long Island for Philadelphia on May 14, 1918 and the next day continued on to Washington, where it was met by President Woodrow Wilson.
With a large number of war-surplus aircraft in hand, the Post Office set its sights on a far more ambitious goal - transcontinental air service. It opened the first segment, between Chicago and Cleveland, on May 15, 1919 and completed the air route on September 8, 1920, when the most difficult part of the route, the Rocky Mountains, was spanned. Airplanes still could not fly at night when the service first began, so the mail was handed off to trains at the end of each day. Nonetheless, by using airplanes the Post Office was able to shave 22 hours off coast-to-coast mail deliveries.
In 1921, the Army deployed rotating beacons in a line between Columbus and Dayton, Ohio, a distance of about 80 miles. The beacons, visible to pilots at 10-second intervals, made it possible to fly the route at night.
The Post Office took over the operation of the guidance system the following year, and by the end of 1923, constructed similar beacons between Chicago and Cheyenne, Wyoming, a line later extended coast-to-coast at a cost of $550,000. Mail then could be delivered across the continent in as little as 29 hours eastbound and 34 hours westbound (prevailing winds from west to east accounted for the difference), at least two days less than it took by train.
The Contract Air Mail Act of 1925
By the mid-1920s, the Post Office mail fleet was flying 2.5 million miles and delivering 14 million letters annually. However, the government had no intention of continuing airmail service on its own. Traditionally, the Post Office had used private companies for the transportation of mail. So, once the feasibility of airmail was firmly established and airline facilities were in place, the government moved to transfer airmail service to the private sector, by way of competitive bids. The legislative authority for the move was the Contract Air Mail Act of 1925, commonly referred to as the Kelly Act after its chief sponsor, Rep. Clyde Kelly of Pennsylvania. This was the first major step toward the creation of a private U.S. airline industry. Winners of the initial five contracts were National Air Transport (owned by the Curtiss Aeroplane Co.), Varney Air Lines, Western Air Express, Colonial Air Transport and Robertson Aircraft Corporation. National and Varney would later become important parts of United Air Lines (originally a joint venture of the Boeing Airplane Company and Pratt & Whitney). Western would merge with Transcontinental Air Transport (TAT), another Curtiss subsidiary, to form Transcontinental and Western Air (TWA). Robertson would become part of the Universal Aviation Corporation, which in turn would merge with Colonial, Southern Air Transport and others, to form American Airways, predecessor of American Airlines. Juan Trippe, one of the original partners in Colonial, later pioneered international air travel with Pan Am - a carrier he founded in 1927 to transport mail between Key West, Florida, and Havana, Cuba. Pitcairn Aviation, yet another Curtiss subsidiary that got its start transporting mail, would become Eastern Air Transport, predecessor of Eastern Air Lines.
The Morrow Board
The same year Congress passed the Contract Air Mail Act, President Calvin Coolidge appointed a board to recommend a national aviation policy (a much-sought-after goal of then Secretary of Commerce Herbert Hoover). Dwight Morrow, a senior partner in J.P. Morgan's bank, and later the father-in-law of Charles Lindbergh, was named chairman. The board heard testimony from 99 people, and on November 30, 1925, submitted its report to President Coolidge. The report was wide-ranging, but its key recommendation was that the government should set standards for civil aviation and that the standards should be set outside of the military.
The Air Commerce Act of 1926
Congress adopted the recommendations of the Morrow Board almost to the letter in the Air Commerce Act of 1926. The legislation authorized the Secretary of Commerce to designate air routes, to develop air navigation systems, to license pilots and aircraft, and to investigate accidents. The act brought the government into commercial aviation as regulator of the private airlines spawned by the Kelly Act of the previous year.
Congress also adopted the board's recommendation for airmail contracting, by amending the Kelly Act to change the method of compensation for airmail services. Instead of paying carriers a percentage of the postage paid, the government would pay them according to the weight of the mail. This simplified payments, and proved highly advantageous to the carriers, which collected $48 million from the government for the carriage of mail between 1926 and 1931.
Ford's Tin Goose
Henry Ford, the automobile manufacturer, was also among the early successful bidders for airmail contracts, winning the right, in 1925, to carry mail from Chicago to Detroit and Cleveland aboard planes his company already was using to transport spare parts for his automobile assembly plants. More importantly, he jumped into aircraft manufacturing, and in 1927, produced the Ford Trimotor, commonly referred to as the Tin Goose. It was one of the first all-metal planes, made of a new material, duralumin, which was almost as light as aluminum but twice as strong. It also was the first plane designed primarily to carry passengers rather than mail. The Ford Trimotor had 12 passenger seats; a cabin high enough for a passenger to walk down the aisle without stooping; and room for a "stewardess," or flight attendant, the first of whom were nurses, hired by United in 1930 to serve meals and assist airsick passengers. The Tin Goose's three engines made it possible to fly higher and faster (up to 130 miles per hour), and its sturdy appearance, combined with the Ford name, had a reassuring effect on the public's perception of flying. However, it was another event, in 1927, that brought unprecedented public attention to aviation and helped secure the industry's future as a major mode of transportation.
At 7:52 a.m. on May 20, 1927, a young pilot named Charles Lindbergh set out on an historic flight across the Atlantic Ocean, from New York to Paris. It was the first trans-Atlantic non-stop flight in an airplane, and its effect on both Lindbergh and aviation was enormous. Lindbergh became an instant American hero. Aviation became a more established industry, attracting millions of private investment dollars almost overnight, as well as the support of millions of Americans.
The pilot who sparked all of this attention had dropped out of engineering school at the University of Wisconsin to learn how to fly. He became a barnstormer, doing aerial shows across the country, and eventually joined the Robertson Aircraft Corporation, to transport mail between St. Louis and Chicago.
In planning his trans-Atlantic voyage, Lindbergh daringly decided to fly by himself, without a navigator, so he could carry more fuel. His plane, the Spirit of St. Louis, was slightly less than 28 feet in length, with a wingspan of 46 feet. It carried 450 gallons of gasoline, which comprised half its takeoff weight. There was too little room in the cramped cockpit for navigating by the stars, so Lindbergh flew by dead reckoning. He divided maps from his local library into thirty-three 100-mile segments, noting the heading he would follow as he flew each segment. When he first sighted the coast of Ireland, he was almost exactly on the route he had plotted, and he landed several hours later, with 80 gallons of fuel to spare.
Lindbergh's greatest enemy on his journey was fatigue. The trip took an exhausting 33 hours, 29 minutes and 30 seconds, but he managed to keep awake by sticking his head out the window to inhale cold air, by holding his eyelids open, and by constantly reminding himself that if he fell asleep he would perish. In addition, he had a slight instability built into his airplane that helped keep him focused and awake.
Lindbergh landed at Le Bourget Field, outside of Paris, at 10:24 p.m. Paris time on May 21. Word of his flight preceded him and a large crowd of Parisians rushed out to the airfield to see him and his little plane. There was no question about the magnitude of what he had accomplished. The Air Age had arrived.
The Watres Act and the Spoils Conference
In 1930, Postmaster General Walter Brown pushed for legislation that would have another major impact on the development of commercial aviation. Known as the Watres Act (after one of its chief sponsors, Rep. Laurence H. Watres of Pennsylvania), it authorized the Post Office to enter into longer-term contracts for airmail, with rates based on space or volume, rather than weight. In addition, the act authorized the Post Office to consolidate airmail routes, where it was in the national interest to do so. Brown believed the changes would promote larger, stronger airlines, as well as more coast-to-coast and nighttime service.
Immediately after Congress approved the act, Brown held a series of meetings in Washington to discuss the new contracts. The meetings were later dubbed the Spoils Conference because Brown gave them little publicity and directly invited only a handful of people from the larger airlines. He designated three transcontinental mail routes and made it clear that he wanted only one company operating each service rather than a number of small airlines handing the mail off to one another. His actions brought political trouble that resulted in major changes to the system two years later.
Scandal and the Air Mail Act of 1934
Following the Democratic landslide in the election of 1932, some of the smaller airlines began complaining to news reporters and politicians that they had been unfairly denied airmail contracts by Brown. One reporter discovered that a major contract had been awarded to an airline whose bid was three times higher than a rival bid from a smaller airline. Congressional hearings followed, chaired by Sen. Hugo Black of Alabama, and by 1934 the scandal had reached such proportions as to prompt President Franklin Roosevelt to cancel all mail contracts and turn mail deliveries over to the Army.
The decision was a mistake. The Army pilots were unfamiliar with the mail routes, and the weather at the time they took over the deliveries, February 1934, was terrible. There were a number of accidents as the pilots flew practice runs and began carrying the mail, leading to newspaper headlines that forced President Roosevelt to retreat from his plan only a month after he had turned the mail over to the Army.
By means of the Air Mail Act of 1934, the government once again returned airmail transportation to the private sector, but it did so under a new set of rules that would have a significant impact on the industry. Bidding was structured to be more competitive, and former contract holders were not allowed to bid at all, so many companies were reorganized. The result was a more even distribution of the government's mail business and lower mail rates that forced airlines and aircraft manufacturers to pay more attention to the development of the passenger side of the business.
In another major change, the government forced the dismantling of the vertical holding companies common up to that time in the industry, sending aircraft manufacturers and airline operators (most notably Boeing, Pratt & Whitney, and United Air Lines) their separate ways. The entire industry was now reorganized and refocused.
For the airlines to attract passengers away from the railroads, they needed both larger and faster airplanes. They also needed safer airplanes. Accidents, such as the one in 1931 that killed Notre Dame Football Coach Knute Rockne along with six others, kept people from flying.
Aircraft manufacturers responded to the challenge. There were so many improvements to aircraft in the 1930s that many believe it was the most innovative period in aviation history. Air-cooled engines replaced water-cooled engines, reducing weight and making larger and faster planes possible. Cockpit instruments also improved, with better altimeters, airspeed indicators, rate-of-climb indicators, compasses, and the introduction of the artificial horizon, which showed pilots the attitude of the aircraft relative to the ground - important for flying in reduced visibility.
Another development of enormous importance to aviation was radio. Aviation and radio developed almost in lock step. Marconi sent his first message across the Atlantic on the airwaves just two years before the Wright Brothers' first flight at Kitty Hawk. By World War I, some pilots were taking radios up in the air with them so they could communicate with people on the ground. The airlines followed suit after the war, using radio to transmit weather information from the ground to their pilots, so they could avoid storms.
An even more significant development, however, was the realization that radio could be used as an aid to navigation when visibility was poor and visual navigation aids, such as beacons, were useless. Once technical problems were worked out, the Department of Commerce constructed 83 radio beacons across the country. They became fully operational in 1932, automatically transmitting directional beams, or tracks, that pilots could follow to their destination. Marker beacons came next, allowing pilots to locate airports in poor visibility. The first air traffic control tower was established in 1935 at what is now Newark International Airport in New Jersey.
The First Modern Airliners
Boeing built what generally is considered the first modern passenger airliner, the Boeing 247. It was unveiled in 1933, and United Air Lines promptly bought 60 of them. Based on a low-wing, twin-engine bomber with retractable landing gear built for the military, the 247 accommodated 10 passengers and cruised at 155 miles per hour. Its cabin was insulated to reduce engine noise levels inside the plane, and it featured such amenities as upholstered seats and a hot water heater to make flying more comfortable for passengers. Eventually, Boeing also gave the 247 variable-pitch propellers, which reduced takeoff distances, increased the rate of climb, and boosted cruising speeds.
Not to be outdone by United, TWA went searching for an alternative to the 247 and eventually found what it wanted from the Douglas Aircraft Company. Its DC-1 incorporated Boeing's innovations and improved upon many of them. The DC-1 had more powerful engines and accommodations for two more passengers than did the 247. More importantly, the airframe was designed so that the skin of the aircraft bore most of the stress on the plane during flight. There was no interior skeleton of metal spars, thus giving passengers more room than they had in the 247. The DC-1 also was easier to fly. It was equipped with the first automatic pilot and the first efficient wing flaps, for added lift during takeoff. However, for all its advancements, only one DC-1 was ever built. Douglas decided almost immediately to alter its design, adding 18 inches to its length so it could accommodate two more passengers. The new, longer version was called the DC-2, and it was a big success, but the best was still to come.
Called the plane that changed the world, the DC-3 was the first aircraft to enable airlines to make money carrying passengers. As a result, it quickly became the dominant aircraft in the United States, following its debut in 1936 with American Airlines (which played a key role in its design).
The DC-3 had 50 percent greater passenger capacity than the DC-2 (21 seats versus 14), yet cost only ten percent more to operate. It also was considered a safer plane, built of an aluminum alloy stronger than materials previously used in aircraft construction. It had more powerful engines (1,000 horsepower versus 710 horsepower for the DC-2), and it could travel coast to coast in only 16 hours - a fast trip for that time.
Another important improvement was the use of a hydraulic pump to lower and raise the landing gear. This freed pilots from having to crank the gear up and down during takeoffs and landings. For greater passenger comfort, the DC-3 had a noise-deadening plastic insulation, and seats set in rubber to minimize vibrations. It was a fantastically popular airplane, and it helped attract many new travelers to flying.
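The DC-3's economics can be checked with a little arithmetic: if seats rise 50 percent while operating cost rises only 10 percent, the cost per seat falls by roughly a quarter. The sketch below uses an arbitrary baseline cost; only the ratios, which come from the figures above, matter.

```python
# Hypothetical baseline cost; only the ratios (from the text) matter.
dc2_seats, dc3_seats = 14, 21
dc2_cost = 100.0                # arbitrary baseline operating cost per trip
dc3_cost = dc2_cost * 1.10     # "only ten percent more to operate"

per_seat_dc2 = dc2_cost / dc2_seats
per_seat_dc3 = dc3_cost / dc3_seats
reduction = 1 - per_seat_dc3 / per_seat_dc2
print(f"cost per seat falls by {reduction:.0%}")  # about 27%
```

That per-seat saving, not raw speed, is why the DC-3 was the first airliner to make money carrying passengers alone.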
Although planes such as the Boeing 247 and the DC-3 represented significant advances in aircraft design, they had a major drawback: they could fly no higher than 10,000 feet, because people became dizzy and even fainted due to the reduced levels of oxygen at higher altitudes.
The airlines wanted to fly higher, to get above the air turbulence and storms common at lower altitudes. Motion sickness was a problem for many airline passengers, and an inhibiting factor to the industry's growth.
The breakthrough came at Boeing with the Stratoliner, a derivative of the B-17 bomber, introduced in 1940 and first flown by TWA. It was the first pressurized aircraft, meaning that air was pumped into the aircraft as it gained altitude to maintain an atmosphere inside the cabin similar to the atmosphere that occurs naturally at lower altitudes. With its regulated air compressor, the 33-seat Stratoliner could fly as high as 20,000 feet and reach speeds of 200 miles per hour.
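To see why pressurization mattered, the standard-atmosphere barometric formula gives the ambient pressure at a given altitude; at 20,000 feet the outside air pressure is less than half its sea-level value. This is a sketch using the International Standard Atmosphere troposphere model, not figures from the text.

```python
import math  # not strictly needed here, but handy for related calculations

def isa_pressure_pa(altitude_m):
    """Ambient pressure (Pa) in the ISA troposphere model (valid below ~11 km)."""
    return 101325.0 * (1 - 2.25577e-5 * altitude_m) ** 5.25588

sea_level = isa_pressure_pa(0)
cruise = isa_pressure_pa(20000 * 0.3048)   # 20,000 ft converted to meters
print(cruise / sea_level)  # roughly 0.46 -- under half of sea-level pressure
```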
The Civil Aeronautics Act of 1938
Government decisions continued to prove as important to aviation's future as technological breakthroughs, and one of the most important aviation bills ever enacted by Congress was the Civil Aeronautics Act of 1938. Until that time, numerous government agencies and departments had a hand in aviation policy. Airlines sometimes were pushed and pulled in several directions, and there was no central agency working for the long-term development of the industry. All the airlines had been losing money since the postal reforms of 1934 significantly reduced the amount they were paid for carrying the mail.
The airlines wanted more rational government regulation through an independent agency, and the Civil Aeronautics Act gave them what they needed. It created the Civil Aeronautics Authority (CAA) and gave the new agency power to regulate airline fares, airmail rates, interline agreements, mergers, and routes. Its mission was to preserve order in the industry, holding rates to reasonable levels while, at the same time, nurturing the still financially shaky airline industry, thereby encouraging the development of commercial air transportation.
Congress created a separate agency - the Air Safety Board - to investigate accidents. In 1940, however, President Roosevelt convinced Congress to transfer the accident investigation function to the CAA, which was then renamed the Civil Aeronautics Board (CAB). These moves, coupled with the tremendous progress made on the technological side, put the industry on the road to success.
World War II
Aviation had an enormous impact on the course of World War II and the war had just as significant an impact on aviation. There were fewer than 300 air transport aircraft in the United States when Hitler marched into Poland in 1939. By the end of the war, U.S. aircraft manufacturers were producing 50,000 planes a year.
Most of the planes, of course, were fighters and bombers, but the importance of air transports to the war effort quickly became apparent as well. Throughout the war, the airlines provided much needed airlift to keep troops and supplies moving, to the front and throughout the production chain back home. For the first time in their history, the airlines had far more business - for passengers as well as freight - than they could handle. Many of them also had opportunities to pioneer new routes, gaining an exposure that would give them a decidedly broader outlook at war's end.
While there were numerous advances in U.S. aircraft design during the war that enabled planes to go faster, higher, and farther than ever before, mass production was the chief goal of the United States. The major innovations of the wartime period - radar and jet engines - occurred in Europe.
The Jet Engine
Isaac Newton was the first to theorize, in the 17th century, that a rearward-channeled explosion could propel a machine forward at a great rate of speed. However, no one found a practical application for the theory until Frank Whittle, a British pilot, designed the first jet engine in 1930. Even then, widespread skepticism about the commercial viability of a jet prevented Whittle's design from being tested for several years.
The Germans were the first to build and test a jet aircraft. Based on a design by Hans von Ohain, a student whose work was independent of Whittle's, it flew in 1939, although not as well as the Germans had hoped. It would take another five years for German scientists to perfect the design, by which time it was, fortunately, too late to affect the outcome of the war.
Whittle also improved his jet engine during the war, and in 1942 he shipped an engine prototype to General Electric in the United States. America's first jet plane - the Bell P-59 - was built the following year.
Another technological development with a much greater impact on the war's outcome (and later on commercial aviation) was radar. British scientists had been working on a device that could give them early warning of approaching enemy aircraft even before the war began, and by 1940 Britain had a line of radar transceivers along its east coast that could detect German aircraft the moment they took off from the Continent. British scientists also perfected the cathode ray oscilloscope, which produced map-type outlines of surrounding countryside and showed aircraft as a pulsing light. Americans, meanwhile, found a way to distinguish between enemy aircraft and allied aircraft by installing transponders aboard the latter that signaled their identity to radar operators.
Dawn of the Jet Age
Aviation was poised to advance rapidly following the war, in large part because of the development of jets, but there still were significant problems to overcome. In 1952, a 36-seat British-made jet, the Comet, flew from London to Johannesburg, South Africa, at speeds as high as 500 miles per hour. Two years later, the Comet's career ended abruptly following two back-to-back accidents in which the fuselage burst apart during flight - the result of metal fatigue.
The Cold War between the Soviet Union and the United States following World War II helped secure the funding needed to solve such problems and advance the jet's development. Most of the breakthroughs involved military aircraft and were later applied to the commercial sector. For example, Boeing employed a swept-back wing design for its B-47 and B-52 bombers to reduce drag and increase speed; later, the design was incorporated into commercial jets, making them faster and thus more attractive to passengers.

The best example of military-to-civilian technology transfer was the jet tanker Boeing designed for the Air Force to refuel bombers in flight. The tanker, the KC-135, was a huge success as a military plane, but even more successful when revamped and introduced in 1958 as the first U.S. passenger jet, the Boeing 707. With a length of 125 feet and four engines with 17,000 pounds of thrust each, the 707 could carry up to 181 passengers and travel at speeds of 550 miles per hour. Its engines proved more reliable than piston engines, producing less vibration, putting less stress on the plane's airframe, and reducing maintenance expenses. They also burned kerosene, which cost half as much as the high-octane gasoline used in more traditional planes. With the 707, first ordered and operated by Pan Am, all questions about the commercial feasibility of jets were answered. The Jet Age had arrived, and other airlines soon were lining up to buy the new aircraft.
The Federal Aviation Act of 1958
Following World War II, air travel soared, but with the industry's growth came new problems. In 1956 two aircraft collided over the Grand Canyon, killing 128 people. The skies were getting too crowded for existing systems of aircraft separation, and Congress responded by passing the Federal Aviation Act of 1958.
The legislation created a new safety regulatory agency, the Federal Aviation Agency, later called the Federal Aviation Administration (FAA) when Congress created the Department of Transportation (DOT) in 1967. The agency was charged with establishing and running a broad air traffic control system, to maintain safe separation of all commercial aircraft through all phases of flight. In addition, it assumed jurisdiction over all other aviation safety matters, such as the certification of aircraft designs, and airline training and maintenance programs. The Civil Aeronautics Board retained jurisdiction over economic matters, such as airline routes and rates.
Wide-bodies and Supersonics
1969 marked the debut of another revolutionary aircraft, the Boeing 747, which, again, Pan Am was the first to purchase and fly in commercial service. It was the first wide-body jet, with two aisles, a distinctive upper deck over the front section of the fuselage, and four engines. With seating for as many as 450 passengers, it was twice as big as any other Boeing jet and 80 percent bigger than the largest jet up until that time, the DC-8.
Recognizing the economies of scale to be gained from larger jets, other aircraft manufacturers quickly followed suit. Douglas built its first wide-body, the DC-10, in 1970, and only a month later, Lockheed flew its contender in the wide-body market, the L-1011. Both of these jets had three engines (one under each wing and one on the tail) and were smaller than the 747, seating about 250 passengers.
During the same period, efforts were underway in both the United States and Europe to build a supersonic commercial aircraft. The Soviet Union was the first to succeed, testing the Tupolev Tu-144 in December 1968. A consortium of West European aircraft manufacturers first flew the Concorde two months later and eventually produced a number of those fast, but small, jets for commercial service. U.S. efforts to produce a supersonic passenger jet, on the other hand, stalled in 1971 due to public concern about its expense and the sonic boom produced by such aircraft.
© Avjobs, Inc. 1988-2013
All Rights Reserved
Henry Gray (1825–1861). Anatomy of the Human Body. 1918.
as age advances, and is more abundant in males than in females. As a rule, the posterior border of the lung is darker than the anterior.
The right lung usually weighs about 625 gm., the left 567 gm., but much variation is met with according to the amount of blood or serous fluid they may contain. The lungs are heavier in the male than in the female, their proportion to the body being, in the former, as 1 to 37, in the latter as 1 to 43.
Each lung is conical in shape, and presents for examination an apex, a base, three borders, and two surfaces.
The apex (apex pulmonis) is rounded, and extends into the root of the neck, reaching from 2.5 to 4 cm. above the level of the sternal end of the first rib. A sulcus produced by the subclavian artery as it curves in front of the pleura runs upward and lateralward immediately below the apex.
The base (basis pulmonis) is broad, concave, and rests upon the convex surface of the diaphragm, which separates the right lung from the right lobe of the liver, and the left lung from the left lobe of the liver, the stomach, and the spleen. Since the diaphragm extends higher on the right than on the left side, the concavity on the base of the right lung is deeper than that on the left. Laterally and behind, the base is bounded by a thin, sharp margin which projects for some distance into the phrenicocostal sinus of the pleura, between the lower ribs and the costal attachment of the diaphragm. The base of the lung descends during inspiration and ascends during expiration.
FIG. 971. Pulmonary vessels, seen in a dorsal view of the heart and lungs. The lungs have been pulled away from the median line, and a part of the right lung has been cut away to display the air-ducts and bloodvessels. (Testut.)
Surfaces. The costal surface (facies costalis; external or thoracic surface) is smooth, convex, of considerable extent, and corresponds to the form of the cavity of the chest, being deeper behind than in front. It is in contact with the costal pleura, and presents, in specimens which have been hardened in situ, slight grooves corresponding with the overlying ribs.
A service of the U.S. National Library of Medicine®
Reviewed December 2006
What is the official name of the GSS gene?
The official name of this gene is “glutathione synthetase.”
GSS is the gene's official symbol; the gene is also known by several other names.
What is the normal function of the GSS gene?
The GSS gene provides instructions for making an enzyme called glutathione synthetase. Glutathione synthetase participates in a process called the gamma-glutamyl cycle. The gamma-glutamyl cycle is a sequence of chemical reactions that takes place in most of the body's cells. These reactions are necessary for the production of glutathione, a small molecule made of three protein building blocks (amino acids). Glutathione protects cells from damage caused by unstable oxygen-containing molecules, which are byproducts of energy production. Glutathione is called an antioxidant because of its role in protecting cells from the damaging effects of these unstable molecules. Glutathione also helps process medications and cancer-causing compounds (carcinogens), and helps build DNA, proteins, and other important cellular components.
Where is the GSS gene located?
Cytogenetic Location: 20q11.2
Molecular Location on chromosome 20: base pairs 33,516,235 to 33,543,600
The GSS gene is located on the long (q) arm of chromosome 20 at position 11.2.
More precisely, the GSS gene is located from base pair 33,516,235 to base pair 33,543,600 on chromosome 20.
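The coordinates quoted above fix the gene's extent; a quick calculation gives the span in base pairs.

```python
# Base-pair coordinates taken from the text above.
start, end = 33_516_235, 33_543_600
span = end - start   # 27,365 bp (one more if both endpoints are counted inclusively)
print(f"GSS spans about {span/1000:.1f} kb of chromosome 20")
```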
|
<urn:uuid:4255b51d-9bae-45ad-aa8c-25d34dc59079>
|
CC-MAIN-2013-20
|
http://www.ghr.nlm.nih.gov/gene/GSS
|
2013-06-19T19:06:16Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.908802
| 623
|
Title: An ABC of children's names (O–P)
Author: Ewen, Doris; Ewen, Mary
Publisher: Hodder & Stoughton
Publisher Location: London, England
Publication Date: 1912
Image Production Process: Relief prints
Notes: Illustrated with color printed reliefs. An accordion book.

This book illustrates the letters of the alphabet with pictures and verses that point out good and bad behavior in children, including depictions of children being obedient, useful, virtuous, and idle.

"O" is for "Obedient Oliver seated on his stool, learns his lessons carefully and then sets out for school." The illustration depicts a boy sitting on a stool with a book in his hands.

"P" is for "Pert Phoebe in her petticoat, refusing to be dressed. Her mammy had to smack her but she did it for the best." The illustration depicts a little girl standing in her white petticoat.

Subjects (LCSH): Alphabet--Juvenile literature; Names, Personal--Juvenile literature; Toy and movable books; Alphabet books; Manners and social etiquette
Digital Collection: Children's Historical Literature Collection
Digital ID Number: CHL0908
Repository: University of Washington Libraries, Special Collections Division
Repository Collection: Children's Historical Literature Collection PE1155.A434 1912
Physical Description: leaves: illustrated; 16.5 x 13 cm.
Digital Reproduction Information: Scanned from original book at 400-600 dpi in TIFF format using a ScanMaker 6800, resized and enhanced using Adobe Photoshop, and imported as JPEG2000 using CONTENTdm's JPEG2000 Extension. 2008.
Exhibit Checklist: Exhibit checklist L.1
The near-universal hypocrisy in what Americans do in private versus what they say in public about schooling is not an isolated example. Instead, it reflects the currently widespread assumption that there should be two completely divorced realms of thought:
- In the lower sphere of private life, we figure out how to make mundane decisions like where we'll buy a house using all the information and intuition available to us, such as our awareness of racial differences in academic performance and crime rates.
- In the higher sphere of public discourse, however, where public policy is discussed, much of this useful knowledge is simply off-limits. As Larry Summers, then the president of Harvard, discovered, there is much in the human sciences of which we are never supposed to speak.
This bifurcated mental model is strikingly similar to the dysfunctional conceptual map Renaissance natural philosophers, such as Galileo, inherited from the Ancient Greeks. According to Aristotle's still-dominant cosmology, there was a fundamental divide between the grubby "sublunary sphere" where we humans dwelled, and the higher celestial realm -- where, by definition, perfection reigned.
The sun and the planets revolved around the Earth embedded in crystalline spheres, the circle being the most ideal of all shapes. To make the observed data fit the presumption of circularity, the Alexandrine astronomer Ptolemy elaborated a baffling system of "epicycles," with smaller spheres embedded within larger spheres.
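The epicycle construction can be sketched numerically: a point rides a small circle (the epicycle) whose center rides a larger circle (the deferent), so its distance from the center oscillates within a fixed band. This is an illustrative sketch with made-up radii and rates, not Ptolemy's actual parameters.

```python
import math

def epicycle_position(t, R=1.0, omega=1.0, r=0.25, Omega=7.0):
    """Point on an epicycle of radius r turning at rate Omega, whose center
    moves on a deferent circle of radius R at rate omega (illustrative values)."""
    x = R * math.cos(omega * t) + r * math.cos(Omega * t)
    y = R * math.sin(omega * t) + r * math.sin(Omega * t)
    return x, y

# The combined motion stays in an annulus: never closer to the center than
# R - r, never farther than R + r.
distances = [math.hypot(*epicycle_position(t / 100)) for t in range(700)]
print(min(distances), max(distances))  # near 0.75 and 1.25
```

Stacking more such circles lets the model match almost any observed motion, which is exactly what made the system so hard to falsify.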
The Ptolemaic system is strangely reminiscent of the various Rube Goldbergian explanations popular today to explain away the racial test score gap. One example: Claude Steele's theory of "Stereotype Threat." Steele hypothesizes that stereotypes make minorities so scared of scoring badly on tests that their discomfort makes them score exactly as badly as the stereotype predicted they would! It's almost as unfalsifiable a theory as Ptolemy's was for 1500 years.
In the conventional wisdom of 1600, the moon, like all heavenly bodies, had to be a perfect sphere. It just had to be, even though it looks imperfect to your lying eyes:
"The dark spots on the moon that been visible to man throughout the ages were explained away as parts of the moon that absorbed and emitted light differently than other parts -- the surface itself was perfectly smooth."
When Galileo pointed his new telescope at the moon in 1609, however, he observed changing shadows that could only be cast by mountains. He announced:
"The Moon certainly does not possess a smooth and polished surface, but one rough and uneven, and just like the face of the Earth itself, is everywhere full of vast protuberances, deep chasms, and sinuosities."
This, and much other new evidence discovered with his telescope, caused Galileo to doubt that the celestial and sublunary spheres were fundamentally different. Adopting the heliocentric theory of the solar system, Galileo began to develop a theory of mechanics (one eventually brought to near-perfection by Newton) that, unlike Aristotle's, would work for both the heavens and the earth.
|
The Battle of Naseby, fought in the open fields between the villages of Naseby, Sibbertoft and Clipston in Northamptonshire, was the decisive battle of the English Civil War. It started at about 9 o'clock in the morning on 14 June 1645 and lasted about three hours. The Royalist army numbered about 12,000 men, the Parliamentarians 15,000. The Royalists were routed and only about 4,000 escaped the field, most of whom were either cavalry or senior officers.
On the anniversary of the battle, two ghostly re-enactments are said to take place: a convoy of grim-faced soldiers has been seen pushing carts down an old drovers' road, and the entire battle has been seen taking place in the sky above the site, complete with the sounds of men screaming and cannon firing. For the first century or so after the battle, villagers would come out and sit on the nearby hills to watch it.
Copyright © 2005-2013 Eleventh Floor Ltd. All Rights Reserved.
|
Bezoar stones were first documented in a Western publication in Albertus Seba's four-volume catalogue, Cabinet of Natural Curiosities (Locupletissimi Rerum Naturalium Thesauri Accurata Descriptio).
Albertus Seba (1665-1736) was a Dutch pharmacist, zoologist and natural specimens collector. The thesaurus shows illustrations of his entire collections – from strange and exotic plants to snakes, frogs, crocodiles, shellfish, corals, insects, butterflies and more, as well as fantastic beasts, such as a hydra and a dragon.
In a modern reprint of the thesaurus, there is a mention in the notes section:
“Since ancient times, stones taken from particular animals were considered to possess magical and medical powers. In Seba’s day bezoars were extraordinarily popular. These stones formed from hairs that had accumulated and gummed together in the stomachs of ruminants. In a broader sense other stones taken from animals are likewise termed bezoars.”
|
Teach children planning and problem solving not programming
When I first heard that children in secondary education (11–16-year-olds) here in the UK were to be taught programming, I thought it was another example of a poorly-thought-through fad ruining our education system. Schools already have enough trouble finding good maths and science teachers, and the average school leaver's knowledge of these subjects is not that great; now resources and time are being diverted to a specialist subject for which it is hard to find good teachers. After talking to a teacher about his experience teaching Scratch to 11–13-year-olds, I realised he was not teaching programming but teaching how to think through problems, breaking them down into subcomponents and covering all possibilities; a very worthwhile subject to teach.
As I see it the ‘writing code’ subject needs to be positioned as the teaching of planning and problem solving skills (ppss, p2s2, a suitable acronym is needed) rather than programming. Based on a few short conversations with those involved in teaching, the following are a few points I would make:
- Stay with one language (Scratch looks excellent).
- The more practice students get with a language the more fluent they become, giving them more time to spend solving the problem rather than figuring out how to use the language.
- Switching to a more ‘serious’ language because it is similar to what professional programmers use is a failure to understand the purpose of what is being taught and a misunderstanding of why professionals still use ‘text’ based languages (because computer input has historically been via a keyboard and not a touch sensitive screen; I expect professional programmers will slowly migrate over to touch screen programming languages).
- Give students large problems to solve (large as in requiring lots of code). Small programs are easy to hold in your head, where the size of small depends on intellectual capacity; the small program level of coding is all about logic. Large programs cannot be held in the head and this level of coding is all about structure and narrative (there are people who can hold very large programs in their head and never learn the importance of breaking things down so others can understand them), logic does not really appear at this level. Large problems can be revisited six months later; there is no better way of teaching the importance of making things easy for others to understand than getting a student to modify one of their own programs a long time after they originally wrote it (I’m sure many will start out denying ever having written the horrible code handed back to them).
- Problems should not be algorithms. Yes, technically all programs are algorithms but most are not mathematical algorithms in the sense that, say, sorting and searching are, real life problems are messy things that involve lots of simple checks for various conditions and ad-hoc approaches to solving a problem. Teachers should resist mapping computing problems to the Scratch domain, e.g., tree walking algorithms mapped to walking the branches of a graphical tree or visiting all parts of a maze.
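A hypothetical illustration of the decomposition skill described above: a messy, real-life problem (whether a library book can be renewed) broken into small, separately checkable pieces rather than a textbook algorithm. The scenario and rules are invented for illustration.

```python
# Hypothetical rules; the point is the decomposition, not the rules themselves.

def is_overdue(due_day, today):
    return today > due_day

def at_renewal_limit(times_renewed, limit=3):
    return times_renewed >= limit

def reserved_by_someone_else(reservations):
    return len(reservations) > 0

def can_renew(due_day, today, times_renewed, reservations):
    # Each real-world condition is a small, separately testable check.
    if is_overdue(due_day, today):
        return False
    if at_renewal_limit(times_renewed):
        return False
    if reserved_by_someone_else(reservations):
        return False
    return True

print(can_renew(due_day=10, today=5, times_renewed=1, reservations=[]))  # True
```

The same structure maps directly onto Scratch blocks, and each check can be discussed, tested, and revised by a student on its own.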
|
For more information about National Park Service air resources, please visit http://www.nature.nps.gov/air/.
Air Quality at Yellowstone National Park
What’s in the Air?
Most people who visit national parks expect clean air and clear views. However, Yellowstone National Park (NP) is downwind of significant pollutant sources, including power plants, agricultural areas, industry, and oil and gas development. Yellowstone NP, America's first national park and home to Old Faithful, is located in Wyoming, Montana, and Idaho. Here, even emissions from over-snow vehicles affect winter air quality at the park. Air pollution can harm the park's natural and scenic resources such as surface waters, vegetation, and visibility.
How is air pollution affecting Yellowstone National Park?
- Levels of nitrogen in wet deposition are increasing and may harm higher elevation lakes and sensitive vegetation in the park. more »
- Regional manmade sources, and some of the park’s natural geothermal features, emit harmful contaminants and contribute to mercury deposition into park ecosystems. more »
- Ground-level ozone concentrations are relatively low; however, plant species known to be sensitive to ozone are found in the park. more »
- Fine particles of air pollution sometimes cause haze in the park, affecting how well and how far visitors can see vistas and landmarks. more »
What is the National Park Service doing about air pollution at the park?
- Monitoring nitrogen, sulfur, mercury, ozone, fine particles, and haze to assess status and trends. more »
- Evaluating the impacts of air pollution on park ecosystems. more »
- Working with federal, state, and local agencies, industry, and public interest groups to develop strategies to reduce air pollution and protect and restore park resources through its Winter Use Plan and other efforts. The NPS also reviews plans for development that may increase air pollution in national parks. more »
- Reducing emission contributions through landmark recycling efforts, using alternative fuels and hybrid vehicles, and developing an action plan to reduce emissions in all operations across the entire Greater Yellowstone ecosystem. more »
Pollutants including nitrogen, sulfur, mercury, ozone, and fine particles affect resources such as forests, streams, wildlife, and scenic vistas. Find out how on our Yellowstone NP Air Pollution Impacts web page.
Studies and monitoring help the NPS understand the environmental impacts of air pollution. Access air quality data and see what is happening with Studies and Monitoring at Yellowstone NP.
Last Updated: August 08, 2011
|
<urn:uuid:90651eed-5f6c-4806-944c-01f0ee2c9eee>
|
CC-MAIN-2013-20
|
http://www.nature.nps.gov/air/Permits/aris/yell/index.cfm
|
2013-05-23T04:56:03Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.888281
| 522
|
Progressive Retinal Atrophy (PRA)
Progressive retinal atrophy, or PRA as it is frequently termed, is a hereditary eye disorder. It is inherited as a simple autosomal recessive in most breeds. PRA has been recognized in humans, in most purebred dogs and some breeds of cats.
PRA is a disease of the retina, a tissue that is located inside the back of the eye. The retina contains specialized cells called photoreceptors. These absorb the light focused on them by the eye's lens, and convert the light into electrical nerve signals. The nerve signals are passed by the optic nerve to the brain where they are perceived as vision. In PRA in Abyssinian and Somali cats, the photoreceptors degenerate gradually.
Early in the disease, night vision deteriorates and the cat has increasing trouble adjusting its vision to dim light. Later in the process, daytime vision also fails. As the retina degenerates it gets thinner, and light is strongly reflected back (hyperreflectivity). At the same time the pupils become increasingly dilated, in a vain attempt to gather more light, giving the eyes a noticeable "shine." The lens may become cloudy, or opaque, resulting in a cataract. Total blindness typically occurs at around 4-5 years of age, but may be delayed for several years.
Affected cats will adapt to their handicap as long as their environment remains constant, and they are not faced with situations requiring excellent vision.
Diagnosis of PRA is normally made by ophthalmoscopic examination. This requires dilatation of the pupil by application of eyedrops. PRA can be diagnosed by ophthalmoscopic changes: increased reflectivity (shininess) of the fundus (the inside of the back of the eye, overlain by the retina); reduction in the diameter and branching pattern of the retina's blood vessels; and shrinking of the optic nerve head (the nerve connecting the retina to the brain).
As there is now a DNA test for the rdAc mutation (the cause of the most common form of PRA), it is also known that a cat with a double set of this recessive gene will develop PRA, even if an ophthalmoscopic examination shows no signs yet.
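The simple autosomal recessive inheritance described above can be illustrated with a small Punnett-square sketch. This is hypothetical illustration code, not part of any testing protocol; "R" and "r" are stand-ins for the normal and recessive rdAc alleles.

```python
from itertools import product

def offspring_ratios(parent1, parent2):
    """Enumerate Punnett-square combinations for one biallelic locus.
    Genotypes are two-character strings, e.g. 'Rr', where 'r' is the
    recessive allele and 'R' the normal allele."""
    counts = {}
    for a, b in product(parent1, parent2):
        genotype = "".join(sorted(a + b))  # 'Rr' and 'rR' are the same genotype
        counts[genotype] = counts.get(genotype, 0) + 1
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# Two carriers: on average a quarter of offspring are 'rr' (two copies
# of the recessive allele), the genotype that develops the disease.
print(offspring_ratios("Rr", "Rr"))  # {'RR': 0.25, 'Rr': 0.5, 'rr': 0.25}
```

A carrier-to-affected cross (`"Rr"` with `"rr"`) raises the affected fraction to one half, which is why DNA testing of breeding animals matters even when both parents appear healthy.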
Clubs presently participating in this health programme:
|
<urn:uuid:4a7d017e-1a72-406f-ab0f-4702f352b78a>
|
CC-MAIN-2013-20
|
http://pawpeds.com/healthprogrammes/pra_is.html
|
2013-05-22T00:29:30Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.957768
| 484
|
A Program Designed to Encourage Kids to Think Outside the Box
I discovered a really great website that’s full of innovation-generating ideas for kids (but they would be SO much fun for adults, too): Think!
As I read the mind-stretching “assignments,” I was reminded of Learning to Love You More and creative design challenges such as this one at the Tech Museum of Innovation in San Jose, CA. The ideas are generally fun and simple, and encourage experimentation, problem-solving, curiosity, and exploration…all good skills for helping children develop their abilities to generate new ideas and think independently.
My daughter is mostly too young to be my test subject, but having taught children ages 5-18, I can see the potential in these activities and look forward to trying them out in the future. Here are a couple of examples from the site…
Build the largest structure you can and send its measurements in with your pictures. You may use — one box of paperclips, one bag of straws, and one deck of cards.
Paper and Pencil
The only things that you need for this challenge are a stop watch, paper, and pencil. In 60 seconds, write down all of the things that you can do with a brick and a blanket. If your list is less than 10 items long, give yourself another 60 seconds and add some more. Good luck! Share your lists — we’ll make one big list.
|
<urn:uuid:ab4b89c8-0e78-4092-b3b2-c234dee0ffd8>
|
CC-MAIN-2013-20
|
http://tinkerlab.com/think/
|
2013-05-23T04:26:02Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.945524
| 304
|
Searching for Undiscovered Particles Around Black HolesCategory: Science & Technology
Posted: June 22, 2012 11:03AM
Discovering new particles can be very difficult because of their small size and, often, their short lifespans. Neutrinos, for example, interact so weakly that they whiz through matter almost without a trace, while other particles, such as the muons produced by cosmic rays, are so short-lived that without relativistic time dilation we would never detect them at ground level. In our massive accelerators it is even more difficult, as we sometimes find a particle not by detecting it directly but by detecting what it decays into. Now researchers at the Vienna University of Technology are looking to find one kind of particle hanging around black holes using gravitational waves.
Called axions, these proposed particles have extremely low mass and have yet to be detected. The researchers propose using that low mass, which translates to low energy as well, to find them around black holes. Those massive structures have so much energy in and around them that some of it can easily transform into axions. These particles would then orbit the black hole much as electrons orbit a nucleus, but with one major difference: electrons are fermions, while axions should be bosons.
A property of all fermions is that they obey the Pauli exclusion principle, which forbids two fermions from occupying the same quantum state at the same time; this is why electron orbitals have fixed capacities. Bosons, which include photons, do not obey this principle and can coexist in the same state. This means it would be possible for a cloud of axions to form and orbit a black hole. The cloud may not be stable, though, and could eventually collapse. This sudden event would vibrate spacetime in such a way as to produce gravitational waves that we could detect.
According to general relativity, spacetime can be distorted by gravity. You can think of it as a partially stretched-out sheet: if you drag your finger along the sheet, you stretch it further where your finger is, and the ripples that stretching causes are like gravitational waves. Researchers have built massive facilities to detect these distortions, but they are not yet sensitive enough to discern a gravitational wave from ordinary vibration. Hopefully that sensitivity can be achieved by 2016, but until then, the hunt for axions and particles like them will have to wait.
|
<urn:uuid:56d8fa01-a538-4567-833f-7fb882738e3e>
|
CC-MAIN-2013-20
|
http://www.overclockersclub.com/news/31887/
|
2013-05-21T11:10:04Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.960794
| 492
|
The Darwin Genetic Research Station on Gagarin IV had developed Human children who had aggressive immune systems, capable of attacking disease organisms before they entered a Human body. The children's antibodies sought out and attacked Humans infected with the Thelusian flu, a fact not discovered until 2365, when the entire crew of the USS Lantree was killed after exposure to the children.
In mid-2365, the first officer of the Lantree suffered a case of Thelusian flu. Shortly thereafter the Lantree stopped at Darwin Station.
The first officer transmitted the disease to the rest of the Lantree crew, who began aging at a rapid rate, and later died. The station personnel also contracted the disease, as did Doctor Pulaski later, until a cure was discovered, using the transporter to revert a person's genetic pattern to a previous transporter trace or DNA code. (TNG: "Unnatural Selection")
|
<urn:uuid:6f58fc7d-75ac-48f9-bae8-f68e9b37da9f>
|
CC-MAIN-2013-20
|
http://en.memory-alpha.org/wiki/Thelusian_flu?oldid=1448859
|
2013-05-20T02:07:51Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.967223
| 189
|
Carbon-14 dating is the standard method used by scientists to determine the age of certain fossilized remains. As scientists often claim things to be millions or billions of years old (ages that do not conform to the Biblical account of the age of the earth), Christians are often left wondering about the accuracy of the carbon-14 method. The truth is, carbon-14 dating (or radiocarbon dating, as it’s also called) is not a precise dating method in many cases, due to faulty assumptions and other limitations on the method.
Normal carbon (carbon-12) has a weight of twelve atomic mass units (AMU) and is the building block of all organic matter (plants and animals). A small percentage of carbon atoms have an atomic weight of 14 AMU; this is carbon-14, an unstable, radioactive isotope of carbon. As with any radioactive isotope, carbon-14 decays over time. The half-life of carbon-14 is approximately 5,730 years: if you took one pound of 100 percent carbon-14, in 5,730 years you would have only half a pound left.
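The half-life arithmetic can be sketched in a few lines of Python. This is an illustration only; the 5,730-year figure is the one given in the text.

```python
def remaining_fraction(years, half_life=5730.0):
    """Fraction of an original carbon-14 sample left after `years`,
    following ordinary exponential decay: (1/2) ** (t / half_life)."""
    return 0.5 ** (years / half_life)

# One pound of pure carbon-14 after one half-life: half a pound remains.
print(remaining_fraction(5730))      # 0.5
print(remaining_fraction(2 * 5730))  # 0.25
```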
Carbon-14 is created in the upper atmosphere as nitrogen atoms are bombarded by cosmic radiation. For every one trillion carbon-12 atoms, you will find one carbon-14 atom. The carbon-14 that results from this reaction quickly becomes part of carbon dioxide, just as normal carbon-12 does. Plants utilize, or "breathe in," carbon dioxide, then ultimately release oxygen for animals to inhale. The carbon-14 dioxide is utilized by plants in the same way normal carbon dioxide is, and it then ends up in humans and other animals as it moves up the food chain.
There is then a ratio of carbon-14 to carbon-12 in the bodies of plants, humans, and other animals that can fluctuate, but which is fixed at the time of death. After death, the carbon-14 begins to decay at the rate stated above. In 1948, Dr. W.F. Libby introduced the carbon-14 dating method at the University of Chicago. The premise behind the method is to determine the ratio of carbon-14 left in organic matter, and by doing so, estimate how long ago death occurred by running the ratio backwards. The accuracy of this method, however, relies on several faulty assumptions.
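Running the ratio backwards, as described above, amounts to inverting the decay law. A minimal sketch (the helper function and its name are illustrative, not part of the original method description):

```python
import math

HALF_LIFE = 5730.0  # years, the figure quoted in the text

def estimated_age(ratio_remaining):
    """Years since death, given the fraction of the original
    carbon-14/carbon-12 ratio still present in the sample."""
    return -HALF_LIFE * math.log2(ratio_remaining)

print(round(estimated_age(0.5)))   # 5730  (one half-life has elapsed)
print(round(estimated_age(0.25)))  # 11460 (two half-lives)
```

The estimate is only as good as the assumptions fed into it, which is exactly the point the paragraphs below argue.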
For carbon-14 dating to be accurate, one must assume the rate of decay of carbon-14 has remained constant over the years. However, evidence indicates that the opposite is true. Experiments have been performed using the radioactive isotopes of uranium-238 and iron-57, and have shown that rates can and do vary. In fact, changing the environments surrounding the samples can alter decay rates.
The second faulty assumption is that the rate of carbon-14 formation has remained constant over the years. There are a few reasons to believe this assumption is erroneous. The industrial revolution greatly increased the amount of carbon-12 released into the atmosphere through the burning of coal. Also, the atomic bomb testing around 1950 caused a rise in neutrons, which increased carbon-14 concentrations. The great flood which Noah and family survived would have uprooted and/or buried entire forests. This would decrease the release of carbon-12 to the atmosphere through the decay of vegetation.
Third, for carbon-14 dating to be accurate, the concentrations of carbon-14 and carbon-12 must have remained constant in the atmosphere. In addition to the reasons mentioned in the previous paragraph, the flood provides another evidence that this is a faulty assumption. During the flood, subterranean water chambers that were under great pressure would have been breached. This would have resulted in an enormous amount of carbon-12 being released into the oceans and atmosphere. The effect would be not unlike opening a can of soda and having the carbon dioxide fizz out. The water in these subterranean chambers would not have contained carbon-14, as the water was shielded from cosmic radiation. This would have upset the ratio of carbon-14 to carbon-12.
To make carbon-14 dating work, Dr. Libby also assumed that the amount of carbon-14 being produced equaled the amount decaying; that is, he assumed they had reached a balance. The amount of carbon-14 in the atmosphere builds up over time, and at the time of creation was probably at or near zero. Since carbon-14 is radioactive, it begins to decay immediately as it’s formed. If you start with no carbon-14 in the atmosphere, it would take over 50,000 years for the amount being produced to reach equilibrium with the amount decaying. One of the reasons we know that the earth is less than 50,000 years old is the biblical record. Another reason we can know this is that the amount of carbon-14 in the atmosphere is only 78% of what it would be if the earth were old.

Finally, Libby and the evolutionist crowd have assumed that all plant and animal life utilize carbon-14 equally as they do carbon-12. To be grammatically crass, this ain’t necessarily so. Live mollusks off the Hawaiian coast have had their shells dated with the carbon-14 method. These tests showed that the shells died 2,000 years ago! This news came as quite a shock to the mollusks that had been using those shells until just recently.
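The equilibrium argument a few paragraphs above can be put in numbers. Under a constant-production model, the atmospheric carbon-14 inventory approaches its equilibrium value as 1 - 2^(-t / half-life). This sketch is an illustration of that standard buildup formula, not code from the article:

```python
def fraction_of_equilibrium(years, half_life=5730.0):
    """With constant production and exponential decay, the carbon-14
    inventory approaches equilibrium as 1 - 2 ** (-t / half_life)."""
    return 1.0 - 0.5 ** (years / half_life)

# After one half-life the inventory is at exactly half of equilibrium;
# after 50,000 years it is within a fraction of a percent of it.
for t in (5730, 20000, 50000):
    print(t, round(fraction_of_equilibrium(t), 4))
```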
We’ve listed five faulty assumptions here that have caused overestimates of age using the carbon-14 method. The list of non-compliant dates from this method is endless. Most evolutionists today would conclude that carbon-14 dating is, at best, reliable for only the last 3,000 to 3,500 years. There is another reason that carbon-14 dating has yielded questionable results: human bias.
If you’ve ever been part of a medical study, you’re probably familiar with the terms "blind study" and "double-blind study." In a blind study, using carbon-14 dating for example, a person would send a few quality-control samples along with the actual sample to the laboratory. The laboratory analyst should not know which sample is the one of interest. In this way, the analyst cannot introduce bias into the dating of the actual sample. In a double-blind study (using an experimental drug study as an example), some patients will be given the experimental drug, while others will be given a placebo (a harmless sugar pill). Neither the patients nor the doctors will know who gets what. This provides an added layer of protection against bias.
Dates that do not fit a desired theory are often excluded by alleging cross-contamination of the sample. In this manner, an evolutionist can present a sample for analysis and tell the laboratory that he assumes the sample to be somewhere between 50,000 and 100,000 years old. Dates that do not conform to this estimate are thrown out. Repeated testing of the sample may show nine tests that indicate an age of 5,000 to 10,000 years, and one test that shows an age of 65,000 years. The nine results showing ages that do not conform to the presupposed theory are excluded. This is bad science, and it is practiced all the time to fit the evolutionary model.
The Shroud of Turin, claimed to be the burial cloth of Christ, was supposedly dated by a blind test. Actually, the control specimens were so dissimilar that the technicians at the three laboratories making the measurements could easily tell which specimen was from the Shroud. This would be like taking a piece of wood and two marbles and submitting them to the lab with the instructions that "one of these is from an ancient ponderosa pine; guess which." The test would have been blind if the specimens had been reduced to carbon powder before they were given to the testing laboratories. Humans are naturally biased. We tend to see what we want to see, and explain away unwanted data.
Perhaps the best description of the problem in attempting to use the carbon-14 dating method is found in the words of Dr. Robert Lee. In 1981, he wrote an article for the Anthropological Journal of Canada, in which he stated:
"The troubles of the radiocarbon dating method are undeniably deep and serious. Despite 35 years of technological refinement and better understanding, the underlying assumptions have been strongly challenged, and warnings are out that radiocarbon may soon find itself in a crisis situation. Continuing use of the method depends on a fix-it-as-we-go approach, allowing for contamination here, fractionation there, and calibration whenever possible. It should be no surprise, then, that fully half of the dates are rejected. The wonder is, surely, that the remaining half has come to be accepted…. No matter how useful it is, though, the radiocarbon method is still not capable of yielding accurate and reliable results. There are gross discrepancies, the chronology is uneven and relative, and the accepted dates are actually the selected dates."
The accuracy of carbon-14 dating relies on faulty assumptions, and it is subject to human bias. At best, radiocarbon dating is only accurate for the past few thousand years. As we’ve seen, though, even relatively youthful samples are often dated incorrectly. The Biblical record gives us an indication of an earth that is relatively young. The most reliable use of radiocarbon dating supports that position. This method of dating, overall, tends to be as faulty and ill-conceived as the evolutionary model that it was designed to support.
|
<urn:uuid:7d4420ef-da3e-4c61-885f-478078194d0d>
|
CC-MAIN-2013-20
|
http://www.contenderministries.org/evolution/carbon14.php
|
2013-05-19T19:04:37Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.940806
| 2,164
|
Christmas in the Trenches
We first published this reflection by Jim Wallis in 2002. It has since become our Christmas tradition, kind of our own Charlie Brown Christmas special, if you will. With the ongoing conflicts raging during each passing year, it remains tragically relevant.
Take Action on This Issue
Silent Night, by Stanley Weintraub, is the story of Christmas Eve, 1914, on the World War I battlefield in Flanders. As the German, British, and French troops facing each other were settling in for the night, a young German soldier began to sing "Stille Nacht, Heilige Nacht." Others joined in. When they had finished, the British and French responded with other Christmas carols.
Eventually, the men from both sides left their trenches and met in the middle. They shook hands, exchanged gifts, and shared pictures of their families. Informal soccer games began in what had been "no-man's-land." And a joint service was held to bury the dead of both sides.
The generals, of course, were not pleased with these events. Men who have come to know each other's names and seen each other's families are much less likely to want to kill each other. War seems to require a nameless, faceless enemy.
So, following that magical night the men on both sides spent a few days simply firing aimlessly into the sky. Then the war was back in earnest and continued for three more bloody years. Yet the story of that Christmas Eve lingered - a night when the angels really did sing of peace on earth.
Folksinger John McCutcheon wrote a song about that night in Belgium, titled "Christmas in the Trenches," from the viewpoint of a young British soldier. Several poignant verses are:
"The next they sang was "Stille Nacht," "Tis 'Silent Night'," says I.
And in two tongues one song filled up that sky
"There's someone coming towards us!" the front line sentry cried
All sights were fixed on one lone figure coming from their side
His truce flag, like a Christmas star, shone on that plain so bright
As he bravely strode unarmed into the night.
Soon one by one on either side walked into No Man's land
With neither gun nor bayonet we met there hand to hand
We shared some secret brandy and we wished each other well
And in a flare-lit soccer game we gave 'em hell.
We traded chocolates, cigarettes, and photographs from home
These sons and fathers far away from families of their own
Young Sanders played his squeeze box and they had a violin
This curious and unlikely band of men.
Soon daylight stole upon us and France was France once more
With sad farewells we each began to settle back to war
But the question haunted every heart that lived that wondrous night
"Whose family have I fixed within my sights?"
'Twas Christmas in the trenches, where the frost so bitter hung
The frozen fields of France were warmed as songs of peace were sung
For the walls they'd kept between us to exact the work of war
Had been crumbled and were gone for evermore."
My prayer for the new year is for a nation and world where people can come out of their trenches and together sing their hopes for peace. We here at Sojourners will carry on that mission, and we invite you to continue on the journey with us.
Blessings to you and your families.
|
<urn:uuid:cbab08bf-ee46-4f0e-9de6-0471c3070cbd>
|
CC-MAIN-2013-20
|
http://sojo.net/blogs/2008/12/24/christmas-trenches
|
2013-05-24T22:43:22Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.972339
| 728
|
It seems logical that antimicrobial resistance (AMR) originates in the places where antimicrobials are used. It seems logical, that is, until one starts to think about all the places antimicrobials are used and the results of survey studies that have found the presence of bacteria with genetic indicators of resistance to be widespread. In this third part of a four-part series on AMR we look at who is at fault for the increase in resistance.
Many people are pointing the finger at animal agriculture. Certainly, antibiotics are used in animal agriculture, maybe even a greater amount by volume than is used in human medicine. In animal agriculture, antibiotic use is governed by label requirements that indicate the dose, route of administration, and length of treatment. Most commonly, antibiotics are given to control infectious diseases. In some cases, antibiotics may also be given to prevent infectious diseases. For example, in dairy cows, it has been the standard recommendation to infuse broad-spectrum antibiotics into all four teats of all cows at the time they are dried off to help control intramammary infections.
Antibiotics have also been used to enhance performance. For example, in swine production, antimicrobials are fed to change the gut microbe population with the result that growth is increased.
Antibiotic use in animal agriculture should be done in consultation with a veterinarian. Dairy producers are encouraged to have a valid veterinary-client-patient relationship (VCPR) through which all issues of herd health and treatment are discussed. Veterinarian involvement has been shown to reduce the incidence of antibiotic residues in animals that leave the farm.
But animal agriculture is only one user of antimicrobials. Veterinary drugs are limited by their labels; antibiotic use in human medicine has no such label limits. Individual doctors specify the drug, dose, length of treatment, and more. Doctors sometimes prescribe antibiotics when antibiotics are not indicated.
I represented Michigan State University Extension at a recent National Institute of Animal Agriculture (NIAA) conference dedicated to antimicrobial resistance. At the NIAA conference, Dr. Kurt Stevenson of The , reported a study that found that 25-40 percent of hospitalized patients receive antibiotics and 10-70 percent are unnecessary or sub-optimal in dosage. Inappropriate use of antibiotics is a risk factor for the development of antimicrobial resistance.
|
<urn:uuid:45b5d097-b201-4860-af44-b9a451e3fb81>
|
CC-MAIN-2013-20
|
http://www.cattlenetwork.com/cattle-resources/hot-topics/food-safety/Where-does-antimicrobial-resistance-come-from-185448462.html
|
2013-06-20T01:46:18Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.930371
| 478
|
It’s quite a week for biofuel breakthroughs and news.
The new breakthrough is aimed at the dry-grind part of the ethanol production process. Basically, corn kernels are ground up, water and enzymes are added, starches are turned into sugars, and sugars are fermented to produce ethanol. The ethanol is recovered with distillation. At the end of the ethanol distillation process, there is a liquid left over (called stillage): about 6 gallons for every 1 gallon of ethanol. Only about half of the leftover liquid can be recycled, and the process to remove the solids and organic materials in it is expensive. When the fungus Rhizopus microsporus is added to the liquid and allowed to flourish, as much as 80% of the organic matter and solids in the stillage can be removed, and the leftover liquid can then be recycled into the production process.
The fungus has another useful element – it can be eaten. Ethanol plants can harvest the protein- and nutrient-rich fungus and sell it as a livestock food supplement.
Implementing the new technology would cost an ethanol plant that produces 100 million gallons a year about $11 million – kind of a lot for ethanol plants right now, but still do-able. And, researchers say that investment could be paid back in as little as six months, thanks to the energy savings. The process is still waiting for a patent, and investors to help the project prove that the process can work on a commercial scale, so all this is still iffy. But iffy it works, then ethanol plants could have a new way to reduce overall costs and environmental impact on production.
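To put the article's figures together: roughly 6 gallons of leftover liquid (stillage) per gallon of ethanol, of which about half can currently be recycled. A rough sketch of that water balance; the function name is illustrative, and the plant size matches the 100-million-gallon example used in the cost estimate:

```python
def stillage_balance(ethanol_gal, stillage_per_gal=6.0, recyclable_fraction=0.5):
    """Rough liquid balance for a dry-grind ethanol plant.
    `stillage_per_gal` and the default 50% recyclable fraction come
    from the figures in the article; the plant size is hypothetical."""
    stillage = ethanol_gal * stillage_per_gal
    recycled = stillage * recyclable_fraction
    return stillage, recycled, stillage - recycled

# A 100-million-gallon-per-year plant:
stillage, recycled, to_treat = stillage_balance(100e6)
print(f"{stillage:.0f} gal stillage, {recycled:.0f} recycled, {to_treat:.0f} to treat")
```

Raising `recyclable_fraction` (as the fungus treatment aims to allow) shrinks the expensive-to-treat remainder, which is where the claimed energy savings would come from.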
|
<urn:uuid:db1e84d6-e22a-4eb6-9744-9ff8d0e23f07>
|
CC-MAIN-2013-20
|
http://www.ecogeek.org/component/content/article/1949
|
2013-05-19T19:36:35Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698017611/warc/CC-MAIN-20130516095337-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.949646
| 448
|
Although cancers in children are rare compared to cancers in adults, parents should know the signs and symptoms to look for and what to do if your child develops one of the 12 major types of cancer that can occur in childhood.
Leukemias are the most common childhood cancers, followed by brain cancers. Lymphomas are the third most common types of cancers in children. As children mature into their teenage years, they are at increased risk of developing osteosarcomas (bone cancers).
These are various types of childhood cancers:
Leukemias — These are cancers that arise in bone marrow and tissues that produce blood cells. The most common type of leukemia in children is acute lymphoblastic leukemia, or ALL, which arises in cells in the bone marrow. Another common type is acute myelogenous leukemia, or AML, a cancer of myeloid blood cells produced in bone marrow. Common signs of leukemia include bone and joint pain, bleeding, fever, and weakness. See Childhood Leukemia for more information.
Brain and central nervous system tumors — The most common types of brain cancers are called gliomas, which arise from glial cells in the brain and spinal cord. Signs include blurred or double vision, dizziness, and trouble walking.
Lymphomas— Lymphomas are cancers that arise in lymph tissue in the body’s immune system. Two major types are Hodgkin’s lymphoma, which affects lymph nodes in the neck, armpits, and groin; and non-Hodgkin’s lymphoma, which affects lymph nodes deep within the body. Signs include swelling of the glands in the neck, armpits, and groin.
Sarcomas — These cancerous tumors occur in bones and soft tissue, such as muscle. Osteosarcomas are common types of bone cancers that grow in legs and arms, close to joints. Rhabdomyosarcoma is a soft-tissue cancer found in muscles of the head, neck, arms, and legs. Signs include pain and a lump or swelling.
Liver cancers — The most common liver cancer in children is hepatoblastoma, a very rare cancer that most often affects children in the first 18 months of life. Signs include a painless lump, swelling, or pain in the abdomen and unexplained weight loss.
Kidney cancers — Wilms’ tumor can occur in one or both kidneys. A type of sarcoma called clear cell sarcoma can also occur in the kidneys of children. Signs include a lump, swelling, or pain in the abdomen.
Other childhood cancers — Retinoblastoma is a cancer of the retina, a thin membrane at the back of the eye. Germ cell tumors can arise in the testes, ovaries, and at the bottom of the spine, as well as in the chest, abdomen, and middle of the brain. Children with retinoblastoma may have no symptoms or a white pupil that does not reflect light. Signs of germ cell tumors include a lump, swelling, or mass that can be felt or seen.
|
<urn:uuid:c42cd274-59ca-4839-ae00-e5d7e195668e>
|
CC-MAIN-2013-20
|
http://www.patientresource.com/Childhood_Cancer.aspx
|
2013-05-19T02:45:01Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943237
| 639
|
The above image shows a 60-kilometer stretch of the Yangtze River in China. In the image, one can see the Xiling Gorge, which is the easternmost of the three big gorges along the Yangtze. The construction site of the Three Gorges Dam, slated to be the world’s largest, sits on the left-hand side of the image along the big bend in the river. The dam is being built in part to control flooding along the Yangtze.
This image was acquired on July 20, 2000, by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA’s Terra satellite. With its 14 spectral bands from the visible to the thermal infrared wavelength region and high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER will image Earth for the next six years to map and monitor the changing surface of our planet.
Size: 60 x 24 kilometers (36 x 15 miles)
Location: 30.6 degrees north latitude, 111.2 degrees east longitude
Orientation: north at top
Image Data: ASTER bands 1, 2, and 3
Original Data Resolution: 15 meters
Date Acquired: July 20, 2000
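As a quick sanity check, the pixel dimensions implied by the stated scene size and the 15-meter resolution of ASTER's visible bands can be computed directly. This is an illustrative sketch; the variable names are assumptions, not from the source.

```python
# Pixel dimensions implied by the caption's scene size and band resolution.
size_km = (60, 24)   # scene extent in kilometers, from the caption
resolution_m = 15    # spatial resolution of ASTER bands 1-3

# Convert kilometers to meters, then divide by meters per pixel.
pixels = tuple(int(s * 1000 / resolution_m) for s in size_km)
print(pixels)  # (4000, 1600)
```

So the image shown spans roughly 4000 by 1600 pixels at full resolution.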
|
<urn:uuid:612b3a1c-cfca-4c32-8aff-c5b43f179026>
|
CC-MAIN-2013-20
|
http://visibleearth.nasa.gov/view.php?id=60936
|
2013-05-19T09:47:16Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.878608
| 259
|
Lesson 7: The Call of Abraham
We will now learn about Abraham and how he and all the people who believe in him will be blessed by God.
Abraham lived in the city of Ur. In this city the people had very fine homes and there were many fine buildings. But the people did not obey God. They worshipped idols. They thought that the moon was greater than God. But, we know that God made the moon and we thank God for everything, because God gave us everything.
God wanted Abraham to leave this place because God knew that Abraham was a good man. So God spoke to Abraham and told him to leave the city and go to a land that God would show him. So Abraham, his wife Sarah, his nephew Lot, his father Terah, and his brother Nahor, all left the city of Ur. They would now have to live in tents. It would not be very easy for them to travel in those days. But Abraham had faith in God and believed that God would bless him if he obeyed God.
They all went as far as the city of Haran and stayed there for a time. Then Abraham's father, Terah, died. God spoke to Abraham and told him to travel further. God said to Abraham, "I will make of thee a great nation, and I will bless thee, and make thy name great; and thou shalt be a blessing; and I will bless them that bless thee, and curse them that curse thee: and in thee shall all families of the earth be blessed" (Genesis 12:2-3).
Then Abraham, Sarah, and Lot came to the city of Shechem. Abraham was 75 years old then. Now God spoke to Abraham at Shechem and told him that this was the land that God would give to him. Abraham was very thankful to God and worshipped Him. God was also well pleased with Abraham because Abraham showed God that he had faith in God. Abraham is called the friend of God. We are told in the Bible that we can share these wonderful promises that God gave Abraham, if we believe in Abraham and in Jesus (Galatians 3:26-29).
1. In what city did Abraham live?
2. Why was God pleased with Abraham?
3. How can we share the promises God made to Abraham?
Question: What is the Bible?
Answer: It is a book written by men who were told what to write by God.
|
<urn:uuid:b1aa3a1e-4c4e-4978-95f3-eb4ce79c8aad>
|
CC-MAIN-2013-20
|
http://www.bereans.org/lenny/index.php?f=07
|
2013-05-19T19:37:35Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698017611/warc/CC-MAIN-20130516095337-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.991578
| 504
|
Arion subfuscus is native to the Palearctic (Beyer and Saari, 1978) and has been introduced to northeastern North America, where it ranges from eastern Canada to South Carolina, and as far west as Indiana. In Europe its range has expanded to include the northwestern, central, and eastern regions (Pinceel et al. 2005). (Beyer and Saari, 1978; Pinceel, et al., 2005)
Habitat use by the terrestrial gastropod Arion subfuscus varies seasonally. In fall A. subfuscus can be found foraging in moist plant debris and small crevices in the soil. As winter approaches, it usually moves deeper into the soil, returning to the leaf litter in spring. In summer A. subfuscus must find adequate shelter to prevent desiccation (Beyer and Saari 1978). (Beyer and Saari, 1978)
Arion subfuscus is a terrestrial slug. Like most slugs, A. subfuscus has a tough body covered in mucus and lacks a visible shell. Individuals belong to one of four color groups: reddish brown, black, orange, or yellow, and lateral or mantle bands may or may not be present (Beyer and Saari, 1978).
Arion subfuscus is a pulmonate gastropod and thus lacks gills; instead it has a lung developed from the mantle cavity. The lung opens to the outside through a small pore called the pneumostome, which permits air exchange but limits water loss. The mantle sits on top of the body and lung. Internal shells are very reduced and present only as calcareous grains under the rear part of the mantle (Nichols, Cooke and Whiteley, 1971). (Beyer and Saari, 1978; Nichols, et al., 1971; Pearse, et al., 1987)
Arion subfuscus generally has an annual life cycle in which eggs hatch in autumn and adults die in summer. This can vary, however, depending on geographical location and habitat (Beyer and Saari 1978). Adults lay eggs which hatch directly into small juvenile slugs. The slugs grow slowly during the first few months, followed by rapid growth resulting in sexual maturity. During the period of rapid growth, the hermaphrodite gland becomes enlarged, and the ratio of gland weight to body weight peaks, along with body weight, in the spermatozoon stage. During the reproductive stage, body weight remains constant but the hermaphrodite gland decreases in size as the slugs move into the post-reproductive phase (Barker 1991). (Barker, 1991; Beyer and Saari, 1978)
During copulation slugs exchange sperm through their protruding genitalia. Fertilization is internal, and several days after mating the slug will lay hundreds of eggs in the soil. Most adult slugs die soon after breeding and there is no parental care (Barker 1991).
In Arion and other genera, some individuals engage in apophallation: during sperm transfer the male genitalia can become entangled. In such a case, the slugs may bite off each other's penises to free themselves. Following apophallation the slug effectively becomes a female and never regains male functioning. (Barker, 1991)
Species such as A. subfuscus that belong to the genus Arion lay their eggs in clusters in the soil. Although the eggs are left alone and there is no parental care, the eggs are chemically protected by a diterpene called miriamin. This chemical is a caustic agent that prevents the eggs from being eaten or damaged (Schroeder et al. 1999). (Schroeder, et al., 1999)
A. subfuscus is an annual species with a lifespan ranging from 8-12 months. Arion slugs generally hatch sometime between autumn and winter. They typically undergo a period of slow growth during winter followed by a period of rapid growth culminating in reproductive maturity. Slugs usually die post reproduction, but this can vary depending on the conditions and geographical location (Barker 1991). (Barker, 1991)
Arion subfuscus utilizes a muscular “foot” to creep slowly through vegetation and litter. A. subfuscus is most active during dusk or night to avoid desiccation (Beyer and Saari 1978). (Beyer and Saari, 1978)
Home range is relatively confined because of its generalist feeding habits and avoidance of desiccation (Beyer and Saari 1978). (Beyer and Saari, 1978)
In general, terrestrial gastropods have poor ability to perceive objects by vision, and have little or no auditory perception. The primary sense used in perception is smell. The olfactory organs on a slug are located at the tips of the tentacles. The four tentacles can regenerate if they are removed, and the olfactory organ regenerates with them. Slugs also have chemoreceptors, located on the lips, to detect toxins. The eyes of slugs are not primarily for vision; they are thought to be used to perceive light and to set the slug's circadian rhythm (Barker, 2001). (Barker, 2001)
Arion subfuscus uses its radula to scrape and consume its food (Pearse et al., 1987). It appears to have a broad diet which includes fungi and decaying plants as major components, but also yellowed foliage, exposed plant parts, animal feces, insect larvae, dead or injured earthworms, and algae. A. subfuscus has been observed foraging 6 m from the ground on tree trunks (Beyer and Saari, 1978). (Beyer and Saari, 1978; Pearse, et al., 1987)
Arion subfuscus is an example of a generalist species and has been found living in woodlands, arable lands, edge habitats, and around human habitations. They can survive in a variety of soils and microhabitats including soil, plant litter and vegetation. As generalist herbivores, slugs could be a major factor in limiting the geographical ranges of plants (Scheidel and Bruelheide, 1999).
Because terrestrial slugs store environmental chemicals in their bodies, these toxic residues may be passed along the food chain and ultimately affect the biodiversity of an ecosystem (Martin, 2000). (Martin, 2000; Scheidel and Bruelheide, 1999)
A. subfuscus is not listed as endangered, threatened, or vulnerable in any part of its range.
Kelly Amanda (author), Rutgers University, Jessica Mazzara (author), Rutgers University, Christen McCoy (author), Rutgers University, Philip Nicodemo (author), Rutgers University, David V. Howe (editor), Rutgers University, Renee Mulcrone (editor), Special Projects.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of North America as far south as the highlands of central Mexico.
living in the northern part of the Old World. In other words, Europe, Asia, and northern Africa.
living in landscapes dominated by human agriculture.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
uses smells or other chemicals to communicate
active at dawn and dusk
a substantial delay (longer than the minimum time required for sperm to travel to the egg) takes place between copulation and fertilization, used to describe female sperm storage.
an animal that mainly eats decomposed plants and/or animals
particles of organic material from dead and decomposing organisms. Detritus is the result of the activity of decomposers (organisms that decompose organic material).
animals which must use heat acquired from the environment and behavioral adaptations to regulate body temperature
union of egg and spermatozoan
forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
a distribution that more or less circles the Arctic, so occurring in both the Nearctic and Palearctic biogeographic regions.
Found in northern North America and northern Europe or Asia.
fertilization takes place within the female's body
referring to animal species that have been transported to and established populations in regions outside of their natural range, usually through human action.
an animal that mainly eats fungus
the area in which the animal is naturally found, the region in which it is endemic.
active during the night
reproduction in which eggs are released by the female; development of offspring occurs outside the mother's body.
"many forms." A species is polymorphic if its individuals can be divided into two or more easily recognized groups, based on structure, color, or other similar characteristics. The term only applies when the distinct groups can be found in the same area; graded or clinal variation throughout the range of a species (e.g. a north-to-south decrease in size) is not polymorphism. Polymorphic characteristics may be inherited because the differences have a genetic basis, or they may be the result of environmental influences. We do not consider sexual differences (i.e. sexual dimorphism), seasonal changes (e.g. change in fur color), or age-related changes to be polymorphic. Polymorphism in a local population can be an adaptation to prevent density-dependent predation, where predators preferentially prey on the most common morph.
breeding is confined to a particular season
remains in the same area
offspring are all produced in a single group (litter, clutch, etc.), after which the parent usually dies. Semelparous organisms often only live through a single season/year (or other periodic change in conditions) but may live for many seasons. In both cases reproduction occurs as a single investment of energy in offspring, with no future chance for investment in reproduction.
reproduction that includes combining the genetic contribution of two individuals, a male and a female
mature spermatozoa are stored by females following copulation. Male sperm storage also occurs, as sperm are retained in the male epididymes (in mammals) for a period that can, in some cases, extend over several weeks or more, but here we use the term to refer only to sperm storage by females.
living in residential areas on the outskirts of large cities or towns.
uses touch to communicate
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
living in cities and large towns, landscapes dominated by human structures and activity.
uses sight to communicate
Barker, G. 2001. The Biology of Terrestrial Molluscs. New York, NY: CABI Publishing.
Barker, G. 1991. Biology of slugs (Agriolimacidae and Arionidae: Mollusca) in New Zealand hill country pastures. Oecologia, Vol. 85, No. 4: 581-595.
Beyer, W., D. Saari. 1978. Activity and ecological distribution of the slug, Arion subfuscus (Draparnaud) (Stylommotophora, Arionidae). American Midland Naturalist, 100/2: 359-367.
Cook, A. 1992. The function of trail following in the pulmonate slug, Limax pseudoflavus. Animal Behaviour, 43/5: 813-821.
Davison, A., C. Wade, P. Mordan, S. Chiba. 2005. Sex and darts in slugs and snails (Mollusca: Gastropoda: Stylommatophora). Journal of Zoology, 267: 329-338.
Frank, T. 2003. Influence of slug herbivory on the vegetation development in an experimental wildflower strip. Basic and Applied Ecology, 4/2: 139-147.
Leonard, J., J. Pearse, A. Harper. 2002. Comparative reproductive biology of Ariolimax californicus and A. dolichophallus. Invertebrate Reproduction & Development, 41/1-3: 83-93.
Martin, S. 2000. Terrestrial snails and slugs (Mollusca: Gastropoda) of Maine. Northeastern Naturalist, 7/1: 33-88.
McCraken, G., R. Selander. 1980. Self-fertilization and monogenic strains in natural populations of terrestrial slugs. Proceedings of the National Academy of Sciences of the United States, 77/1: 684-688.
Nichols, D., J. Cooke, D. Whiteley. 1971. The Oxford Book of Invertebrates. London: Oxford University Press.
Pearse, V., J. Pearse, M. Buschbaum, R. Buschbaum. 1987. Living Invertebrates. California: Blackwell Scientific Publications and The Boxwood Press.
Pinceel, J., K. Jordaens, N. Van Houtte, G. Bernon, T. Backeljau. 2005. Population genetics and identity of an introduced terrestrial slug: Arion subfuscus s.l. in the north-east USA (Gastropoda, Pulmonata, Arionidae). Genetica, 125: 155-171.
Scheidel, U., H. Bruelheide. 1999. Selective slug grazing on montane meadow plants. Journal of Ecology, 87: 828-838.
Schroeder, F., A. Gonzàlez, T. Eisner, J. Meinwald. 1999. Miriamin, a defensive diterpene from the eggs of a land slug (Arion sp.). Proceedings of the National Academy of Sciences of the United States, 96/24: 13620-13625.
|
<urn:uuid:12080a28-329e-475c-8279-74d09bc7b659>
|
CC-MAIN-2013-20
|
http://animaldiversity.ummz.umich.edu/site/accounts/information/Arion_subfuscus.html
|
2013-05-23T19:07:47Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.905837
| 2,990
|
Because of the increasing incidences of behavior problems in companion birds, many are losing their homes. Problems that may develop include biting, screaming, feather picking, and phobias. The biting bird can terrorize the entire family and anyone else that comes within reach. The screaming bird can get the owner evicted. The feather-plucker may continue on to self-mutilation. The phobic bird suddenly acts like familiar people are deadly predators.
Behavior problems develop when the bird's basic needs are not being met. These needs include food, water, shelter, sleep, and proper social interaction. Once these needs are met in an appropriate manner, the behavior problems will be easier to resolve or at least control.
Cage Size: Overly small cages are a common problem with companion birds and cause stress, which often leads to behavior problems. Larger cages do cost money, sometimes hundreds of dollars. The bird may outgrow his original cage bought three years ago and need an upgrade. Be prepared for that need. An absolute minimum cage size for the larger birds is 1-1/2 times the bird's wingspan in width, depth, and height. This gives the bird room to stretch and move without damaging the wing or tail feathers on the cage bars.
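The 1.5x-wingspan rule of thumb above is easy to express as a calculation. The sketch below is illustrative only; the function name and the example wingspan are assumptions, not from the source.

```python
def min_cage_size(wingspan):
    """Minimum cage width, depth, and height: 1.5x the bird's wingspan.

    Works in any unit (inches, centimeters) as long as it is consistent.
    """
    side = 1.5 * wingspan
    return (side, side, side)

# Example: a bird with a 40-inch wingspan needs at least a 60-inch cube.
print(min_cage_size(40))  # (60.0, 60.0, 60.0)
```

Remember this is an absolute minimum for larger birds; more room is always better.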
Cage Location: Cage location can be critical. Some birds are very gregarious and need to be in the middle of the family activity as much of the day as possible. Some nervous birds need to be in a quieter room, but one that still is occupied by the family for social interaction. Placing the cage so one side is against a wall or providing a hiding place in the cage may relieve stress as the bird is able to relax and stop looking for predators. It is not advisable to place the cage in front of a window as a permanent location as the bird cannot relax his search for enemies.
Cage Height: The optimal cage height is slightly below eye level. Do not place the bird's cage on the floor as this may cause a nervous bird's anxiety to increase. The bird essentially has no way to avoid the predators that he is always on the lookout for if he is on the ground.
Height and Shouldering: Parrots should not be allowed to sit on shoulders, especially as adolescents. A bird sitting on a human's shoulder is within easy reach of the owner's eyes, ears, nose, and lips and can cause severe damage. The bird may cause damage intentionally (biting) or unintentionally (grabbing onto something to keep from falling). Either way, the damage to the owner and to the owner-pet bond has occurred.
Boredom: This is a major factor in behavior problems because the bird has nothing to keep him occupied so he finds something to do on his own. If the family members are gone to work or school for 8-10 hours a day, they must provide outlets for the bird's energy. In the wild, a bird divides his time between interacting with his mate and flock, finding and eating food, and grooming. Toys should be provided and rotated on a regular basis (every couple days or weekly) to provide new entertainment for the bird. Food can be hidden in toys, hung in the cage (make sure it is safe), or provided in large pieces the bird needs to break up before eating. Parrots are intelligent birds and you need to provide an outlet for their curiosity and energy.
Sleep Deprivation: Many companion birds originate in the tropics. They would normally see 10-12 hours of darkness year-round. Adult parrots should receive 10-12 hours of sleep each night. This is best accomplished by moving the parrot from the family room to a quiet darkened room for sleeping. In the morning, the bird is then moved back to the family room where he can interact with his family. A small 'sleep cage' can be set up and left in the bird's 'bedroom' and the regular cage left in the family room.
All pets have normal behaviors that can become problems for the humans in the pet's life. Normal dog behavior is to bark and dig. Cats normally scratch and climb to high places. Parrots normally chew on items. These become problems when they occur at the wrong time or place for the human family. The humans need to teach the proper behavior and set rules from the first time the pet comes into the household. For those owners who already have a bird with a behavior problem, help from an avian veterinarian or animal behaviorist may be necessary to correct the offending behavior. Typically, the human family needs to make changes in the way they handle the bird. Small things, such as the owner deciding when the bird will get a treat or leave/return to his cage will raise the ranking of the human in the bird's eyes.
With the proper education, the bird owner will be able to have a healthy, happy, well-adjusted companion bird.
|
<urn:uuid:ced64165-890f-43fd-89ad-22f248602086>
|
CC-MAIN-2013-20
|
http://www.peteducation.com/article_print.cfm?c=15+1795&aid=1514
|
2013-05-20T02:06:14Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.962152
| 997
|
Science Fair Project Encyclopedia
Vitis x bourquina
Vitis x champinii
Vitis x doaniana
Vitis x labruscana
Vitis x novae-angliae
A grape is the fruit of a vine in the family Vitaceae. It is commonly used for making grape juice, jelly, wine and raisins, or can be eaten raw. Grapes constitute approximately 50% of all fruit grown in the world.
Many species of grape exist including:
- Vitis vinifera, the European winemaking grapes
- Vitis labrusca, the North American table and grape juice grapes, sometimes used for wine
- Vitis riparia, a wild grape of North America, sometimes used for winemaking
- Vitis rotundifolia, the muscadines, used for jelly and sometimes wine
- Vitis aestivalis, whose variety Norton is used for winemaking
- Vitis lincecumii (also called Vitis aestivalis var. lincecumii), Vitis berlandieri (also called Vitis cinerea var. helleri), Vitis cinerea, and Vitis rupestris, used for making hybrid wine grapes and for pest-resistant rootstocks.
Hybrids also exist, primarily crosses of V. vinifera with one or more varieties of V. labrusca, V. riparia, or V. aestivalis. Hybrids tend to be less susceptible to frost and disease (notably phylloxera), but their wine has little of the characteristic "foxy" odor of labrusca.
Currently, a large fraction of the grape crop goes to producing grape juice to be used as a sweetener for fruits canned 'with no added sugar' and '100% natural'.
Wild grapes are often considered a nuisance weed as they cover other plants and form thick entangling vines.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
|
<urn:uuid:3504b5c9-d68b-40be-8c68-0741b6f1350a>
|
CC-MAIN-2013-20
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Grapes
|
2013-05-24T01:36:51Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.752675
| 475
|
Founded in 1836, Emory and Henry is a small, private liberal arts college affiliated with the United Methodist Church and named in honor of John Emory, a Methodist bishop, and Patrick Henry, a hero of the Revolutionary War (1775–1783) and Virginia's first governor. Early on, students worked the college's farm as a way to defray the costs of tuition, and the school hired local slave labor for cooking, cleaning, and farm work. After suffering through the financial crises of the 1830s and 1840s, Emory and Henry was debt free by the 1850s. Its most famous student was J. E. B. Stuart, a native of Patrick County, who attended the school from 1848 until 1850 before enrolling at the U.S. Military Academy at West Point.
During the presidential campaign of 1860, many Emory and Henry students campaigned on behalf of the Constitutional Union Party, a political refuge for cautious border Whigs and nativists who were intent on preserving slavery but alarmed by the belligerence of fire-eating Democrats and Northern Republicans. After the Republican candidate, Abraham Lincoln, was elected and Virginia seceded from the Union, most students set aside their political differences and withdrew from classes in order to join the war effort. The college's president, Ephraim Emerson Wiley, served as a chaplain, ministering to wounded soldiers who were relocated to the college grounds.
Emory and Henry's location in the foothills of the Appalachian Mountains kept it isolated from the military campaigns that raged across the Shenandoah Valley and the Piedmont. Still, it was threatened by periodic Union raids targeting the nearby Wytheville lead mines and the salt production facility at Saltville, the latter of which was crucial in provisioning the Confederate army. One such raid in October 1864 resulted in the Battle of Saltville, where outnumbered Confederate cavalry managed to drive back a determined assault led by Union general Stephen G. Burbridge.
Union prisoners of war, many of them wounded and belonging to the 5th U.S. Colored Cavalry, were transferred to the Emory hospital, where, according to a Union surgeon left behind to care for them, Confederate troops killed at least five to seven of the black troopers along with a white lieutenant, Elza C. Smith. Some historians, including Thomas Mays, have argued that as many as forty-six were killed that day, both on the battlefield and in the hospital. But scholar William Marvel has argued that a smaller number, anywhere from five to as many as two dozen, is more likely.
Emory and Henry College reopened in August 1865 with a few antebellum students returning to complete their degrees.
First published: January 28, 2009 | Last modified: April 5, 2011
|
<urn:uuid:aff451b5-c1a3-40bc-9baa-fda6863cb599>
|
CC-MAIN-2013-20
|
http://www.encyclopediavirginia.org/Emory_and_Henry_College_During_the_Civil_War
|
2013-05-22T00:21:42Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.975228
| 570
|
Profiles and Perspectives of Our American Identity
The districts in this northern New Jersey consortium vary in size and demography, and many face challenges that have prevented them from offering rigorous history professional development. Profiles and Perspectives of Our American Identity will engage teachers in annual activities that include two full-day history seminars, one field research experience, a variety of supplemental activities (e.g., lesson study, research) and a 5-day summer academy. Every year, participants will have at least 112 hours of professional development in content, methods and research. A core group of 40 teachers, with a minimum of two from each district, will participate for all five years and will be selected mainly from schools in need of improvement. An additional 15 to 20 teachers will participate each year based on need and availability. The project will invite teachers to compare and contrast local, regional and national events in American history through profiles of well-known and ordinary individuals and their perspectives on ideas, decisions, events and issues. For example, the profiles (based largely on personal papers and primary sources) from Colonial America will include Thomas Jefferson, George Washington, Native Americans, indentured servants and farmers. Their perspectives on such issues as government and law, steps toward unity, social and economic conditions, family life and religion will be considered. Project staff and historians will help participants learn to employ inductive instruction, address diverse learning styles, use "History Habits of Mind" and essential questions, and conduct historical research. Teachers will develop classroom libraries of teaching materials and, through lesson study, will collaborate to design, deliver, observe and refine lesson plans.
|
<urn:uuid:e0dd0d63-d335-4791-9a65-c45e3fdfae40>
|
CC-MAIN-2013-20
|
http://www.teachinghistory.org/tah-grants/tah-project-database/24916
|
2013-05-25T12:35:45Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939753
| 326
|
July 27, 2007

The capacity to resist peer pressure in early adolescence may depend on the strength of connections between certain areas of the brain, according to a study carried out by University of Nottingham researchers.
New findings suggest that enhanced connections across brain regions involved in decision-making may underlie an individual's ability to resist the influence of peers.
The study, published in the July 25 issue of The Journal of Neuroscience, suggests that brain regions which regulate different aspects of behaviour are more interconnected in children with high resistance to peer influence.
Professor Tomas Paus and colleagues at The University of Nottingham used functional neuroimaging to scan adolescents while they watched video clips of neutral or angry hand and face movements. Previous research has shown that anger is the most easily recognised emotion.
Professor Paus and his team observed 35 ten-year-olds with high and low resistance to peer influence, measured by a questionnaire. The researchers then showed the children video clips of angry hand movements and angry faces and measured their brain activity.
They found that the brains of all children showed activity in regions important for planning and extracting information about social cues from movement, but the connectivity within these regions was stronger in children who were marked as less vulnerable to peer influence.
Those children were also found to have more activity in the prefrontal cortex, an area important for decision-making and inhibition of unwanted behaviour.
Professor Paus said: “This is important if we are to understand how the adolescent brain attains the right balance between acknowledging the influences of others and maintaining one's independence.”
Future research will involve follow-ups with the same children to determine whether their resistance to peer influence is related to the brain changes observed in this study.
The work was supported by grants from the Santa Fe Institute Consortium and the Canadian Institutes of Health Research.
|
<urn:uuid:0ad0afea-8e8b-40a9-9b3f-12d6bac93063>
|
CC-MAIN-2013-20
|
http://www.sciencedaily.com/releases/2007/07/070725093605.htm
|
2013-05-19T02:31:18Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.94866
| 414
|
London Railways 1897
Date of publication:
Publisher: Old House Books
The railway network that helped to make London the greatest city in the world.
A coloured map of London and its suburbs showing branch lines, underground lines, tram lines and the mainline and suburban stations that served the needs of the first commuters in the year of Queen Victoria's Diamond Jubilee. Each map has a booklet describing the development of London’s railways before 1897 as well as current and future plans.
Of all the great innovations of the nineteenth century it was the railways that contributed the most. In London new railway lines ran to the docks where ships were discharging previously unseen raw materials from an Empire that straddled the globe. By rail these goods could now be dispersed all over the country to factories and towns with rapidly increasing populations. London, the hub of the Empire, had become the world's greatest commercial centre and, for the first time, people were able to live in the healthier suburbs and travel into the city to work. The Victorians were passionate railway builders both underground and overground and all the outlying towns, long since devoured by the metropolis, were connected to the great termini by remarkable engineering feats that involved tunnels, cuttings, embankments, bridges and viaducts all of which were constructed by thousands of manual labourers.
This map shows what they achieved: when they had finished, London had the finest railway network in the world, at a time when you could set your clock by a passing steam train.
|
<urn:uuid:615a778d-8865-430b-84ce-172d4822bce7>
|
CC-MAIN-2013-20
|
http://nationalarchives.gov.uk/bookshop/details.aspx?titleId=1222&tab=0
|
2013-05-24T02:04:50Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.971682
| 311
|
Q.1 Which two functions are provided by the upper layers of the OSI model? (Choose two.)
placing electrical signals on the medium for transmission
initiating the network communication process
encrypting and compressing data for transmission
segmenting and identifying data for reassembly at the destination
choosing the appropriate path for the data to take through the network
Q.2 Which is a function of the transport layer of the OSI model?
routes data between networks
converts data to bits for transmission
delivers data reliably across the network using TCP
formats and encodes data for transmission
transmits data to the next directly connected device
Q.3 Which common Layer 1 problem can cause a user to lose connectivity?
incorrect subnet mask
incorrect default gateway
loose network cable
NIC improperly installed
Q.4 Which three command line utilities are most commonly used to troubleshoot issues at Layer 3? (Choose three.)
a packet sniffer
Q.5 Which address is used by the router to direct a packet between networks?
source MAC address
destination MAC address
source IP address
destination IP address
Q.6 What is the correct encapsulation order when data is passed from Layer 1 up to Layer 4 of the OSI model?
bits, frames, packets, segments
frames, bits, packets, segments
packets, frames, segments, bits
segments, packets, frames, bits
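A short, illustrative Python helper makes the correct answer concrete: moving up from Layer 1 to Layer 4, the PDU names are bits, frames, packets, segments. The mapping table and function below are our own illustration, not part of any exam toolkit.

```python
# Illustrative mapping of OSI layers 1-4 to their PDU (protocol data unit) names.
PDU_BY_LAYER = {1: "bits", 2: "frames", 3: "packets", 4: "segments"}

def pdu_order(start, end):
    """List the PDU names seen as data moves from layer `start` to layer `end`."""
    step = 1 if end >= start else -1
    return [PDU_BY_LAYER[layer] for layer in range(start, end + step, step)]

# Passing data from Layer 1 up to Layer 4:
print(pdu_order(1, 4))  # ['bits', 'frames', 'packets', 'segments']
```

Running the helper downward, `pdu_order(4, 1)`, gives the de-encapsulation view instead.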
Q.7 What are two goals of the ISP help desk? (Choose two.)
conserving support resources
sales of network services
Q.8 In what three ways do Level 1 and Level 2 help desk technicians attempt to solve a customer's problems? (Choose three.)
talking to the customer on the telephone
upgrading hardware and software
using various web tools
making an onsite visit
installing new equipment
with remote desktop sharing applications
Q.9 A customer calls the help desk about setting up a new PC and cable modem and being unable to access the Internet. What three questions would the technician ask if the bottom-up troubleshooting approach is used? (Choose three.)
Is the NIC link light blinking?
What is the IP address and subnet mask?
Can the default gateway be successfully pinged?
Is the network cable properly attached to the modem?
Is the Category 5 cable properly connected to the network slot on the PC?
Can you access your e-mail account?
Q.10 A customer calls to report a problem accessing an e-commerce web site. The help desk technician begins troubleshooting using a top-down approach. Which question would the technician ask the customer first?
Can you access other web sites?
Is there a firewall installed on your computer?
What is your IP address?
Is the link light lit on your NIC card?
Q.11 Which statement describes the process of escalating a help desk trouble ticket?
The help desk technician resolves the customer problem over the phone and closes the trouble ticket.
Remote desktop utilities enable the help desk technician to fix a configuration error and close the trouble ticket.
After trying unsuccessfully to fix a problem, the help desk technician sends the trouble ticket to the onsite support staff.
When the problem is solved, all information is recorded on the trouble ticket for future reference.
Q.12 What are two functions of the physical layer of the OSI model? (Choose two.)
adding the hardware address
converting data to bits
encapsulating data into frames
Q.13 A customer calls the ISP help desk after setting up a new PC with a cable modem but being unable to access the Internet. After the help desk technician has verified Layer 1 and Layer 2, what are three questions the help desk technician should ask the customer? (Choose three.)
What is your subnet mask?
What is your IP address?
Is the NIC link light blinking?
Can you ping the default gateway?
Is the network cable properly attached to the cable modem?
Is the network cable correctly connected to the network port on the PC?
Q.14 Which scenario represents a problem at Layer 4 of the OSI model?
An incorrect IP address on the default gateway.
A bad subnet mask in the host IP configuration.
A firewall filtering traffic addressed to TCP port 25 on an email server.
An incorrect DNS server address being given out by DHCP.
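The Layer 4 scenario here (a firewall filtering TCP port 25) is exactly the kind of fault a transport-layer reachability probe exposes. The sketch below checks whether a TCP handshake completes, using a throwaway local listener so it needs no outside network; the helper name is ours, not a standard tool.

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """A Layer 4 reachability check: can we complete a TCP handshake to host:port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener, so the sketch is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # the OS picks a free port
server.listen(1)
port = server.getsockname()[1]

print(tcp_port_open("127.0.0.1", port))  # True: the listener accepts the handshake
server.close()
print(tcp_port_open("127.0.0.1", port))  # False: the connection is now refused
```

A filtered port behaves like the second case (or times out), even though Layers 1-3 are healthy.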
Q.15 What are two basic procedures of incident management? (Choose two.)
opening a trouble ticket
using diagnostic tools to identify the problem
surveying network conditions for further analysis
configuring new equipment and software upgrades
adhering to a problem-solving strategy
e-mailing a problem resolution to the customer
Q.16 Which level of support is supplied by an ISP when providing managed services?
Q.17 What is the first step that is used by a help desk technician in a systematic approach to helping a customer solve a problem?
identify and prioritize alternative solutions
isolate the cause of the problem
define the problem
select an evaluation process
Q.18 A network technician has isolated a problem at the transport layer of the OSI model. Which question would provide further information about the problem?
Do you have a firewall that is configured on your PC?
Do you have a link light on your network card?
Is your PC configured to obtain addressing information using DHCP?
What default gateway address is configured in your TCP/IP settings?
Can you ping http://www.cisco.com?
Q.19 An ISP help desk technician receives a call from a customer who reports that no one at their business can reach any websites, or get their e-mail. After testing the communication line and finding everything fine, the technician instructs the customer to run nslookup from the command prompt. What does the technician suspect is causing the customer's problem?
improper IP address configuration on the host
hardware failure of the ISR used to connect the customer to the ISP
bad cables or connections at the customer site
failure of DNS to resolve names to IP addresses
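The check behind `nslookup` — can a hostname be resolved to an IP address — can be approximated in Python with the standard `socket` module. This is a minimal sketch, and `can_resolve` is our own helper name:

```python
import socket

def can_resolve(name):
    """Rough equivalent of what nslookup tests: does name resolution succeed?"""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))             # True on any normally configured host
print(can_resolve("no-such-host.invalid"))  # False: .invalid is reserved and never resolves
```

If the communication line is fine but this kind of check fails for every name, DNS resolution is the prime suspect, as in Q.19.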
Q.20 Which layers of the OSI model are commonly referred to as the upper layers?
application, presentation, session
application, session, network
presentation, transport, network
presentation, network, data link
session, transport, network
|
<urn:uuid:7f299001-b2fc-4d73-85ca-273770803cc1>
|
CC-MAIN-2013-20
|
http://ccnaexamsanswers.blogspot.com/2011/01/dsmbisp-chapter-2-exam-answers.html
|
2013-06-19T19:35:56Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709101476/warc/CC-MAIN-20130516125821-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.873225
| 1,330
|
Because of the confluence of these major biotic zones, the Sevilleta NWR presents an ideal setting to investigate how climate variability and climate change act together to affect ecosystem dynamics at biotic transition zones. Moreover, the rapid growth and expansion of the City of Albuquerque and its suburbs to the north increasingly will have an impact on ecosystem processes at the Sevilleta, and these urban forces will interact with climatic variation to catalyze change in this aridland region.
Key Abiotic Drivers
A pervasive limiting resource in these aridland ecosystems is water. In central New Mexico, precipitation inputs vary seasonally, annually and on decadal time scales. In the southwestern US, the amount and timing of seasonal and annual precipitation are influenced by two major climate cycles, the El Niño Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO). ENSO regulates variability in winter precipitation with high precipitation occurring during El Niño periods, and low precipitation during La Niña periods. ENSO events typically occur every 3-4 years and usually last only through one winter season. More recently it has been suggested that a longer-term climatic event, the Pacific Decadal Oscillation, may have profound effects on regional climate in the southwestern United States (Gutzler et al. 2002). The PDO, which oscillates on approximately 50-year cycles, modulates ENSO events and it may be the cause of periodic, extended, severe droughts in the region (Milne et al. in press).
Precipitation patterns for central New Mexico from 1900-1999. Note the regional drought during the 1950s was characterized by low precipitation in both summer and winter seasons.
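The modulation described above — a 3-4 year ENSO signal whose strength waxes and wanes with the roughly 50-year PDO phase — can be caricatured with two sinusoids. The exact periods and the 50% modulation depth below are illustrative assumptions, not fitted values:

```python
import math

ENSO_PERIOD = 3.5   # years; illustrative, within the stated 3-4 year range
PDO_PERIOD = 50.0   # years; illustrative ~50-year cycle

def winter_precip_anomaly(year, modulation=0.5):
    """Toy anomaly: an ENSO sinusoid whose amplitude is boosted or damped by PDO phase."""
    enso = math.sin(2 * math.pi * year / ENSO_PERIOD)
    pdo = math.sin(2 * math.pi * year / PDO_PERIOD)
    return enso * (1.0 + modulation * pdo)

# El Nino peaks during the positive PDO half-cycle (years 0-25) are amplified,
# while peaks during the negative half-cycle (years 25-50) are damped.
wet_max = max(winter_precip_anomaly(y / 10.0) for y in range(0, 250))
dry_max = max(winter_precip_anomaly(y / 10.0) for y in range(250, 500))
print(wet_max > dry_max)  # True: same ENSO cycle, different PDO background
```

The damped half-cycle is one cartoon of how a negative PDO phase could sustain the extended regional droughts noted in the text.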
Available soil moisture is not only a function of precipitation inputs but also of temperature. Mean annual temperature from 1989-2002 at the Deep Well meteorological station on McKenzie Flats, a grassland site on the Sevilleta, is 13.2°C, with a low of 1.6°C in January and a high of 25.1°C in July. In addition, this site receives approximately 250 mm of precipitation annually, about 60% of which falls during the summer monsoon season from June through September and the remainder primarily from winter frontal systems. However, the relative contribution of summer monsoon and winter rains varies from year to year, creating a highly variable seasonal pattern of water inputs. A climate diagram based on data from Socorro, NM, south of the Sevilleta shows that on average, the lower elevations in this region are in moisture deficit most of the year, with potential surpluses only during August, December, and January.
In the piñon-juniper woodlands in the upper elevations of the Los Piños Mountains east of the Deep Well site, annual precipitation is about 365 mm and average annual temperature is 12.7°C with a low of 2.5°C in January and a high of 23.0°C in July. At this upper elevation site, there is a longer annual period of water surplus, on average, than in the lower elevation grasslands at the base of the mountains.
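The climate-diagram logic above (deficit most of the year, surplus in only a few months) can be sketched with the Walter-Lieth convention, in which a month is in deficit when precipitation in mm falls below twice the mean temperature in °C. The monthly values below are invented for illustration, loosely shaped to the annual figures quoted (~250 mm, ~60% of it in the June-September monsoon); they are not measured Sevilleta data.

```python
# Hypothetical monthly climate for a low-elevation site, invented for illustration
# and loosely shaped to the annual figures in the text; NOT measured Sevilleta data.
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
TEMP_C = [1.6, 4.0, 8.0, 13.0, 18.0, 23.0, 25.1, 24.0, 20.0, 13.0, 6.0, 2.0]
PRECIP_MM = [15, 7, 8, 8, 10, 25, 45, 55, 25, 18, 10, 23]

def surplus_months(temp_c, precip_mm):
    """Walter-Lieth rule of thumb: a month has a moisture surplus when P (mm) >= 2 * T (C)."""
    return [m for m, t, p in zip(MONTHS, temp_c, precip_mm) if p >= 2 * t]

print(surplus_months(TEMP_C, PRECIP_MM))  # ['Jan', 'Aug', 'Dec'] -- deficit the rest of the year
```

Even with a monsoon peak in July, the high July temperature keeps that month in deficit under this rule, reproducing the August/December/January surplus pattern described in the text.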
Key Biotic Transitions
Over the past two decades, one of the key organizing themes in ecology has been patch dynamics (Pickett and White 1985). Patch dynamics refers to a change in ecological properties within and among patches through time (Pickett and Thompson 1978, White and Pickett 1985). A patch is a discrete, bounded area of any spatial scale that differs from its surroundings in its biotic and abiotic structure and composition (Pickett and Cadenasso 1995). The historical emphasis on patch dynamics has led implicitly to the impression that patch change is driven by relatively consistent internal dynamical phenomena (Pickett et al. 2001). Yet, in some systems, biotic transitions at patch boundaries may be the most dynamic aspects of patches, and processes occurring at boundaries may drive overall patch change (Wiens et al. 1985, Cadenasso et al. 1997, Davies et al. 2001, Peters et al., submitted). In other cases, patches and their boundaries may be highly stable, changing slowly or not at all over ecological time frames (Weltzin and McPherson 1999). To address this variability and interaction, we developed a new conceptual framework for interactions at patch boundaries from which we derive testable hypotheses for studies of patch dynamics along biotic transitions, a term that we use to include boundaries at all scales (Peters et al. submitted). We use this framework to structure our LTER research along the three transition zones described below.
Conceptual model of biotic transitions at multiple spatial scales. A biotic transition consists of two states (A, B) with a boundary between them. The boundary consists of patches from both states that vary in size, type, spatial configuration, and degree of connectivity. The model is applicable across a range of spatial scales, such as individual plants where the boundary consists of root or leaf patches from each plant, assemblages of plants where the boundary consists of patches of individual plants of one species interacting with plants of a different species from an adjacent patch, associations or groups of plant assemblages where each assemblage dominated by one species is a patch, and the boundary consists of these interacting groups of plants, and landscapes consisting of a mosaic of boundaries and states at all smaller scales.
Grassland and Shrubland Transitions
Grassland to shrubland transitions are occurring throughout the southwestern United States in response to a variety of biotic and abiotic drivers (Archer 1989, Schlesinger et al. 1990). At the SEV LTER, the biotic transition that has been, and will remain, the major focus of our research is from blue grama (Bouteloua gracilis)-dominated grassland to Chihuahuan desert vegetation including black grama (B. eriopoda) grassland and creosote (Larrea tridentata) shrubland on McKenzie Flats. Essentially, these three species form nearly monodominant patches that represent the end-member states of a complex array of patch types and animal species comprising this relatively abrupt transition zone. In addition to pervasive water deficits noted above, the grassland to shrubland transition occurs on extremely nutrient poor soils. In a cross-site study of 13 ecosystems ranging from alpine tundra to tropical forest (Zak et al. 1994), the grassland site at the Sevilleta was found to have the lowest total soil N, lowest N mineralization rate and lowest soil carbon of all 13 sites in the study.
Originally, we hypothesized that the dynamics of the grassland to shrubland transition were driven by ENSO events. However, after three such events since the start of the LTER in 1989, there is little evidence that directional change is occurring across this biotic transition zone in response to El Niño or La Niña cycles (Li 2000, 2002). Another mechanism contributing to stasis along this transition zone is population declines in burrow-forming mammals, such as prairie dogs and kangaroo rats. More recently, fire has been incorporated as a management tool in the Sevilleta National Wildlife Refuge, and we will be extending our research activities to assess the interactions of fire, animals and longer-term climate variation on this grassland to shrubland transition.
Piñon-Juniper Woodland Transitions
The savanna to woodland transition occurs along an elevation gradient on the north end of the Los Pinos Mountains. This transition zone was part of the original Sevilleta LTER program in 1989, but activities there were reduced over time. This transition begins with savanna characterized by scattered individuals of Juniperus monosperma and a dense understory of perennial grasses (primarily Sporobolis spp.) to woodland of J. monosperma and Pinus edulis with a sparse herbaceous understory. In the piñon-juniper woodland, total soil nitrogen and carbon are comparable to levels in other sites across North America, but like in the grassland on McKenzie Flats, nitrogen mineralization rates in PJ soils are extremely low (Zak et al. 1994).
Originally, we hypothesized that the elevational boundary between savanna and woodland was a function of occasional periods of extreme drought. Evidence supporting this hypothesis can be found in the dead juniper carcasses located at the foothills of the Los Pinos Mountains on the east side of the Sevilleta NWR. It is now believed that these long, severe droughts are the product of the PDO, which is hypothesized to have a return interval of 52±11 years (Milne et al., in press). Currently, the southwestern US is experiencing a severe and prolonged drought, which has many of the characteristics of that experienced in the early 1950s, and a massive die-off of pines and junipers is occurring throughout the region. Bark beetles and fungal pathogens help to increase tree mortality as individuals are weakened by drought. Circumstantial evidence from fertilization experiments at the Sevilleta suggests that piñon mortality rates may be higher in areas of greater resource abundance, thus we hypothesize that mortality of patches of pines may be exacerbated by regional patterns of drought coupled with gradients in atmospheric nitrogen deposition. Overall, this is a significant regional transition that may be part of a long-term cycle from woodland to grassland and back as climate fluctuates on decadal time scales.
Riparian Zone Transitions Along the Middle Rio Grande Basin
The Rio Grande, which bisects the State of New Mexico, contains the second largest drainage basin in the southwestern US. Within New Mexico, >60% of the state's population lives along the river and that population is rapidly growing. The Rio Grande provides a considerable amount of surface water for agricultural and other uses and demands on that water are increasing at unprecedented rates (Bartolino and Cole 2002). Ecologically, a dramatic biotic transition within the riparian zone ('bosque') is occurring in the Rio Grande Basin as the native forest of cottonwoods is rapidly being replaced by two widely dispersed non-native species, Russian olive and salt cedar. This transition is creating significant ecological challenges related to state and regional water management and policy (Dahm et al. 2002). Although the original Sevilleta LTER research program did not extend into the Middle Rio Grande riparian zone, we feel that doing so represents a key opportunity to regionalize the Sevilleta LTER and to address important ecological and management issues in the State of New Mexico. In this case, the middle Rio Grande Basin extends from Otowi Bridge near Santa Fe south through Albuquerque and the Sevilleta to Elephant Butte Reservoir about 150 kilometers south of Albuquerque. Climatically, this region varies from the north where moisture deficits are more severe to the south where there is an increasing period of greater summertime water availability.
Historically, changes in these riparian ecosystems were driven by flood frequency and intensity. Now that the river is highly regulated, floods are rare, the hydrologic regime has been drastically altered, and human-caused fires are common. Since 1990, over 50% of the bosque in the Middle Rio Grande basin has burned. We hypothesize that these changes in disturbance regime will enhance rate of replacement of native species, increase evapotranspiration, and reduce nitrogen retention in the riparian zone.
Linking Causes, Response Functions & Consequences of Biotic Transition
Clearly, each of these biotic transition zones could be the focus of intensive study, but independently, they do not constitute a broadly based LTER research program. Our goal is to balance our efforts between understanding each study system in detail, and addressing questions of broad significance and generality. We need to do the former to understand the mechanisms that operate within each transition zone. Moreover, as 'research tenants' at the Sevilleta, on land managed by the US Fish and Wildlife Service, we have an obligation to communicate the information we gain on the Sevilleta through LTER research to the Refuge staff. At the same time, we wish to address fundamental ecological issues of broad generality. As a consequence, we believe that our three apparently disparate transition zones are, in fact, linked conceptually, and it is this conceptual linkage that allows us to generalize our research beyond the important goal of simply knowing our sites and systems well. Thus, we have linked our transitions zone research through a series of parallel hypotheses. Over the long term we will address and refine these hypotheses to seek commonalities and differences among systems that we can use to gain further insight into how our systems function as well as comparative insight for broader generality.
Hypothesis linking causes, response functions and consequences of a biotic transition from grassland to shrubland. Parallel hypothesis can be derived for other transitions, as well, such as the transition from woodland to grassland in response to regional drought and the transition from native to non-native species along the Rio Grande in response to human alterations of the natural disturbance regime.
Integration of Ongoing and Planned Research Projects
We have organized our research efforts around three interrelated system components: abiotic drivers, ecosystem processes and biotic responses and feedbacks. In our case, the main abiotic drivers are (1) seasonal, annual and decadal variations in climate, (2) geomorphology, soil texture and depth, and surface hydrology, and (3) season and periodicity of fire. These abiotic drivers affect biogeochemical cycles, particularly nitrogen, phosphorus and carbon, as well as water storage, use and losses. Biotic responses to the coupling of these abiotic drivers and ecosystem processes include patterns and controls on net primary production, and the distribution, abundance, diversity and dynamics of plant and animal populations and communities. Although there is considerable research linking primary production and plant community structure (Waide et al. 1999, Mittelbach et al. 2001), one of the core activities of the Sevilleta LTER has been investigations of fluxes in NPP and their impact on the distribution and abundance of consumers, particularly small mammal populations (Ernest et al. 2000, Friggens 2003). This has direct relevance to human health issues in response to the regional prevalence and potential outbreaks of vector-borne diseases, such as hantavirus and plague (Yates et al. 2003).
Sevilleta LTER Research Projects: Integration of experiments and measurements conducted by the Sevilleta LTER program. Asterisks indicate areas in which we plan to increase research efforts in the future.
Patterns & Controls on Net Primary Production
Given that the region including the Sevilleta LTER generally suffers from chronic water and nutrient deficits, we hypothesize that population dynamics in this area are strongly controlled by bottom-up forces, first by water and then by soil nutrients. We do not mean to imply that top-down forces are not important, but we hypothesize that bottom-up forces are significantly stronger than top-down forces. The key biotic driver is net primary production, which varies as a function of temporal availability of moisture coupled with variation in readily accessible nutrients, particularly nitrogen. In general, annual NPP of grasslands at the Sevilleta peaks in July-August each year during the period of the North American Monsoon. During intervening dry periods, plant available nitrogen accumulates in these grassland soils. Longer dry periods lead to greater accumulations of plant available nitrogen (Kieft et al. 1998, White et al., submitted), so that interannual fluctuations in net primary production result from differences in water inputs coupled with the length of time of intervening dry periods. As in mesic grasslands, the periodic coupling of abundant soil moisture and nutrients leads to transient bursts in seasonal NPP that cannot be sustained (Seastedt and Knapp 1993). These bursts in NPP then provide the template for extreme population fluxes of small mammals. Population sizes of higher-level consumers, such as coyotes, lag behind the population growth of the herbivores in this system. Thus, much of the temporal dynamics in consumer populations appears to be driven by bottom-up forces, followed by rapid declines as NPP declines.
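The lagged, bottom-up cascade described in this paragraph — a wet-season pulse drives an NPP burst, small-mammal numbers track NPP with a delay, and higher-level consumers such as coyotes lag further still — can be caricatured with a discrete-time toy model. The one-season lags and coefficients below are invented purely for illustration:

```python
T = 8
rain = [1 if t == 2 else 0 for t in range(T)]                  # a single wet season at t = 2
npp = [10 * rain[t] for t in range(T)]                         # NPP bursts within the same season
herbivores = [npp[t - 1] if t >= 1 else 0 for t in range(T)]   # small mammals lag one season
predators = [herbivores[t - 1] if t >= 1 else 0 for t in range(T)]  # top consumers lag one more

def peak_season(series):
    """Index of the first maximum in a time series."""
    return series.index(max(series))

# Peaks arrive one trophic level -- and one season -- at a time.
print(peak_season(npp), peak_season(herbivores), peak_season(predators))  # 2 3 4
```

The staggered peaks are the qualitative signature of bottom-up control: each trophic level's boom follows, rather than precedes, the boom in its resource.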
Production begins in spring following the winter rains and peaks in July or August as the dominant C4 grasses respond to the North American Monsoon. In general, spring production reflects the growth of C3 species, especially annuals in response to precipitation and perhaps nitrogen availability. Periodic fire, nutrient accumulation, and highly localized monsoonal storms lead to high spatial and temporal variation in summer, and thus annual NPP.
One of the challenges for any LTER site is to seek a balance between understanding the details of a particular research site and addressing a suite of broadly based questions derived from ecological theory. We believe that research at the Sevilleta LTER site has led to a number of significant accomplishments of broad scientific significance and of relevance for the growth and maturation of the SEV LTER Program. Here we highlight several of those accomplishments and then describe our objectives for the next phase of research and development of the program.
Long-term research at the Sevilleta has documented the considerable resilience of these nutrient-poor, aridland ecosystems to disturbances, such as fire and grazing (e.g., Gosz and Gosz 1996, Ryerson and Parmenter 2001, Peters 2002a,b). The region has a long history of grazing, and in some cases, this has led to significant ecosystem degradation. In addition, the site has experienced several lightning-caused fires over the past 10 years. In both cases, there is clear evidence that species composition, soil resources, and standing crop biomass have returned to predisturbance conditions relatively rapidly (Munson et al., in prep). This resilience is somewhat surprising given the extreme environmental constraints that govern biotic processes in this region.
When the Sevilleta LTER program began, many thought that biotic transitions in the Southwest would be strongly driven by interannual variation in climate, particularly in response to ENSO events. However, long-term research at the Sevilleta demonstrates that this original ENSO hypothesis is not correct. Not only are these ecosystems resilient over time, but they also appear to be relatively stable across large spatial scales. There is little evidence of large-scale biotic transition being driven by ENSO events (Li 2000, 2002).
Detailed mechanistic studies coupled with long-term data led us to conceptualize a broadly applicable general model of biotic transitions that links patch and edge dynamics (Peters et al., submitted). Traditionally, patch dynamics and boundary dynamics have been treated as somewhat independent phenomena. But, in many cases, the dynamics of a patch are explicitly a function of the dynamics of the patch boundary. As landscapes worldwide continue to be modified by human activities, boundaries will be an increasingly important feature of landscapes. Our patch dynamics-boundary dynamics model can provide a framework for understanding the causes and consequences of landscape change in other ecosystems.
Through a suite of observational and manipulative experiments, we have gained knowledge that is specific to the Sevilleta study area concerning end-member interactions and dynamics in each of our study systems (e.g., Gosz and Gosz 1996, Peters 2002, Bhark and Small 2003). By end-member interactions we mean detailed understanding of pattern and process in core areas dominated by blue grama, black grama, creosote bush, riparian forests, and piñon-juniper woodland. Through this knowledge, we will now begin to expand our efforts into more complex mixtures of species to more fully understand the dynamics of biotic transitions in space and time.
Conceptual model of structure-function interactions in and across biotic transition zones.
One of the advantages of LTER is the opportunity to establish long-term experimental manipulations that provide the foundation for integrated studies of ecological systems. To that end, we have garnered external funding to initiate a long-term, integrated rainfall manipulation experiment. Rainfall manipulation shelters are being used to modify ambient climatic variables to allow us to more fully understand the role of water inputs and fluxes in these arid land ecosystems, as well as how well these ecosystems recover from extended drought.
LTER programs generate a considerable amount of complex data, and information management is one of the key goals of the LTER Network as a whole. The Sevilleta LTER has implemented an information management system fully in compliance with LTER Network goals and objectives. The information manager interacts with researchers from project inception to conclusion to ensure that well-documented, high quality data are archived and made publicly accessible within two years after the project ends. Research at the Sevilleta is supported by a UNIX server offering file, web, and email services as well as software including SAS, ArcGIS and ERDAS Imagine. Synthetic research and educational activities by the broader community of ecological scientists are fostered by Sevilleta contributions to network-level databases such as ClimDB, translation of Sevilleta metadata into EML (the LTER network metadata standard), and participation in research projects such as SEEK, the Scientific Environment for Ecological Knowledge.
Sevilleta Schoolyard Program
Finally, the Sevilleta LTER is proud of its educationally ambitious and scientifically rigorous Schoolyard LTER Program. This program, the Bosque Ecosystem Monitoring Program (BEMP), meets national and state educational standards for science education, involves hundreds of schoolchildren each year, connects K-12 students and teachers with UNM undergraduate interns and faculty, provides a source of curriculum activities for school teachers, and produces scientifically rigorous long-term data on the riparian ecosystems of the middle Rio Grande Basin. Currently, BEMP includes 14 school systems throughout the Middle Rio Grande Basin, including two Indian Pueblo schools, a variety of schools in the City of Albuquerque, plus rural school systems as far south as the Sevilleta.
Looking Ahead: Future Goals
Together, the research and educational accomplishments of the Sevilleta LTER Program are of national, regional, state and local significance. Where are we going from here? The SEV LTER has a number of scientific and educational objectives that will guide many of our activities over the next decade:
New hypotheses. We will derive a new set of hypotheses on the impacts of long-term climate drivers, such as the Pacific Decadal Oscillation, on biotic transitions and ecosystem functioning.
Large-scale precipitation experiment. In the next 1-2 years we plan to construct large-scale (minimum 100 meter) irrigation transects that will allow us to simulate climatic pulses, such as extreme precipitation events, increased total winter precipitation, decreased intervals between precipitation events, etc., to test a variety of ecological hypotheses.
Increase core research activities. We are in the process of increasing core research activities in areas of more complex mixed-species assemblages to better understand transition dynamics in this aridland ecosystem. Also, we are revitalizing core research sites in the piñon-juniper woodland.
Seek additional funding. We will continue to seek external funding to allow us to expand our research efforts in the piñon-juniper woodland and bosque forests.
Intensify research on NPP. We will explicitly increase our research to gain a much-improved knowledge of pattern and controls of net primary production, particularly in the grassland and forest areas.
Along with these research goals, we will greatly expand our micrometeorological network of wireless sensors into a small-scale, real-time sensor grid for the measurement of soil moisture, relative humidity, soil and air temperatures, and solar radiation. This will be coupled with integrated process-based measurements. Together, we anticipate that these sensor networks will allow us to achieve a more integrated understanding of ecosystem dynamics. This will facilitate linking ground-based and remotely sensed data, currently a significant research challenge in any system, as a means of scaling up from plot to site to region. One avenue that will facilitate this effort is our continuing partnership with SEEK, the Scientific Environment for Ecological Knowledge, a large Information Technology Research project at the LTER Network Office in Albuquerque. Finally, we plan to seek resources to vastly increase base support for our outstanding Schoolyard LTER Program.
for National Geographic News
Humans have had a refined artistic bent for at least 33,000 years, according to the discovery of three deftly carved ivory figurines in a cave in southwestern Germany. The miniature statues include a horse, a diving waterfowl, and a half-man, half-lion.
The figurines come from an ongoing excavation of Hohle Fels Cave in the Ach Valley and are dated to a time when some of the earliest known relatives of modern humans populated Europe, an era known as the Aurignacian.
The discovery complements similarly dated ivory sculptures recovered from three other Aurignacian caves in the Ach and Lone Valleys of Germany, adding support to the belief that by 30,000 years ago humans were culturally modern.
The half-man, half-lion figurine, known as a Lowenmensch, particularly excited Nicholas Conard, a paleoanthropologist at the University of Tuebingen in Germany, who describes the figurines in tomorrow's issue of the science journal Nature.
"I'm usually very calm actually; I've been digging for a long time," he said. "But that got my heart pumping a bit."
The Lowenmensch is the second such figurine found. German archaeologists discovered one in 1939 at an Aurignacian site in the Lone Valley. "If there are two, there must be hundreds of these things, they must have been part of daily life," said Conard.
The newly discovered Lowenmensch is of comparable age. The ivory figurines from these four German sites are among the oldest examples of figurative art known worldwide, Conard added.
The figurines are each well polished from heavy handling, suggesting that rather than sitting on a shelf as an artifact to be admired they played a central role in the culture of these early Europeans.
For decades, archaeologists have debated the cultural significance of the figurines. The new finds, said Conard, place some constraints on the interpretations.
One of the main theories, championed by the late German archaeologist Joachim Hahn, is that they represent powerful, fast, and aggressive animals, reflecting admiration, fear and respect for them.
Another theory, supported by South African archaeologist David Lewis-Williams, among others, is that the figurines are evidence of shamanism.
CALCULATE MOLECULAR WEIGHT - MOLAR MASS CALCULATOR
Enter a chemical formula to calculate its molar mass and elemental composition:
Computing molar mass (molar weight)
To calculate the molar mass of a chemical compound, enter its formula and click 'Calculate!'. In the chemical formula you may use:
- Any chemical element
- Functional groups: D, Ph, Me, Et, Pr, Bu, AcAc, Ac, For, Ts, Tos, Bz, TMS, tBu, Bzl, Bn, Dmg
- Parentheses () or brackets []
Computing molecular weight (molecular mass)
To calculate the molecular weight of a chemical compound, enter its formula and specify each element's isotope mass number after it in square brackets.
Definitions of molecular mass, molecular weight, molar mass and molar weight
Weights of atoms and isotopes are taken from NIST data.
- Molecular mass (molecular weight) is the mass of one molecule of a substance and is expressed in the unified atomic mass units (u). (1 u is equal to 1/12 the mass of one atom of carbon-12)
- Molar mass (molar weight) is the mass of one mole of a substance and is expressed in g/mol.
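The parse-and-sum step behind such a calculator can be sketched in a few lines. This is a minimal illustration, not the site's actual implementation: it handles flat formulas only (no parentheses, functional groups, or isotope brackets) and uses a small hand-coded table of standard atomic weights.

```python
import re

# Approximate standard atomic weights in g/mol for a few common elements.
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999, "S": 32.06}

def molar_mass(formula: str) -> float:
    """Molar mass of a flat formula like 'H2O' or 'C6H12O6' (no parentheses)."""
    total = 0.0
    # Each token is an element symbol (capital + optional lowercase letter)
    # followed by an optional count; a missing count means 1.
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if symbol:
            total += ATOMIC_MASS[symbol] * (int(count) if count else 1)
    return total

print(round(molar_mass("H2O"), 3))      # 18.015
print(round(molar_mass("C6H12O6"), 3))  # 180.156
```

A full implementation would recurse on parenthesized groups and substitute isotope masses when a mass number appears in square brackets.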
The Democratic Party is one of the two major contemporary political parties in the United States, along with the Republican Party. Since the 1930s, the party has promoted a socially liberal and progressive platform, and its Congressional caucus is composed of progressives, liberals, and centrists. The party has the longest record of continuous operation in the United States and is among the oldest political parties in the world. Current President of the United States Barack Obama is the 15th Democrat to hold the office. As of the 113th Congress, following the 2012 elections, the Democratic Party holds a minority of seats in the House of Representatives and a majority of seats in the Senate, as well as a minority of state governorships and control of a minority of state legislatures. (via Freebase)
Most filter elements are ceramic, glass fibre or composite depth filters that trap microorganisms in a maze of tiny passageways and dead-ends. You can usually clean ceramic filter elements by abrading the surface with a scrub pad after it begins to clog with silt, vegetative debris and organisms. Generally, non-ceramic filters cannot be cleaned by scrubbing but some can be back-flushed. Non-cleanable primary or back-up filter elements made of paper, metal, or glass fibre have to be discarded when they clog.
Filter Pore Sizes
Filter pore size, usually expressed in micrometres (microns), ranges from 6 microns down to 0.2 microns. Manufacturer literature often lists either ‘nominal’ or ‘absolute’ pore sizes.
Absolute pore size is the maximum size of any particle that can pass through a filter tested with microscopic beads. This means that a filter with an absolute pore size of one micron will allow nothing larger than one micron to pass through it.
Nominal pore size indicates a particle-reduction level, usually 90% or better, rather than a hard cutoff. This means that a filter with a nominal pore size of one micron may allow up to 10% of particles larger than one micron in diameter to pass through the filter.
It is important that you assess and compare water filters only on their absolute pore size and not their nominal pore size.
In addition, while pore sizes are tested with rigid microscopic beads, microorganisms are flexible, so you should leave a margin of error when selecting a filter for a particular task. For example, if you are selecting a water filter for Cryptosporidium oocysts, which are as small as three microns, it is preferable to choose a filter with an absolute pore size in the one- or two-micron range, or smaller, to ensure that no oocysts are forced through the filter under the pressure of pumping.
While microfilters are very effective against protozoa, helminth eggs and larvae, and even tiny bacteria, no filter element has pores small enough to physically remove viruses unless the viruses are attached to larger particles or clumped together.
The smallest microfilter pore size currently available is in the 0.2 micron range while viruses can be as small as 27 nm (0.027 microns), which is only 1/7 the diameter of the pores.
| Pathogen | Organism size (microns) | Maximum absolute pore size (microns) |
| --- | --- | --- |
| Viruses | 0.027–0.05 | Too small to filter with hand-held devices |
| Worm eggs and larvae | 20–150 | 5–6 |
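The margin-of-error guidance above can be expressed as a small helper. The `safety_factor` default of 1.5 is an illustrative assumption chosen to match the Cryptosporidium example (a 3-micron oocyst versus a filter in the one- to two-micron range); it is not an industry standard.

```python
def filter_adequate(pathogen_size_um: float, absolute_pore_um: float,
                    safety_factor: float = 1.5) -> bool:
    """True if the filter's absolute pore size, scaled by a safety margin,
    is still no larger than the pathogen. Sizes are in microns.

    The safety factor accounts for flexible organisms squeezing through
    pores under pumping pressure; 1.5 here is an illustrative choice.
    """
    return absolute_pore_um * safety_factor <= pathogen_size_um

# A 2-micron absolute filter leaves enough margin for 3-micron oocysts:
print(filter_adequate(3.0, 2.0))  # True
# Viruses (~0.05 microns) are far below even a 0.2-micron microfilter:
print(filter_adequate(0.05, 0.2))  # False
```

Note that the comparison always uses absolute pore size; a nominal rating cannot be used this way, since it permits some larger particles through.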
1810-1860 American Industrial Revolution:
1810, US industry $115 million worth of goods; 1860, $2 billion - second only to Britain in total output.
1810 most cloth still made in homes; 1860 nation's largest industry.
Boston merchant Francis Cabot Lowell:
inspected textile mills of Manchester - power looms;
1820s established new mill town, Lowell MA
idea of industrialization reinforcing American virtues;
new labor source - farmers' daughters in New England;
women working about 73 hours a week, 12/13 hours a day;
paternalistic system of labor management - protected environment,
company-run boardinghouses, close supervision;
nurturing cultural environment - library, lecture series, magazine Lowell Offering.
two levels of waterwheels powering 10,000 looms, employing over 10,000 women & men.
At peak producing one million yards of cloth per week;
1850, Lowell second largest city in Mass.
1830s, increasing competition;
1834 company announced wage cuts up to 15%
strike - 800 workers walked out;
1836 another strike;
1844 Sarah Bagley (come to Lowell from NH 1836) - organized Lowell Female Labor Reform Association;
2000 signatures on petition demanding 10-hour workday;
Testified before Mass. legislature
Set stage for later actions - showed that working women not docile, ready to defend well-being & organize;
Lowell employed immigrant Irish workers;
1860, over 60,000 women employed in New England cotton textile industry alone - more in papermaking, shoemaking;
overall, more working on family farms, as domestic servants, or in "feminine occupations" - millinery, schoolteaching;
1853 Sarah Josepha Hale, Godey's: "Home is woman's world; the training of the young her profession; the happiness of the household her riches; the improvement of morals her glory. Such would be her position were the world rightly ordered."
"Louis A. Godey employs now 88 female operatives in the different departments of the Lady's Book!"
1830s, one out of every four or five American-born white women in Massachusetts taught at some point in life.
Catherine Beecher: "Let every woman become so cultivated and refined in intellect that her taste and judgment will be respected; so benevolent in feeling and action that her motives will be reverenced; so unassuming and unambitious that competition will be banished; so gentle and easy... that every heart will repose in her presence; then, the fathers, husbands, and sons will find an influence thrown around them...."
"separate spheres" - "cult of domesticity" or the "cult of true womanhood";
Early 1800s immigrants from Ireland, Germany
This is an image of the Moon showing various minerals found on the surface.
Click on image for full size
Evidence about the Formation of the Moon
Any successful theory must account for everything we know about the
Moon now. Those things include
- the moon seems to be made of the same material as the Earth's upper mantle.
- the Moon has little or no iron in it, and is composed of material unlike the composition of the Earth as a whole.
- the Moon is rounded in shape, like the other planets and not like an asteroid or comet
- the moon orbits in the same direction as the Earth
- the Moon is located in the same plane as the Earth (the ecliptic plane)
- theory suggests that the Moon has drifted away from the Earth over geologic history.
- this means that the Moon was once much closer to the Earth
- this means that the Moon was once bigger in the sky and much brighter than at present
A hemangioma is an abnormal buildup of blood vessels in the skin or internal organs.
Cavernous hemangioma; Strawberry nevus
Causes, incidence, and risk factors:
About 30% of hemangiomas are present at birth. The rest appear in the first several months of life.
The hemangioma may be:
- In the top skin layers (capillary hemangioma)
- Deeper in the skin (cavernous hemangioma)
- A mixture of both
- A red to reddish-purple, raised sore (lesion) on the skin
- A massive, raised tumor with blood vessels
Most hemangiomas are on the face and neck.
Signs and tests:
Hemangiomas are diagnosed by a physical examination. In the case of deep or mixed lesions, a CT or MRI scan may be performed.
Occasionally, a hemangioma may occur with other rare conditions. Additional tests may be done for these syndromes.
Superficial or "strawberry" hemangiomas often are not treated. When they are allowed to disappear on their own, the result is usually normal-appearing skin. In some cases, a laser may be used to remove the small vessels.
Cavernous hemangiomas that involve the eyelid and block vision are generally treated with steroid injections or laser treatments. These quickly reduce the size of the lesions, allowing vision to develop normally. Large cavernous hemangiomas or mixed hemangiomas may be treated with oral steroids and injections of steroids directly into the hemangioma.
Recently, lasers have been used to reduce the size of the hemangiomas. Lasers that emit yellow light damage the vessels in the hemangioma without damaging the skin over it. Some physicians use a combination of steroid injection and laser therapy.
Small, superficial hemangiomas often disappear on their own. About 50% go away by age 5, and 90% are gone by age 9.
- Bleeding (especially if the hemangioma is injured)
- Problems with breathing and eating
- Psychological problems, from skin appearance
- Secondary infections and sores
- Visible changes in the skin
- Vision problems (amblyopia, strabismus)
Calling your health care provider:
All birthmarks, including hemangiomas, should be evaluated by the health care provider during a routine examination.
Hemangiomas of the eyelid may interfere with the development of normal vision and must be treated in the first few months of life. Hemangiomas that interfere with breathing, feeding, or other vital functions should also be treated early.
There is no known way to prevent hemangiomas.
|Review Date: 10/3/2008|
Reviewed By: Kevin Berman, MD, PhD, Atlanta Center for Dermatologic Disease, Atlanta, GA. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
Luis Ángel Firpo sends Jack Dempsey outside the ring; painting by George Bellows
|Also known as||Pugilism, English boxing, Western Boxing, Sweet Science, Gentleman's Sport|
|Country of origin||Greece (ancient boxing); United Kingdom (modern boxing)|
|Olympic sport||Since 688 BC|
Boxing (pugilism, prize fighting, the sweet science or in Greek pygmachia) is a martial art and combat sport in which two people engage in a contest of strength, speed, reflexes, endurance, and will by throwing punches with gloved hands against another opponent.
Amateur boxing is an Olympic and Commonwealth sport and is a common fixture in most of the major international games; it also has its own World Championships. Boxing is supervised by a referee over a series of one- to three-minute intervals called rounds. The result is decided when an opponent is deemed incapable of continuing by a referee, is disqualified for breaking a rule, resigns by throwing in a towel, or is pronounced the winner or loser based on the judges' scorecards at the end of the contest.
The birth hour of boxing as a sport may be its acceptance by the ancient Greeks as an Olympic game as early as 688 BC. Boxing evolved from 16th- and 18th-century prizefights, largely in Great Britain, to the forerunner of modern boxing in the mid-19th century, again initially in Great Britain and later in the United States. In 2004, ESPN ranked boxing as the most difficult sport in the world.
Early history
- See also Ancient Greek boxing
Fist fighting was first depicted in Sumerian relief carvings (in modern-day Iraq) from the 3rd millennium BC, while an ancient Egyptian relief from the 2nd millennium BC depicts both fist-fighters and spectators. Both depictions show bare-fisted contests. Other depictions can be seen in Assyrian, Babylonian and Hittite art. The earliest evidence for fist fighting with any kind of gloves can be found on Minoan Crete (c. 1500–900 BC), and on Sardinia, if we count the boxing statues of the Prama mountains (c. 2000–1000 BC).
Modern boxing
Broughton's rules (1743)
Records of Classical boxing activity disappeared after the fall of the Western Roman Empire when the wearing of weapons became common once again and interest in fighting with the fists waned. However, there are detailed records of various fist-fighting sports that were maintained in different cities and provinces of Italy between the 12th and 17th centuries. There was also a sport in ancient Rus called Kulachniy Boy or "Fist Fighting".
As the wearing of swords became less common, there was renewed interest in fencing with the fists. The sport would later resurface in England during the early 16th century in the form of bare-knuckle boxing, sometimes referred to as prizefighting. The first documented account of a bare-knuckle fight in England appeared in 1681 in the London Protestant Mercury, and the first English bare-knuckle champion was James Figg in 1719. This is also the time when the word "boxing" first came to be used. Notably, this earliest form of modern boxing was very different: contests in Figg's time, in addition to fistfighting, also included fencing and cudgeling. On 6 January 1681, the first recorded boxing match took place in Britain when Christopher Monck, 2nd Duke of Albemarle (and later Lieutenant Governor of Jamaica) engineered a bout between his butler and his butcher, with the latter winning the prize.
Early fighting had no written rules. There were no weight divisions or round limits, and no referee. In general, it was extremely chaotic. The first boxing rules, called Broughton's rules, were introduced by champion Jack Broughton in 1743 to protect fighters in the ring, where deaths sometimes occurred. Under these rules, if a man went down and could not continue after a count of 30 seconds, the fight was over. Hitting a downed fighter and grasping below the waist were prohibited. Broughton also invented and encouraged the use of "mufflers", a form of padded gloves, which were used in training and exhibitions. The first paper on boxing had been published in the early 1700s by Sir Thomas Parkyns, a successful Cornish-style wrestler from Bunny, Nottinghamshire, who had studied physics under Sir Isaac Newton; it was actually a single page in his extensive wrestling and fencing manual, describing a system of headbutting, punching, eye gouging, chokes and hard throws not common in modern boxing.
These rules did allow the fighters an advantage not enjoyed by today's boxers; they permitted the fighter to drop to one knee to begin a 30-second count at any time. Thus a fighter realizing he was in trouble had an opportunity to recover. However, this was considered "unmanly" and was frequently disallowed by additional rules negotiated by the Seconds of the Boxers. Intentionally going down in modern boxing will cause the recovering fighter to lose points in the scoring system. Furthermore, as the contestants did not have heavy leather gloves and wristwraps to protect their hands, they used different punching technique to preserve their hands because the head was a common target to hit full out as almost all period manuals have powerful straight punches with the whole body behind them to the face (including forehead) as the basic blows.
London Prize Ring rules (1838)
- Fights occurred in a 24 feet (7.3 m)-square ring surrounded by ropes.
- If a fighter were knocked down, he had to rise within 30 seconds under his own power to be allowed to continue.
- Biting, headbutting and hitting below the belt were declared fouls.
Marquess of Queensberry rules (1867)
In 1867, the Marquess of Queensberry rules were drafted by John Chambers for amateur championships held at Lillie Bridge in London for Lightweights, Middleweights and Heavyweights. The rules were published under the patronage of the Marquess of Queensberry, whose name has always been associated with them.
There were twelve rules in all, and they specified that fights should be "a fair stand-up boxing match" in a 24-foot-square or similar ring. Rounds were three minutes with one-minute rest intervals between rounds. Each fighter was given a ten-second count if he were knocked down, and wrestling was banned.
The introduction of gloves of "fair-size" also changed the nature of the bouts. An average pair of boxing gloves resembles a bloated pair of mittens and are laced up around the wrists. The gloves can be used to block an opponent's blows. As a result of their introduction, bouts became longer and more strategic with greater importance attached to defensive maneuvers such as slipping, bobbing, countering and angling. Because less defensive emphasis was placed on the use of the forearms and more on the gloves, the classical forearms outwards, torso leaning back stance of the bare knuckle boxer was modified to a more modern stance in which the torso is tilted forward and the hands are held closer to the face.
Through the late nineteenth century, boxing or prizefighting was primarily a sport of dubious legitimacy. Outlawed in England and much of the United States, prizefights were often held at gambling venues and broken up by police. Brawling and wrestling tactics continued, and riots at prizefights were common occurrences. Still, throughout this period, there arose some notable bare knuckle champions who developed fairly sophisticated fighting tactics.
The English case of R v. Coney in 1882 found that a bare-knuckle fight was an assault occasioning actual bodily harm, despite the consent of the participants. This marked the end of widespread public bare-knuckle contests in England.
Throughout the early twentieth century, boxers struggled to achieve legitimacy, aided by the influence of promoters like Tex Rickard and the popularity of great champions from John L. Sullivan to Jack Dempsey. Shortly after this era, boxing commissions and other sanctioning bodies were established to regulate the sport and establish universally recognized champions.
The Marquess of Queensberry rules have been the general rules governing modern boxing since their publication in 1867.
A boxing match typically consists of a predetermined number of three-minute rounds, a total of up to 12 rounds (formerly 15). A minute is typically spent between each round with the fighters in their assigned corners receiving advice and attention from their coach and staff. The fight is controlled by a referee who works within the ring to judge and control the conduct of the fighters, rule on their ability to fight safely, count knocked-down fighters, and rule on fouls.
Up to three judges are typically present at ringside to score the bout and assign points to the boxers, based on punches that connect, defense, knockdowns, and other, more subjective, measures. Because of the open-ended style of boxing judging, many fights have controversial results, in which one or both fighters believe they have been "robbed" or unfairly denied a victory. Each fighter has an assigned corner of the ring, where his or her coach, as well as one or more "seconds", may attend to the fighter at the beginning of the fight and between rounds. Each boxer enters the ring from their assigned corner at the beginning of each round and must cease fighting and return to their corner at the signaled end of each round.
A bout in which the predetermined number of rounds passes is decided by the judges, and is said to "go the distance". The fighter with the higher score at the end of the fight is ruled the winner. With three judges, unanimous and split decisions are possible, as are draws. A boxer may win the bout before a decision is reached through a knockout; such bouts are said to have ended "inside the distance". If a fighter is knocked down during the fight, determined by whether the boxer touches the canvas floor of the ring with any part of their body other than the feet as a result of the opponent's punch and not a slip, as determined by the referee, the referee begins counting until the fighter returns to his or her feet and can continue.
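With three judges, the possible verdicts reduce to a small piece of aggregation logic. The following is an illustrative Python sketch, not any commission's official procedure; the "majority decision" and "majority draw" labels are common boxing terms that go slightly beyond the unanimous/split/draw outcomes named above:

```python
def judges_decision(scorecards):
    """Aggregate three judges' (boxer_a, boxer_b) point totals into a verdict.

    Returns (winner, kind), where winner is 'A', 'B', or 'draw'.
    """
    votes = ['A' if a > b else 'B' if b > a else 'draw' for a, b in scorecards]
    a, b, d = votes.count('A'), votes.count('B'), votes.count('draw')
    if d >= 2:                        # two or more even cards: the bout is a draw
        return ('draw', 'majority draw')
    if a == b:                        # 1-1-1: one card each way plus an even card
        return ('draw', 'split draw')
    winner, wins = ('A', a) if a > b else ('B', b)
    if wins == 3:
        kind = 'unanimous decision'
    elif d == 1:
        kind = 'majority decision'    # two cards for the winner, one even
    else:
        kind = 'split decision'       # the third judge scored it for the loser
    return (winner, kind)
```

For example, three cards of 116-112, 115-113 and 117-111 for the same boxer yield a unanimous decision, while 115-113, 113-115 and 116-112 yield a split decision.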
Should the referee count to ten, then the knocked-down boxer is ruled "knocked out" (whether unconscious or not) and the other boxer is ruled the winner by knockout (KO). A "technical knockout" (TKO) is possible as well, and is ruled by the referee, fight doctor, or a fighter's corner if a fighter is unable to safely continue to fight, based upon injuries or being judged unable to effectively defend themselves. Many jurisdictions and sanctioning agencies also have a "three-knockdown rule", in which three knockdowns in a given round result in a TKO. A TKO is considered a knockout in a fighter's record. A "standing eight" count rule may also be in effect. This gives the referee the right to step in and administer a count of eight to a fighter that he feels may be in danger, even if no knockdown has taken place. After counting the referee will observe the fighter, and decide if he is fit to continue. For scoring purposes, a standing eight count is treated as a knockdown.
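The count and the optional three-knockdown rule amount to a simple check over the knockdowns in a round. A minimal sketch (illustrative only; whether the three-knockdown rule applies varies by jurisdiction and sanctioning body):

```python
def round_stoppage(counts, three_knockdown_rule=True):
    """Given the referee counts reached at each knockdown in one round,
    return 'KO', 'TKO', or None if the round may continue."""
    for c in counts:
        if c >= 10:                   # the boxer failed to beat the count
            return 'KO'
    if three_knockdown_rule and len(counts) >= 3:
        return 'TKO'                  # three knockdowns in one round, where in effect
    return None
```

A single knockdown beaten at "eight" lets the round continue; a count reaching ten is a knockout regardless of how many knockdowns preceded it.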
In general, boxers are prohibited from hitting below the belt, holding, tripping, pushing, biting, or spitting. The boxer's trunks are worn high to mark the belt line, and deliberately striking the groin area to cause pain or injury is a foul. Boxers also may not kick, head-butt, or hit with any part of the arm other than the knuckles of a closed fist (including hitting with the elbow, shoulder or forearm, as well as with open gloves, the wrist, or the inside, back or side of the hand). They are prohibited from hitting the back, the back of the neck or head (a "rabbit punch"), or the kidneys; from holding the ropes for support when punching; from holding an opponent while punching; and from ducking below the belt of their opponent, regardless of the distance between them.
If a "clinch" – a defensive move in which a boxer wraps his or her opponent's arms and holds on to create a pause – is broken by the referee, each fighter must take a full step back before punching again (alternatively, the referee may direct the fighters to "punch out" of the clinch). When a boxer is knocked down, the other boxer must immediately cease fighting and move to the furthest neutral corner of the ring until the referee has either ruled a knockout or called for the fight to continue.
Violations of these rules may be ruled "fouls" by the referee, who may issue warnings, deduct points, or disqualify an offending boxer, causing an automatic loss, depending on the seriousness and intentionality of the foul. An intentional foul that causes injury that prevents a fight from continuing usually causes the boxer who committed it to be disqualified. A fighter who suffers an accidental low-blow may be given up to five minutes to recover, after which they may be ruled knocked out if they are unable to continue. Accidental fouls that cause injury ending a bout may lead to a "no contest" result, or else cause the fight to go to a decision if enough rounds (typically four or more, or at least three in a four-round fight) have passed.
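The outcome of a fight-ending accidental foul depends on how many rounds have been completed. A sketch of that rule as just described (function name and the exact thresholds are illustrative; specific commissions vary):

```python
def accidental_foul_result(rounds_completed, scheduled_rounds):
    """Result when an accidental foul injures a boxer so the bout cannot continue."""
    # At least three completed rounds in a four-round fight, four otherwise,
    # sends the bout to the scorecards as a technical decision.
    threshold = 3 if scheduled_rounds == 4 else 4
    if rounds_completed >= threshold:
        return 'technical decision'   # the judges' scorecards decide the winner
    return 'no contest'
```

So an accidental clash of heads that stops a ten-round bout in round two yields a no contest, while the same clash after four completed rounds goes to the scorecards.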
Unheard of today, but common in North America during the early twentieth century, a "newspaper decision" (NWS) might be rendered after a no-decision bout had ended. A "no decision" bout occurred when, by law or by pre-arrangement of the fighters, no official decision was rendered if both boxers were still standing at the fight's conclusion and there was no knockout; neither boxer was declared the winner. This did not prevent the pool of ringside newspaper reporters from declaring a consensus result among themselves and printing a newspaper decision in their publications. Officially, however, a "no decision" bout resulted in neither boxer winning or losing. Boxing historians sometimes use these unofficial newspaper decisions in compiling fight records, for illustrative purposes only. Similarly, media outlets covering a match today will often score it themselves and publish their unofficial verdict in their report.
Professional vs. amateur boxing
From the 17th through the 19th centuries, boxing bouts were motivated by money, as the fighters competed for prize money, promoters controlled the gate, and spectators bet on the result. The modern Olympic movement revived interest in amateur sports, and amateur boxing became an Olympic sport in 1908. In their current form, Olympic and other amateur bouts are typically limited to three or four rounds, scoring is computed by points based on the number of clean blows landed, regardless of impact, and fighters wear protective headgear, reducing the number of injuries, knockdowns, and knockouts. Currently, scoring blows in amateur boxing are subjectively counted by ringside judges, but the Australian Institute for Sport has demonstrated a prototype of an Automated Boxing Scoring System, which introduces scoring objectivity, improves safety, and arguably makes the sport more interesting to spectators. Professional boxing remains by far the most popular form of the sport globally, though amateur boxing is dominant in Cuba and some former Soviet republics. For most fighters, an amateur career, especially at the Olympics, serves to develop skills and gain experience in preparation for a professional career.
Amateur boxing
Amateur boxing may be found at the collegiate level, at the Olympic Games and Commonwealth Games, and in many other venues sanctioned by amateur boxing associations. Amateur boxing has a point scoring system that measures the number of clean blows landed rather than physical damage. Bouts consist of three rounds of three minutes, each with a one-minute interval between rounds, both in the Olympic and Commonwealth Games and in national ABA (Amateur Boxing Association) bouts.
Competitors wear protective headgear and gloves with a white strip across the knuckle. A punch is considered a scoring punch only when the boxers connect with the white portion of the gloves. Each punch that lands cleanly on the head or torso with sufficient force is awarded a point. A referee monitors the fight to ensure that competitors use only legal blows. A belt worn over the torso represents the lower limit of punches – any boxer repeatedly landing low blows below the belt is disqualified. Referees also ensure that the boxers don't use holding tactics to prevent the opponent from swinging. If this occurs, the referee separates the opponents and orders them to continue boxing. Repeated holding can result in a boxer being penalized or ultimately disqualified. Referees will stop the bout if a boxer is seriously injured, if one boxer is significantly dominating the other or if the score is severely imbalanced. Amateur bouts which end this way may be noted as "RSC" (referee stopped contest) with notations for an outclassed opponent (RSCO), outscored opponent (RSCOS), injury (RSCI) or head injury (RSCH).
Professional boxing
Professional bouts are usually much longer than amateur bouts, typically ranging from ten to twelve rounds, though four round fights are common for less experienced fighters or club fighters. There are also some two- and three-round professional bouts, especially in Australia. Through the early twentieth century, it was common for fights to have unlimited rounds, ending only when one fighter quit, benefiting high-energy fighters like Jack Dempsey. Fifteen rounds remained the internationally recognized limit for championship fights for most of the twentieth century until the early 1980s, when the death of boxer Duk Koo Kim eventually prompted the World Boxing Council and other organizations sanctioning professional boxing to reduce the limit to twelve rounds.
Headgear is not permitted in professional bouts, and boxers are generally allowed to take much more damage before a fight is halted. At any time, however, the referee may stop the contest if he believes that one participant cannot defend himself due to injury. In that case, the other participant is awarded a technical knockout win. A technical knockout would also be awarded if a fighter lands a punch that opens a cut on the opponent, and the opponent is later deemed not fit to continue by a doctor because of the cut. For this reason, fighters often employ cutmen, whose job is to treat cuts between rounds so that the boxer is able to continue despite the cut. If a boxer simply quits fighting, or if his corner stops the fight, then the winning boxer is also awarded a technical knockout victory. In contrast with amateur boxing, professional male boxers have to be bare chested.
Boxing styles
Definition of Style
"Style" is often defined as the strategic approach a fighter takes during a bout. No two fighters' styles are alike, as each is determined by that individual's physical and mental attributes. There are three main styles in boxing: the out-fighter ("boxer"), the brawler (or slugger), and the in-fighter ("swarmer"). These styles may be divided into several special subgroups, such as the counter puncher. The underlying philosophy is that each style holds an advantage over one of the others but a disadvantage against the third, following a rock-paper-scissors pattern: the boxer beats the brawler, the swarmer beats the boxer, and the brawler beats the swarmer.
A classic "boxer" or stylist (also known as an "out-fighter") seeks to maintain distance between himself and his opponent, fighting with faster, longer range punches, most notably the jab, and gradually wearing his opponent down. Due to this reliance on weaker punches, out-fighters tend to win by point decisions rather than by knockout, though some out-fighters have notable knockout records. They are often regarded as the best boxing strategists due to their ability to control the pace of the fight and lead their opponent, methodically wearing him down and exhibiting more skill and finesse than a brawler. Out-fighters need reach, hand speed, reflexes, and footwork.
Notable out-fighters include Muhammad Ali, Larry Holmes, Joe Calzaghe, Floyd Mayweather Jr., Salvador Sanchez, Gene Tunney, Ezzard Charles, Willie Pep, Meldrick Taylor, Ricardo Lopez, Roy Jones, Jr., and Sugar Ray Leonard. This style was also used by fictional boxer Apollo Creed.
A boxer-puncher is a well-rounded boxer who is able to fight at close range with a combination of technique and power, often with the ability to knock opponents out with a combination and in some instances a single shot. Their movement and tactics are similar to that of an out-fighter (although they are generally not as mobile as an out-fighter), but instead of winning by decision, they tend to wear their opponents down using combinations and then move in to score the knockout. A boxer must be well rounded to be effective using this style.
Notable boxer-punchers include Wladimir Klitschko, Lennox Lewis, Joe Louis, Oscar de la Hoya, Archie Moore, Manny Pacquiao, Miguel Cotto, Nonito Donaire, Sam Langford, Henry Armstrong, Sugar Ray Robinson, Tony Zale, Carlos Monzón, Alexis Argüello, Erik Morales, Terry Norris, Marco Antonio Barrera, Naseem Hamed, Thomas Hearns and Victor Ortiz.
Counter puncher
Counter punchers are slippery, defensive style fighters who often rely on their opponent's mistakes in order to gain the advantage, whether it be on the score cards or more preferably a knockout. They use their well-rounded defense to avoid or block shots and then immediately catch the opponent off guard with a well placed and timed punch. A fight with a skilled counter-puncher can turn into a war of attrition, where each shot landed is a battle in itself. Thus, fighting against counter punchers requires constant feinting and the ability to avoid telegraphing ones attacks. To be truly successful using this style they must have good reflexes, a high level of prediction and awareness, pinpoint accuracy and speed, both in striking and in footwork.
Notable counter punchers include Vitali Klitschko, Floyd Mayweather, Jr., Evander Holyfield, Max Schmeling, Chris Byrd, Jim Corbett, Jack Johnson, Bernard Hopkins, Laszlo Papp, Jerry Quarry, Anselmo Moreno, James Toney, Marvin Hagler, Juan Manuel Márquez, Humberto Soto, Roger Mayweather, Pernell Whitaker and Sergio Gabriel Martinez
Counter punchers usually wear their opponents down by causing them to miss their punches. The more the opponent misses, the faster they tire, and the psychological effects of being unable to land a hit begin to sink in. The counter puncher often tries to outplay their opponent entirely, not just in a physical sense, but also in a mental and emotional sense. This style can be incredibly difficult to execute, especially against seasoned fighters, but the pay-off of winning a fight without getting hit is often worth it. Counter punchers usually try to stay away from the center of the ring, in order to outmaneuver and chip away at their opponents. A large advantage in counter-hitting is the forward momentum of the attacker, which drives them further into the counter punch. As such, knockouts are more common than one would expect from a defensive style.
A brawler is a fighter who generally lacks finesse and footwork in the ring, but makes up for it through sheer punching power. Mainly Irish, Irish-American, Mexican, and Mexican-American boxers popularized this style. Many brawlers tend to lack mobility, preferring a less mobile, more stable platform and have difficulty pursuing fighters who are fast on their feet. They may also have a tendency to ignore combination punching in favour of continuous beat-downs with one hand and by throwing slower, more powerful single punches (such as hooks and uppercuts). Their slowness and predictable punching pattern (single punches with obvious leads) often leaves them open to counter punches, so successful brawlers must be able to absorb substantial amounts of punishment.
A brawler's most important assets are power and chin (the ability to absorb punishment while remaining able to continue boxing). Examples of this style include George Foreman, Sonny Liston, John L. Sullivan, Max Baer, Prince Naseem Hamed, Ray Mancini, David Tua, Arturo Gatti, Micky Ward, Michael Katsidis, James Kirkland, Marcos Maidana, Jake Lamotta, and Ireland's John Duddy. This style of boxing was also used by fictional boxers Rocky Balboa and James "Clubber" Lang. Manny Pacquiao is also a great example of an exceptional brawler.
Brawlers tend to be more predictable and easy to hit, but usually fare well enough against other fighting styles because they train to take punches very well. They often have a higher chance than other stylists of scoring a knockout because they focus on landing big, powerful hits instead of smaller, faster attacks. They often focus their training on the upper body rather than the entire body, to increase power and endurance. They also aim to intimidate their opponents through their power, stature and ability to take a punch.
In-fighters/swarmers (sometimes called "pressure fighters") attempt to stay close to an opponent, throwing intense flurries and combinations of hooks and uppercuts. A successful in-fighter often needs a good "chin" because swarming usually involves being hit with many jabs before they can maneuver inside where they are more effective. In-fighters operate best at close range because they are generally shorter and have less reach than their opponents and thus are more effective at a short distance where the longer arms of their opponents make punching awkward. However, several fighters tall for their division have been relatively adept at in-fighting as well as out-fighting.
The essence of a swarmer is non-stop aggression. Many short in-fighters utilize their stature to their advantage, employing a bob-and-weave defense by bending at the waist to slip underneath or to the sides of incoming punches. Unlike blocking, causing an opponent to miss a punch disrupts his balance, permits forward movement past the opponent's extended arm and keeps the hands free to counter. A distinct advantage that in-fighters have is when throwing uppercuts where they can channel their entire bodyweight behind the punch; Mike Tyson was famous for throwing devastating uppercuts. Julio César Chávez was known for his hard "chin", punching power, body attack and the stalking of his opponents. Some in-fighters, like Mike Tyson, have been known for being notoriously hard to hit. The key to a swarmer is aggression, endurance, chin, and bobbing-and-weaving.
Notable in-fighters include Joe Frazier, Mike Tyson (in his early career), Rocky Marciano, Jack Dempsey, Wayne McCullough, Amir Khan, Harry Greb, David Tua, Ricky Hatton and Julio César Chávez.
Combinations of styles
All fighters have primary skills with which they feel most comfortable, but truly elite fighters are often able to incorporate auxiliary styles when presented with a particular challenge. For example, an out-fighter will sometimes plant his feet and counter punch, or a slugger may have the stamina to pressure fight with his power punches.
Style matchups
There is a generally accepted rule of thumb about the success each of these boxing styles has against the others. In general, an in-fighter has an advantage over an out-fighter, an out-fighter has an advantage over a brawler, and a brawler has an advantage over an in-fighter; these form a cycle with each style being stronger relative to one, and weaker relative to another, with none dominating, as in rock-paper-scissors. Naturally, many other factors, such as the skill level and training of the combatants, determine the outcome of a fight, but the widely held belief in this relationship among the styles is embodied in the cliché amongst boxing fans and writers that "styles make fights."
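The cycle can be written down directly. A toy encoding in Python (the style names are the ones used above; this captures only the rule of thumb, not the many other factors that decide real fights):

```python
# Each style holds the rule-of-thumb advantage over the style it maps to.
BEATS = {
    'in-fighter': 'out-fighter',   # the swarmer closes the distance on the boxer
    'out-fighter': 'brawler',      # the boxer outmaneuvers the slow slugger
    'brawler': 'in-fighter',       # the slugger catches the advancing swarmer
}

def style_advantage(a, b):
    """Return which style, all else being equal, the rule of thumb favors."""
    if BEATS[a] == b:
        return a
    if BEATS[b] == a:
        return b
    return 'even'                  # same style, so no edge either way
```

Note that the mapping is a pure cycle: no style dominates all others, which is exactly why "styles make fights."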
Brawlers tend to overcome swarmers or in-fighters because, in trying to get close to the slugger, the in-fighter will invariably have to walk straight into the guns of the much harder-hitting brawler, so, unless the former has a very good chin and the latter's stamina is poor, the brawler's superior power will carry the day. A famous example of this type of match-up advantage would be George Foreman's knockout victory over Joe Frazier.
Although in-fighters struggle against heavy sluggers, they typically enjoy more success against out-fighters or boxers. Out-fighters prefer a slower fight, with some distance between themselves and the opponent. The in-fighter tries to close that gap and unleash furious flurries. On the inside, the out-fighter loses much of his combat effectiveness, because he cannot throw his hard punches. The in-fighter is generally successful in this case, due to his intensity in advancing on his opponent and his good agility, which makes him difficult to evade. For example, the swarming Joe Frazier, though easily dominated by the slugger George Foreman, was able to create many more problems for the boxer Muhammad Ali in their three fights. Joe Louis, after retirement, admitted that he hated being crowded, and that swarmers like the undefeated champion Rocky Marciano would have caused him style problems even in his prime.
The boxer or out-fighter tends to be most successful against a brawler, whose slow speed (both hand and foot) and poor technique makes him an easy target to hit for the faster out-fighter. The out-fighter's main concern is to stay alert, as the brawler only needs to land one good punch to finish the fight. If the out-fighter can avoid those power punches, he can often wear the brawler down with fast jabs, tiring him out. If he is successful enough, he may even apply extra pressure in the later rounds in an attempt to achieve a knockout. Most classic boxers, such as Muhammad Ali, enjoyed their best successes against sluggers.
An example of a style matchup was the historic fight of Julio César Chávez, a swarmer or in-fighter, against Meldrick Taylor, the boxer or out-fighter (see Chávez versus Taylor). The match was nicknamed "Thunder Meets Lightning" as an allusion to the tremendous punching power of Chávez and the blinding speed of Taylor. Chávez was the epitome of the "Mexican" style of boxing. He relentlessly stalked and closed in on the other fighter, ignoring whatever punishment he took for the chance to dish out his own at close range, particularly in the form of a crunching body attack that would either wear down his opponents until they collapsed in pain and exhaustion, or leave them too tired to defend themselves as Chávez shifted his attack to the head and went for a knockout. During the fight, Taylor's brilliant hand and foot speed and boxing abilities gave him the early advantage, allowing him to build a large lead on points, but in the end, Chávez's punishment wore Taylor down and knocked him down with a tremendous right hand in the last round.
Since boxing involves forceful, repetitive punching, precautions must be taken to prevent damage to bones in the hand. Most trainers do not allow boxers to train and spar without wrist wraps and boxing gloves. Hand wraps are used to secure the bones in the hand, and the gloves are used to protect the hands from blunt injury, allowing boxers to throw punches with more force than if they did not utilize them. Gloves have been required in competition since the late nineteenth century, though modern boxing gloves are much heavier than those worn by early twentieth-century fighters. Prior to a bout, both boxers agree upon the weight of gloves to be used in the bout, with the understanding that lighter gloves allow heavy punchers to inflict more damage. The brand of gloves can also affect the impact of punches, so this too is usually stipulated before a bout.
A mouth guard is important to protect the teeth and gums from injury, and to cushion the jaw, resulting in a decreased chance of knockout. Both fighters must wear soft soled shoes to reduce the damage from accidental (or intentional) stepping on feet. While older boxing boots more commonly resembled those of a professional wrestler, modern boxing shoes and boots tend to be quite similar to their amateur wrestling counterparts.
Boxers practice their skills on two basic types of punching bags. A small, tear-drop-shaped "speed bag" is used to hone reflexes and repetitive punching skills, while a large cylindrical "heavy bag" filled with sand, a synthetic substitute, or water is used to practice power punching and body blows. In addition to these distinctive pieces of equipment, boxers also utilize sport-nonspecific training equipment to build strength, speed, agility, and stamina. Common training equipment includes free weights, rowing machines, jump rope, and medicine balls.
The modern boxing stance differs substantially from the typical boxing stances of the 19th and early 20th centuries. The modern stance has a more upright vertical-armed guard, as opposed to the more horizontal, knuckles-facing-forward guard adopted by early 20th century hook users such as Jack Johnson.
In a fully upright stance, the boxer stands with the legs shoulder-width apart and the rear foot a half-step behind the lead foot. Right-handed or orthodox boxers lead with the left foot and fist (for most penetration power). Both feet are parallel, and the right heel is off the ground. The lead (left) fist is held vertically about six inches in front of the face at eye level. The rear (right) fist is held beside the chin and the elbow tucked against the ribcage to protect the body. The chin is tucked into the chest to avoid punches to the jaw, which commonly cause knockouts, and is often kept slightly off-center. Wrists are slightly bent to avoid damage when punching, and the elbows are kept tucked in to protect the ribcage. Some boxers fight from a crouch, leaning forward and keeping their feet closer together. The stance described is considered the "textbook" stance, and fighters are encouraged to vary it once it has been mastered as a base. Case in point, many fast fighters keep their hands down and use almost exaggerated footwork, while brawlers or bully fighters tend to slowly stalk their opponents.
Left-handed or southpaw fighters use a mirror image of the orthodox stance, which can create problems for orthodox fighters unaccustomed to receiving jabs, hooks, or crosses from the opposite side. The southpaw stance, conversely, is vulnerable to a straight right hand.
North American fighters tend to favor a more balanced stance, facing the opponent almost squarely, while many European fighters stand with their torso turned more to the side. The positioning of the hands may also vary, as some fighters prefer to have both hands raised in front of the face, risking exposure to body shots.
Modern boxers can sometimes be seen tapping their cheeks or foreheads with their fists in order to remind themselves to keep their hands up (which becomes difficult during long bouts). Boxers are taught to push off with their feet in order to move effectively. Forward motion involves lifting the lead leg and pushing with the rear leg. Rearward motion involves lifting the rear leg and pushing with the lead leg. During lateral motion the leg in the direction of the movement moves first while the opposite leg provides the force needed to move the body.
There are four basic punches in boxing: the jab, straight right/left hand, hook and uppercut. If a boxer is right-handed (orthodox), his left hand is the lead hand and his right hand is the rear hand. For a left-handed boxer or southpaw, the hand positions are reversed. For clarity, the following discussion will assume a right-handed boxer.
Several common variations on these basic punches are also distinguished, including the cross-counter (a cross thrown as a counter punch over an opponent's looping lead), the short straight punch (used at short and close range), and the half-uppercut (a combination of a wide uppercut and a straight punch).
- Jab – A quick, straight punch thrown with the lead hand from the guard position. The jab is accompanied by a small, clockwise rotation of the torso and hips, while the fist rotates 90 degrees, becoming horizontal upon impact. As the punch reaches full extension, the lead shoulder can be brought up to guard the chin. The rear hand remains next to the face to guard the jaw. After making contact with the target, the lead hand is retracted quickly to resume a guard position in front of the face.
- The jab is recognised as the most important punch in a boxer's arsenal because it provides a fair amount of its own cover and it leaves the least amount of space for a counter punch from the opponent. It has the longest reach of any punch and does not require commitment or large weight transfers. Due to its relatively weak power, the jab is often used as a tool to gauge distances, probe an opponent's defenses, harass an opponent, and set up heavier, more powerful punches. A half-step may be added, moving the entire body into the punch, for additional power. Some notable boxers who have been able to develop relative power in their jabs and use it to punish or 'wear down' their opponents to some effect include Larry Holmes and Wladimir Klitschko.
- Cross – A powerful, straight punch thrown with the rear hand. From the guard position, the rear hand is thrown from the chin, crossing the body and traveling towards the target in a straight line. The rear shoulder is thrust forward and finishes just touching the outside of the chin. At the same time, the lead hand is retracted and tucked against the face to protect the inside of the chin. For additional power, the torso and hips are rotated counter-clockwise as the cross is thrown.
- Weight is also transferred from the rear foot to the lead foot, resulting in the rear heel turning outwards as it acts as a fulcrum for the transfer of weight. Body rotation and the sudden weight transfer is what gives the cross its power. Like the jab, a half-step forward may be added. After the cross is thrown, the hand is retracted quickly and the guard position resumed. It can be used to counter punch a jab, aiming for the opponent's head (or a counter to a cross aimed at the body) or to set up a hook. The cross can also follow a jab, creating the classic "one-two" combination. The cross is also called a "straight" or "right", especially if it does not cross the opponent's outstretched jab.
- Hook – A semi-circular punch thrown with the lead hand to the side of the opponent's head. From the guard position, the lead fist is drawn back horizontally (knuckles pointing forward) with the elbow bent. The rear hand is tucked firmly against the jaw to protect the chin. The torso and hips are rotated clockwise, propelling the fist through a tight, clockwise arc across the front of the body and connecting with the target.
- At the same time, the lead foot pivots clockwise, turning the left heel outwards. Upon contact, the hook's circular path ends abruptly and the lead hand is pulled quickly back into the guard position. A hook may also target the lower body and this technique is sometimes called the "rip" to distinguish it from the conventional hook to the head. The hook may also be thrown with the rear hand. Notable left hookers include Joe Frazier and Mike Tyson.
- Uppercut – A vertical, rising punch thrown with the rear hand. From the guard position, the torso shifts slightly to the right, the rear hand drops below the level of the opponent's chest and the knees are bent slightly. From this position, the rear hand is thrust upwards in a rising arc towards the opponent's chin or torso.
- At the same time, the knees push upwards quickly and the torso and hips rotate anti-clockwise and the rear heel turns outward, mimicking the body movement of the cross. The strategic utility of the uppercut depends on its ability to "lift" the opponent's body, setting it off-balance for successive attacks. The right uppercut followed by a left hook is a deadly combination employing the uppercut to lift the opponent's chin into a vulnerable position, then the hook to knock the opponent out.
- These different punch types can be thrown in rapid succession to form combinations or "combos". The most common is the jab and cross combination, nicknamed the "one-two combo". This is usually an effective combination, because the jab blocks the opponent's view of the cross, making it easier to land cleanly and forcefully.
- A large, swinging circular punch starting from a cocked-back position with the arm at a longer extension than the hook and all of the fighter's weight behind it is sometimes referred to as a "roundhouse", "haymaker", or sucker-punch. Relying on body weight and centripetal force within a wide arc, the roundhouse can be a powerful blow, but it is often a wild and uncontrolled punch that leaves the fighter delivering it off balance and with an open guard.
- Wide, looping punches have the further disadvantage of taking more time to deliver, giving the opponent ample warning to react and counter. For this reason, the haymaker or roundhouse is not a conventional punch, and is regarded by trainers as a mark of poor technique or desperation. Sometimes it has been used, because of its immense potential power, to finish off an already staggering opponent who seems unable or unlikely to take advantage of the poor position it leaves the puncher in.
- Another unconventional punch is the rarely used bolo punch, in which the opponent swings an arm out several times in a wide arc, usually as a distraction, before delivering with either that or the other arm.
- An illegal punch to the back of the head or neck is known as a rabbit punch.
There are several basic maneuvers a boxer can use in order to evade or block punches, described below.
- Slip – Slipping rotates the body slightly so that an incoming punch passes harmlessly next to the head. As the opponent's punch arrives, the boxer sharply rotates the hips and shoulders. This turns the chin sideways and allows the punch to "slip" past. Muhammad Ali was famous for extremely fast and close slips, as was an early Mike Tyson.
- A fighter who slips well is usually also an effective counterpuncher, striking back immediately after the evaded punch.
- Sway or fade – To anticipate a punch and move the upper body or head back so that it misses or has its force appreciably lessened. Also called "rolling with the punch" or "riding the punch".
- Duck or break – To drop down with the back straight so that a punch aimed at the head glances or misses entirely.
- Bob and weave – Bobbing moves the head laterally and beneath an incoming punch. As the opponent's punch arrives, the boxer bends the legs quickly and simultaneously shifts the body either slightly right or left. Once the punch has been evaded, the boxer "weaves" back to an upright position, emerging on either the outside or inside of the opponent's still-extended arm. To move outside the opponent's extended arm is called "bobbing to the outside". To move inside the opponent's extended arm is called "bobbing to the inside". Joe Frazier, Jack Dempsey, Mike Tyson and Rocky Marciano were masters of bobbing and weaving.
- Parry/block – Parrying or blocking uses the boxer's shoulder, hands or arms as defensive tools to protect against incoming attacks. A block generally receives a punch while a parry tends to deflect it. A "palm" or "cuff" is a block which intentionally takes the incoming punch on that portion of the defender's glove. Floyd Mayweather Jr. is a master of this technique.
- Cover-up – Covering up is the last opportunity (other than rolling with a punch) to avoid an incoming strike to an unprotected face or body. Generally speaking, the hands are held high to protect the head and chin and the forearms are tucked against the torso to impede body shots. When protecting the body, the boxer rotates the hips and lets incoming punches "roll" off the guard. To protect the head, the boxer presses both fists against the front of the face with the forearms parallel and facing outwards. This type of guard is weak against attacks from below.
- The clinch – Clinching is a form of trapping, or a rough form of grappling, that occurs when the distance between both fighters has closed and straight punches cannot be employed. In this situation, the boxer attempts to hold or "tie up" the opponent's hands so he is unable to throw hooks or uppercuts. To perform a clinch, the boxer loops both hands around the outside of the opponent's shoulders, scooping back under the forearms to grasp the opponent's arms tightly against his own body. In this position, the opponent's arms are pinned and cannot be used to attack. Clinching is a temporary state and is quickly broken up by the referee. Clinching is technically against the rules, and in amateur fights points are deducted fairly quickly for it; in professional boxing, however, points are rarely deducted for a clinch.
- Philly shell or shoulder roll – A variation of the cross-arm defense. The lead arm (left for an orthodox fighter, right for a southpaw) is placed across the torso, usually somewhere between the belly button and the chest, with the lead hand resting on the opposite side of the fighter's torso. The back hand is placed on the side of the face (right side for orthodox fighters, left side for southpaws), and the lead shoulder is brought in tight against the other side of the face (left side for orthodox fighters, right side for southpaws). This style is used by fighters who like to counterpunch.
To execute this guard a fighter must be very athletic and experienced. The style is effective for counterpunching because it allows the fighter to slip punches by rotating and dipping the upper body, causing blows to glance off; after a punch glances off, the back hand is in perfect position to hit the out-of-position opponent. The shoulder lean is used in this stance: the fighter rotates and ducks (to the right for orthodox fighters, to the left for southpaws) as the opponent's punch comes in, then rotates back towards the opponent as the opponent's hand is retracted, throwing a punch with the back hand at the now-undefended opponent.
The weakness of this style is that a fighter who is stationary and not rotating is open to being hit, so the fighter must be athletic and well conditioned to execute it effectively. To beat this style, opponents like to jab the fighter's lead shoulder, causing pain and immobilizing that arm. Fighters who have used this defense include Sugar Ray Robinson, Ken Norton, Pernell Whitaker, James Toney and Floyd Mayweather Jr., who is considered the master of the technique.
Less common strategies
- The "rope-a-dope" strategy: Used by Muhammad Ali in his 1974 "The Rumble in the Jungle" bout against George Foreman, the rope-a-dope method involves lying back against the ropes, covering up defensively as much as possible and allowing the opponent to attempt numerous punches. The back-leaning posture, which does not unbalance the defending boxer as much as normal backward movement would, also maximizes the distance of the defender's head from the opponent, increasing the probability that punches will miss their intended target. Weathering the blows that do land, the defender lures the opponent into expending energy while conserving his or her own. If successful, the attacking opponent eventually tires, creating defensive flaws which the boxer can exploit. In modern boxing, the rope-a-dope is generally discouraged, since most opponents are not fooled by it and few boxers possess the physical toughness to withstand a prolonged, unanswered assault. In November 2009, however, eight-division world champion Manny Pacquiao skillfully used the strategy to gauge the power of welterweight titlist Miguel Cotto, following the gambit with a withering knockdown.
- Bolo punch: Occasionally seen in Olympic boxing, the bolo is an arm punch which owes its power to the shortening of a circular arc rather than to the transference of body weight; its effect tends to come from the surprise of the odd angle at which it lands rather than from raw power. It is more of a gimmick than a technical maneuver and is not formally taught, occupying much the same place in boxing technique as the Ali shuffle. Nevertheless, a few professional boxers have used the bolo punch to great effect, including former welterweight champions Sugar Ray Leonard and Kid Gavilan. Middleweight champion Ceferino Garcia is regarded as the inventor of the bolo punch.
- Overhand right: The overhand right is a punch not found in every boxer's arsenal. Unlike the right cross, which has a trajectory parallel to the ground, the overhand right has a looping circular arc as it is thrown over the shoulder with the palm facing away from the boxer. It is especially popular with shorter boxers trying to reach taller opponents. Boxers who have used this punch consistently and effectively include former heavyweight champions Rocky Marciano and Tim Witherspoon, as well as MMA champions Chuck Liddell and Fedor Emelianenko. The overhand right has also become a popular weapon in other combat sports that involve fist striking.
- Check hook: A check hook is employed to prevent aggressive boxers from lunging in. It has two parts: a regular hook, and the trickier footwork. As the opponent lunges in, the boxer throws the hook while pivoting on the left foot and swinging the right foot 180 degrees around. Executed correctly, the aggressive boxer lunges in and sails harmlessly past, like a bull missing a matador. The check hook is rarely seen in professional boxing, as it requires a great disparity in skill level to execute. Some say there is no such thing as a check hook, only an ordinary hook applied to an opponent who has lurched forward and past the puncher; others argue that the check hook exists but is an illegal pivot punch.
Floyd Mayweather Jr. employed a check hook against Ricky Hatton, sending Hatton head first into the corner post and down for a knockdown. Hatton got to his feet but was clearly dazed, and moments later Mayweather landed a flurry of punches that sent him crashing to the canvas, giving Mayweather a TKO victory in the 10th round and handing Hatton his first defeat.
Ring corner
In boxing, each fighter is given a corner of the ring where he rests in between rounds and where his trainers stand. Typically, three men stand in the corner besides the boxer himself: the trainer, the assistant trainer and the cutman. The trainer and assistant typically advise the boxer on what he is doing wrong and encourage him if he is losing. The cutman is a specialist responsible for keeping the boxer's face and eyes free of cuts and blood. This is of particular importance because many fights are stopped because of cuts that threaten the boxer's eyes.
In addition, the corner is responsible for stopping the fight if they feel their fighter is in grave danger of permanent injury. The corner will occasionally throw in a white towel to signify a boxer's surrender (the idiomatic phrase "to throw in the towel", meaning to give up, derives from this practice). An example is the fight between Diego Corrales and Floyd Mayweather, in which Corrales' corner surrendered despite Corrales' steadfast refusal.
Medical concerns
Knocking a person unconscious, or even causing a concussion, may cause permanent brain damage, and there is no clear division between the force required to knock a person out and the force likely to kill. Since 1980, more than 200 amateur boxers, professional boxers and Toughman fighters have died due to ring or training injuries. In 1983, the Journal of the American Medical Association called for a ban on boxing; its editor, Dr. George Lundberg, called boxing an "obscenity" that "should not be sanctioned by any civilized society." Since then, the British, Canadian and Australian Medical Associations have also called for bans on boxing.
Supporters of the ban state that boxing is the only sport in which hurting the other athlete is the goal. Dr. Bill O'Neill, boxing spokesman for the British Medical Association, has supported the BMA's proposed ban: "It is the only sport where the intention is to inflict serious injury on your opponent, and we feel that we must have a total ban on boxing." Opponents respond that this position is misguided, noting that amateur boxing is scored solely on total connecting blows, with no award for "injury". They observe that many skilled professional boxers have had rewarding careers without inflicting injury on opponents, winning rounds scored 10-9 under the 10-point must system by accumulating scoring blows and avoiding punches, and they note that concussions are much more prevalent in many other sports. In 2007, one study of amateur boxers showed that protective headgear did not prevent brain damage, and another found that amateur boxers faced a high risk of brain damage. The Gothenburg study analyzed temporary levels of neurofilament light in cerebrospinal fluid, which the authors concluded was evidence of damage, even though the levels soon subside; more comprehensive studies of neurological function on larger samples, performed by Johns Hopkins University, and accident rates analyzed by the National Safety Council suggest that amateur boxing is a comparatively safe sport.
Professional boxing is forbidden in Norway, Iceland, Iran and North Korea. It was banned in Sweden until 2007, when the ban was lifted but strict restrictions, including four three-minute rounds per fight, were imposed. It was banned in Albania from 1965 until the fall of Communism in 1991 and is now legal there.
Boxing Hall of Fame
The sport of boxing has two internationally recognized halls of fame: the International Boxing Hall of Fame (IBHOF) and the World Boxing Hall of Fame (WBHF), with the IBHOF being the more widely recognized.
The WBHF was founded by Everett L. Sanders in 1980. Since its inception the WBHF has never had a permanent location or museum, which has allowed the more recent IBHOF to garner more publicity and prestige. Among the notable names in the WBHF are Ricardo "Finito" Lopez, Gabriel "Flash" Elorde, Michael Carbajal, Khaosai Galaxy, Henry Armstrong, Jack Johnson, Roberto Durán, George Foreman, Ceferino Garcia and Salvador Sanchez. The International Boxing Hall of Fame was inspired by a tribute that an American town held for two local heroes in 1982. The town, Canastota, New York (about 15 miles (24 km) east of Syracuse via the New York State Thruway), honored former world welterweight/middleweight champion Carmen Basilio and his nephew, former world welterweight champion Billy Backus. The people of Canastota raised money for the tribute, which inspired the idea of creating an official, annual hall of fame for notable boxers.
The International Boxing Hall of Fame opened in Canastota in 1989. The first inductees in 1990 included Jack Johnson, Benny Leonard, Jack Dempsey, Henry Armstrong, Sugar Ray Robinson, Archie Moore, and Muhammad Ali. Other world-class figures include Salvador Sanchez, Roberto "Manos de Piedra" Durán, Ricardo Lopez, Gabriel "Flash" Elorde, Vicente Saldivar, Ismael Laguna, Eusebio Pedroza, Carlos Monzón, Azumah Nelson, Rocky Marciano, Pipino Cuevas and Ken Buchanan. The Hall of Fame's induction ceremony is held every June as part of a four-day event.
The fans who come to Canastota for the Induction Weekend are treated to a number of events, including scheduled autograph sessions, boxing exhibitions, a parade featuring past and present inductees, and the induction ceremony itself.
Governing and sanctioning bodies
- Governing Bodies
- Sanctioning Bodies
- International Boxing Federation (I.B.F.)
- World Boxing Association (W.B.A.)
- World Boxing Council (W.B.C.)
- World Boxing Organization (W.B.O.)
Boxer rankings
There are various organizations and websites that rank boxers in both weight-class and pound-for-pound manner.
See also
- World Professional Boxing Federation
- United States Boxing Council
- Automated Boxing Scoring System
- Boxing at the Summer Olympics
- Boxing training
- Weight class (boxing)
- Boxing styles and technique
- Helmet boxing
- List of current world boxing champions
- List of female boxers
- List of Triple Champions of Boxing
- List of left-handed boxers
- NCAA Boxing Championship
- Purse bid
- White collar boxing
- Women's boxing
- U.S. intercollegiate boxing champions
- Upcoming boxing matches
Lamps were an everyday feature in ancient Palestine. The earliest forms were wheel-made saucers pinched to create a funnel for the wick. Wicks were made of twisted flax fibers, and lamps were filled with several types of oil as fuel, the most common being olive oil. Besides everyday lighting, lamps were also used in cultic rituals. This lamp, for example, is made of bronze and was found in Tomb 1 at Tel Dothan, suggesting it was not used for everyday purposes.
Environmental Assessments at the CNSC
- An Environmental Assessment (EA) is a planning tool to identify and minimize the possible environmental effects of a proposed project, conducted before the project is allowed to proceed.
- In accordance with the Canadian Environmental Assessment Act and its regulations, CNSC manages EAs to make sure nuclear projects are safe for the environment.
- The CNSC’s EA process is slightly different from EA processes at other federal departments and agencies because our Commission Tribunal makes most EA decisions.
- The CNSC manages approximately 25 EAs every year.
- EAs provide an opportunity for public and Aboriginal participation, which can strengthen the quality of an EA.
In this Fact Sheet
What is an Environmental Assessment?
What are the different types of Environmental Assessment?
What is a review panel?
What triggers an Environmental Assessment?
What type of CNSC projects need an Environmental Assessment?
Why is an Environmental Assessment important?
What is looked at during an Environmental Assessment?
What is the CNSC’s regulatory process?
How are the provinces and territories involved in an Environmental Assessment?
How is the scope of an Environmental Assessment determined?
What is an Environmental Impact Statement?
What does the CNSC do with the results of the technical studies and Environmental Impact Statement?
Who makes the decision on the Environmental Assessment?
Can a proponent begin undertaking project activities immediately following a positive EA decision?
Does the proponent have to carry out the project exactly as proposed in the EA?
How does the CNSC ensure the environment is protected during project activities?
How and when do the public and Aboriginal groups get involved?
What is participant funding?
An Environmental Assessment (EA) is a planning tool that federal departments and agencies use to identify the possible environmental effects of a proposed project and to determine if those effects can be mitigated. An EA is conducted before a project is allowed to proceed.
When considering certain licensing decisions, the Canadian Nuclear Safety Commission (CNSC) has EA obligations and responsibilities under the Canadian Environmental Assessment Act (CEAA), which is the basis for federal EAs in Canada.
There are three types of EAs: screenings, comprehensive studies and review panels. Each offers a systematic approach to documenting the environmental effects of a proposed project and determining the need to eliminate or minimize (mitigate) the adverse effects if any, to modify the project plan or to recommend further assessment.
A screening is usually conducted for projects that are unlikely to cause significant adverse environmental effects.
A comprehensive study is typically conducted for a large, complex project with the potential for significant adverse environmental effects and that may also generate significant public interest or concern.
A proposed project can be referred to a review panel or mediator if it is determined it will likely have significant adverse environmental effects, if its potential effects are uncertain, or if public concern warrants a referral.
A review panel is a group of experts who are selected for their knowledge and expertise to conduct an EA and submit recommendations to the Minister of the Environment and to the Responsible Authority (RA) for their consideration in subsequent decision-making. The RA is the federal department or agency that has a regulatory role with respect to the project and must ensure that the EA is completed.
A joint review panel is used for a project that requires a decision from the federal government and another level of government or government agency, such as the CNSC. Typically, a joint review panel is established under a Memorandum of Understanding that is reviewed by the public before it is finalized.
Under the CEAA, an EA is triggered when a federal department or agency:
- proposes a project
- provides financial assistance to an applicant (proponent)
- sells, leases or otherwise transfers the control or administration of federal land
- provides a licence, permit or approval
For the CNSC, an EA is typically triggered because a licence must be amended, approved or issued. These licences can be to prepare, construct, operate, decommission or abandon a site.
For the CNSC, an EA may be conducted for the following typical projects:
- nuclear power plants
- heavy water production plants
- uranium mines and mills
- processing and research facilities
- radioactive waste management facilities
The EA process provides a coordinated, thorough review of environmental, socio-economic and cultural issues associated with a proposed project. An EA enables the decision maker to consider environmental factors, as well as the views of potentially affected Aboriginal groups and the public, and therefore could help minimize or avoid potential adverse environmental effects.
An EA can also determine alternative actions, methods or locations, and other means of carrying out a project to help minimize any potential adverse environmental effects.
EAs provide an opportunity for public and Aboriginal participation. Groups and/or individuals can provide important information on local and traditional knowledge about a proposed project’s site and potential environmental effects, as well as voice concerns and ask questions. This in turn, strengthens the quality of the EA.
EAs consider environmental features or qualities that a community values. This involves assessing both environmental components (i.e. air, water and land) and human components (i.e. human health, traffic and aboriginal interests).
The CNSC’s EA process is slightly different from EA processes at other federal departments and agencies, because the Commission Tribunal makes most EA decisions.
When a proponent submits a licence application for the CNSC’s consideration, the CNSC must determine if an EA is required and if so, decide on the type of EA (screening, comprehensive study or review panel) to be conducted. If an EA is required, the EA decision must happen before any licensing action.
The CNSC is responsible for ensuring an EA is carried out and the Commission Tribunal is responsible for determining if a project is likely to cause significant adverse environmental effects. As the RA, the CNSC notifies other federal departments and agencies to determine if they have a responsibility to conduct an EA (they would also be a RA) or can contribute expert knowledge or information. A federal department or agency that contributes expert knowledge or information is called a Federal Authority (FA).
The CNSC ensures that all its licensing decisions under the Nuclear Safety and Control Act and EA decisions under the Canadian Environmental Assessment Act uphold the honour of the Crown and consider Aboriginal peoples’ potential or established Aboriginal or treaty rights pursuant to section 35 of the Constitution Act, 1982 (together, the Aboriginal Interests).
The CNSC ensures that provincial and territorial departments are notified of the proposed project, as both federal and provincial/territorial EA legislation may apply. Most provinces and territories have EA cooperation agreements with the federal government that aim to prevent duplication by ensuring a project is subject to only one EA that will enable both levels of government to meet legal requirements.
Insofar as its statutory functions allow, the CNSC supports a whole-of-government approach to Aboriginal consultation, with an aim to coordinating consultative efforts with other federal, provincial, and/or territorial regulatory departments and agencies through a one-window approach, with respect to EA and licensing activities.
The CNSC, other potential RAs and FAs determine how the EA will be conducted and the scope of what should be assessed, including issues and impacts that are likely to be important. The project scope, factors to be considered in the EA, and the scope of these factors are outlined in the Scoping Information Document produced by CNSC staff. The Commission Tribunal reviews and approves the Scoping Information Document.
Once the Commission Tribunal approves the Scoping Information Document, the proponent uses it to conduct technical studies, from which it develops an Environmental Impact Statement (EIS). The technical studies and EIS consider the proposed project’s implications and ensure that any measures to protect the environment as a result of the project are implemented.
The proponent submits the results of the technical studies and EIS to the CNSC for analysis and evaluation to ensure the EIS is adequate, accurate and complete. Based on the comments, the EIS is accepted or revised, or additional studies are carried out.
The CNSC prepares a Screening Report or a Comprehensive Study Report (CSR) that summarizes the findings of the technical studies and EIS. These documents contain CNSC staff recommendations to the Commission Tribunal about the outcome of the EA (if there are expected adverse environmental effects that are likely to be significant), along with additional mitigation measures and follow-up programs that may be required.
The Screening Report or CSR is made available for public comment on the CNSC’s Web site (nuclearsafety.gc.ca) and on the Canadian Environmental Assessment Agency’s Web site (ceaa.gc.ca). Additionally, the CNSC sends project documentation directly to potentially impacted Aboriginal groups for review and requests information relating to any potential adverse impacts on potential or established Aboriginal or treaty rights. All comments received from the public and Aboriginal groups become part of the public record, and the CNSC thoroughly reviews every submission. Public comments and concerns, along with explanations of how each was addressed, are included in the Screening Report or CSR; these documents are issued to assist the Commission Tribunal, the proponent and other RAs in evaluating the environmental acceptability of the project.
Once the review of the environmental impact information is complete for a Screening Report or CSR, a decision is made by the Responsible Authority on whether the project is likely to cause significant adverse environmental effects. Decisions on screenings are made by the Commission Tribunal. Decisions on comprehensive studies are made by the Minister of the Environment. When there is a Review Panel, the Panel makes recommendations to the Minister of the Environment who then makes the EA decision.
No. Before the proponent can undertake any project activities that are within the scope of the EA, a positive licensing decision is required. It is up to the proponent to submit all necessary documentation in order to fulfill CNSC licensing requirements before a licensing decision can be made. Only after a positive licensing decision is made can the proponent begin work.
No. The EA is a planning tool. As project activities may only be conceptual at the EA stage, changes in project activities may occur during the detailed design stage. The CNSC licence and compliance process ensures any changes are within the bounds of the EA prior to their authorization and commencement.
Following the completion of the EA, the CNSC licensing process identifies the appropriate licence conditions required to ensure adequate monitoring to protect the environment during project activities. For an existing licence, this is often achieved through revisions to existing licence conditions, site programs and facility manuals, which must be adhered to in accordance with the site licence.
The CNSC requires licensees to report to the CNSC on an annual basis (in some cases monthly or quarterly) on the monitoring results of site programs, including the mitigation measures undertaken. The CNSC may also require specific project activities that were identified in the EA as having the potential to cause negative environmental effects to be monitored and reported in an annual EA Follow-up Report.
In addition to respecting regulatory limits, licensees are required to implement the “as low as reasonably achievable” (ALARA) principle in decision making and normal operations. The ALARA principle is a commitment to keeping radiation exposure and dose levels as low as reasonably achievable, taking social and economic factors into account.
At the beginning of the EA process, the CNSC identifies members of the public, Aboriginal groups and not-for-profit organizations who may be affected by a proposed project or who may have an interest in a project, and determines the need for consultation. During the screening and comprehensive study EA processes, the public and Aboriginal groups may participate in one or more of the following activities:
- review of the Scoping Information Document produced by CNSC staff
- review of the EIS and technical documents produced by the proponent
- review of the Screening Report or Comprehensive Study Report produced by CNSC staff
- Commission Tribunal hearings
- CNSC-led public and/or Aboriginal consultation sessions
- proponent-led public consultation sessions
- province or territorial-led public and/or Aboriginal consultation sessions
The Canadian Environmental Assessment Agency administers a Participant Funding Program (PFP) which provides money to help members of the public, not-for-profit organizations and Aboriginal groups prepare for and participate in key stages of the federal EA process.
The CNSC will be launching its own PFP in 2011. The CNSC’s PFP will be made available to stakeholders to participate in Commission proceedings that consider significant issues of interest to the public or to Aboriginal peoples in the vicinity of a proposed site or a currently licensed site.
Neck of rib
[Figure: A central rib of the left side, inferior aspect. Neck visible at upper right.]
[Figure: A central rib of the left side, viewed from behind. Neck visible at upper right.]
(Gray's Anatomy, subject #28, p. 124)
The neck of the rib is the flattened portion of a rib bone which extends laterally from the head; it is about 2.5 cm. long, and is placed in front of the transverse process of the lower of the two vertebræ with which the head articulates.
Its anterior surface is flat and smooth, its posterior rough for the attachment of the ligament of the neck, and perforated by numerous foramina.
Of its two borders the superior presents a rough crest (crista colli costæ) for the attachment of the anterior costotransverse ligament; its inferior border is rounded.
On the posterior surface at the junction of the neck and body, and nearer the lower than the upper border, is an eminence—the tubercle; it consists of an articular and a non-articular portion.
- The articular portion, the lower and more medial of the two, presents a small, oval surface for articulation with the end of the transverse process of the lower of the two vertebræ to which the head is connected.
- The non-articular portion is a rough elevation, and affords attachment to the ligament of the tubercle. The tubercle is much more prominent in the upper than in the lower ribs.
|Visual brightness (V)||−26.8m|
|Galactic period||2.25–2.50×10⁸ a|
|Velocity||217 km/s orbit around the center of the Galaxy; 20 km/s relative to average velocity of other stars in stellar neighborhood|
|Mean diameter||1.392×10⁶ km (109 Earth diameters)|
|Surface area||6.09×10¹² km²|
|Surface gravity||273.95 m s⁻² (27.9 g)|
|Surface temperature||5,780 K|
|Temperature of corona||5 MK|
|Core temperature||~13.6 MK|
|Luminosity (Lsol)||3.827×10²⁶ W (100 lm/W efficacy)|
|Mean intensity (Isol)||2.009×10⁷ W m⁻² sr⁻¹|
|Right ascension of North pole||19 h 4 min 30 s|
|Sidereal rotation period||25 d 9 h 7 min 13 s|
The Sun is the star at the center of the Earth's solar system. The Earth and other matter (including other planets, asteroids, comets, meteoroids, and dust) orbit the Sun, which by itself accounts for more than 99 percent of the solar system's mass. Energy from the Sun—in the form of insolation from sunlight—supports almost all life on Earth via photosynthesis, and drives the Earth's climate and weather.
About 74 percent of the Sun's mass is hydrogen, 25 percent is helium, and the rest is made up of trace quantities of heavier elements. The Sun is thought to be about 4.6 billion years old and about halfway through its main-sequence evolution. Within the Sun's core, nuclear fusion reactions take place, with hydrogen nuclei being fused into helium nuclei. Through these reactions, more than 4 million tons of matter are converted into energy each second, producing neutrinos and solar radiation. Current theory predicts that in about five billion years, the Sun will evolve into a red giant and then a white dwarf, creating a planetary nebula in the process.
The Sun is a magnetically active star. It supports a strong, changing magnetic field that varies year-to-year and reverses direction about every 11 years. The Sun's magnetic field gives rise to many effects that are collectively called solar activity. They include sunspots on the Sun's surface, solar flares, and variations in the solar wind that carry material through the solar system. The effects of solar activity on Earth include auroras at moderate to high latitudes, and the disruption of radio communications and electric power. Solar activity is thought to have played a large role in the formation and evolution of the solar system, and strongly affects the structure of the Earth's outer atmosphere.
Although it is the nearest star to Earth and has been intensively studied by scientists, many questions about the Sun remain unanswered. For instance, we do not know why its outer atmosphere has a temperature of over a million K while its visible surface (the photosphere) has a temperature of just 6,000 K. Current topics of scientific inquiry include the Sun's regular cycle of sunspot activity, the physics and origin of solar flares and prominences, the magnetic interaction between the chromosphere and the corona, and the origin of the solar wind.
The Sun is sometimes referred to by its Latin name Sol or its Greek name Helios. Its astrological and astronomical symbol is a circle with a point at its center: ☉. Some ancient peoples of the world considered it a planet.
The Sun is placed in a spectral class called G2V. "G2" means that it has a surface temperature of approximately 5,500 K, giving it a white color. As a consequence of light scattering by the Earth's atmosphere, it appears yellow to us. Its spectrum contains lines of ionized and neutral metals, as well as very weak hydrogen lines. The "V" suffix indicates that the Sun, like most stars, is a main sequence star. This means that it generates its energy by nuclear fusion of hydrogen nuclei into helium and is in a state of hydrostatic balance: neither contracting nor expanding over time. There are more than 100 million G2 class stars in our galaxy. Due to logarithmic size distribution, the Sun is actually brighter than 85 percent of the stars in the Galaxy, most of which are red dwarfs. The Sun will spend a total of approximately 10 billion years as a main sequence star. Its current age, determined using computer models of stellar evolution and nucleocosmochronology, is thought to be about 4.57 billion years. The Sun orbits the center of the Milky Way galaxy at a distance of about 25,000 to 28,000 light-years from the galactic center, completing one revolution in about 225–250 million years. The orbital speed is 220 km/s, equivalent to one light-year every 1,400 years, and one AU every 8 days.
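The quoted orbital-speed equivalences are easy to sanity-check. A short Python sketch (the conversion constants below are standard values, not taken from the article):

```python
# Sanity-check the quoted equivalences for the Sun's galactic orbit
# at 220 km/s: "one light-year every 1,400 years, one AU every 8 days".
LIGHT_YEAR_KM = 9.4607e12   # kilometers in one light-year
AU_KM = 1.496e8             # kilometers in one astronomical unit
SECONDS_PER_YEAR = 3.156e7
SECONDS_PER_DAY = 86400.0

v = 220.0  # km/s, orbital speed around the galactic center

years_per_ly = LIGHT_YEAR_KM / v / SECONDS_PER_YEAR
days_per_au = AU_KM / v / SECONDS_PER_DAY

print(f"one light-year every {years_per_ly:,.0f} years")  # ~1,400
print(f"one AU every {days_per_au:.1f} days")             # ~8
```

Both figures come out close to the rounded values in the text (about 1,360 years and 7.9 days).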
Based on the high abundance of heavy elements such as gold and uranium in the solar system, it is suggested that the Sun is a third-generation star whose formation may have been triggered by shockwaves from a nearby supernova. These elements could most plausibly have been produced by endergonic nuclear reactions during a supernova, or by transmutation via neutron absorption inside a massive second-generation star.
The Sun does not have enough mass to explode as a supernova. Instead, in 4–5 billion years, it will enter a red giant phase, its outer layers expanding as the hydrogen fuel in the core is consumed and the core contracts and heats up. Helium fusion will begin when the core temperature reaches about 3×10⁸ K. While it is likely that the expansion of the outer layers of the Sun will reach the current position of Earth's orbit, recent research suggests that mass lost from the Sun earlier in its red giant phase will cause the Earth's orbit to move further out, preventing it from being engulfed. However, Earth's water and most of the atmosphere will be boiled away.
Following the red giant phase, intense thermal pulsations will cause the Sun to throw off its outer layers, forming a planetary nebula. The Sun will then evolve into a white dwarf, slowly cooling over eons. This stellar evolution scenario is typical of low- to medium-mass stars.
Sunlight is the main source of energy near the surface of Earth. The solar constant is the amount of power that the Sun deposits per unit area that is directly exposed to sunlight. The solar constant is equal to approximately 1,370 watts per square meter of area at a distance of one AU from the Sun (that is, on or near Earth). Sunlight on the surface of Earth is attenuated by the Earth's atmosphere so that less power arrives at the surface—closer to 1,000 watts per directly exposed square meter in clear conditions when the Sun is near the zenith. This energy can be harnessed via a variety of natural and synthetic processes—photosynthesis by plants captures the energy of sunlight and converts it to chemical form (oxygen and reduced carbon compounds), while direct heating or electrical conversion by solar cells are used by solar power equipment to generate electricity or to do other useful work. The energy stored in petroleum and other fossil fuels was originally converted from sunlight by photosynthesis in the distant past.
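The solar constant quoted above follows directly from the Sun's luminosity: spreading the total output over a sphere of radius 1 AU gives the flux at Earth. A minimal check, using the luminosity value from the infobox:

```python
import math

# Recover the solar constant (~1,370 W/m^2) from the Sun's luminosity:
# S = L / (4 * pi * d**2), with d = 1 AU.
L_SUN = 3.827e26   # W, solar luminosity (value quoted in the infobox)
AU_M = 1.496e11    # m, one astronomical unit

solar_constant = L_SUN / (4.0 * math.pi * AU_M**2)
print(f"{solar_constant:.0f} W/m^2")  # ~1,360, close to the quoted 1,370
```

The small discrepancy reflects rounding in the input values; modern measurements put the solar constant near 1,361 W/m².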
Sunlight has several interesting biological properties. Ultraviolet light from the Sun has antiseptic properties and can be used to sterilize tools. It also causes sunburn, and has other medical effects such as the production of Vitamin D. Ultraviolet light is strongly attenuated by Earth's atmosphere, so that the amount of UV varies greatly with latitude due to the longer passage of sunlight through the atmosphere at high latitudes. This variation is responsible for many biological adaptations, including variations in human skin color in different regions of the globe.
Observed from Earth, the path of the Sun across the sky varies throughout the year. The shape described by the Sun's position, considered at the same time each day for a complete year, is called the analemma and resembles a figure 8 aligned along a North/South axis. While the most obvious variation in the Sun's apparent position through the year is a North/South swing over 47 degrees of angle (due to the 23.5-degree tilt of the Earth with respect to the Sun), there is an East/West component as well. The North/South swing in apparent angle is the main source of seasons on Earth.
The Sun is an average-sized star. It contains about 99 percent of the total mass of the solar system. The volume of the Sun is 1,303,600 times that of the Earth, and hydrogen makes up about 71 percent of the Sun's mass. The Sun is a near-perfect sphere, with an oblateness estimated at about 9 millionths, which means that its polar diameter differs from its equatorial diameter by only 10 km. The Sun does not rotate as a solid body: the rotational period is about 25 days at the equator and about 35 days at the poles, for an average of roughly 28 days per full rotation. The centrifugal effect of this slow rotation is 18 million times weaker than the surface gravity at the Sun's equator. Tidal effects from the planets do not significantly affect the shape of the Sun, although the Sun itself orbits the center of mass of the solar system, which is located nearly a solar radius away from the center of the Sun, mostly because of the large mass of Jupiter.
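The oblateness and the polar/equatorial diameter difference quoted above are two forms of the same number, and can be tied together with the infobox diameter. A rough check (oblateness is defined here as the fractional diameter difference):

```python
# Relate the quoted oblateness (~9 millionths) to the quoted
# polar-vs-equatorial diameter difference (~10 km):
# oblateness = (d_equatorial - d_polar) / d_equatorial.
OBLATENESS = 9e-6
D_KM = 1.392e6   # km, mean solar diameter from the infobox

diff_km = OBLATENESS * D_KM
print(f"{diff_km:.0f} km")  # ~13 km, the same order as the quoted 10 km
```

The result lands in the 10–13 km range; the text's "only 10 km" is the usual rounded figure.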
The Sun does not have a definite boundary as rocky planets do; the density of its gases drops approximately exponentially with increasing distance from the center of the Sun. Nevertheless, the Sun has a well-defined interior structure, described below. The Sun's radius is measured from its center to the edge of the photosphere. This is simply the layer below which the gases are thick enough to be opaque but above which they are transparent; the photosphere is the surface most readily visible to the naked eye. Most of the Sun's mass lies within about 0.7 radii of the center.
The solar interior is not directly observable, and the Sun itself is opaque to electromagnetic radiation. However, just as seismology uses waves generated by earthquakes to reveal the interior structure of the Earth, the discipline of helioseismology makes use of pressure waves traversing the Sun's interior to measure and visualize the Sun's inner structure. Computer modeling of the Sun is also used as a theoretical tool to investigate its deeper layers.
The temperature of the Sun's surface is about 5,800 K, and the temperature at its core has been estimated at about 15,000,000 K. Energy is produced in the core by nuclear fusion, which converts hydrogen into helium and releases huge amounts of energy; it is the same reaction that occurs in a hydrogen bomb. The American physicist George Gamow once calculated that if a pinhead could be brought to the same temperature as the core of the Sun, it would set fire to everything for 100 kilometres around. At the center of the Sun, where its density reaches up to 150,000 kg/m³ (150 times the density of water on Earth), thermonuclear reactions (nuclear fusion) convert hydrogen into helium, releasing the energy that keeps the Sun in a state of equilibrium. About 8.9×10³⁷ protons (hydrogen nuclei) are converted into helium nuclei every second, releasing energy at the matter-energy conversion rate of 4.26 million metric tons per second, 383 yottawatts (383×10²⁴ W), or 9.15×10¹⁰ megatons of TNT per second. The fusion rate in the core is in a self-correcting equilibrium: a slightly higher rate of fusion would cause the core to heat up and expand slightly against the weight of the outer layers, reducing the fusion rate and correcting the perturbation; a slightly lower rate would cause the core to shrink slightly, increasing the fusion rate and again reverting it to its present level.
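The mass-conversion rate and power output quoted above should agree via E = mc². A quick consistency check:

```python
# Check the quoted mass-to-energy conversion rate against E = m * c**2:
# 4.26 million metric tons of matter per second should correspond to
# the quoted 383 yottawatts (3.83e26 W).
C = 2.998e8                 # m/s, speed of light
mass_rate = 4.26e9          # kg/s (4.26 million metric tons per second)

power = mass_rate * C**2    # watts
print(f"{power:.2e} W")     # ~3.83e26 W, i.e. 383 yottawatts
```

The two figures in the text are indeed the same quantity expressed in different units.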
The core extends from the center of the Sun to about 0.2 solar radii, and is the only part of the Sun in which an appreciable amount of heat is produced by fusion; the rest of the star is heated by energy that is transferred outward. All of the energy produced by interior fusion must travel through many successive layers to the solar photosphere before it escapes into space.
The high-energy photons (gamma and X-rays) released in fusion reactions take a long time to reach the Sun's surface, slowed down by the indirect path taken, as well as by constant absorption and reemission at lower energies in the solar mantle. Estimates of the "photon travel time" range from as much as 50 million years to as little as 17,000 years. After a final trip through the convective outer layer to the transparent "surface" of the photosphere, the photons escape as visible light. Each gamma ray in the Sun's core is converted into several million visible light photons before escaping into space. Neutrinos are also released by the fusion reactions in the core, but unlike photons they very rarely interact with matter, so almost all are able to escape the Sun immediately. For many years measurements of the number of neutrinos produced in the Sun were much lower than theories predicted, a problem which was recently resolved through a better understanding of the effects of neutrino oscillation.
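The wide range of "photon travel time" estimates comes from the random-walk nature of the transport: with mean free path l, a photon needs about (R/l)² scatterings to diffuse a radius R, giving t ≈ R²/(l·c). The mean free path chosen below (1 mm) is an illustrative assumption; different values of l across the interior drive the spread of published estimates.

```python
# Rough random-walk (diffusion) estimate of the photon escape time:
# t ~ R**2 / (l * c), where l is the photon mean free path.
C = 3.0e8        # m/s, speed of light
R_SUN = 6.96e8   # m, solar radius
l = 1.0e-3       # m, ASSUMED mean free path (illustrative only)

t_seconds = R_SUN**2 / (l * C)
t_years = t_seconds / 3.156e7
print(f"~{t_years:,.0f} years")  # tens of thousands of years for l = 1 mm
```

With l = 1 mm the estimate falls near the low end of the quoted range; smaller mean free paths push it toward millions of years.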
From about 0.2 to about 0.7 solar radii, solar material is hot and dense enough that thermal radiation is sufficient to transfer the intense heat of the core outward. In this zone there is no thermal convection; while the material grows cooler as altitude increases, this temperature gradient is too low to drive convection. Heat is transferred by radiation—ions of hydrogen and helium emit photons, which travel a brief distance before being reabsorbed by other ions.
From about 0.7 solar radii to the Sun's visible surface, the material in the Sun is not dense enough or hot enough to transfer the heat energy of the interior outward via radiation. As a result, thermal convection occurs as thermal columns carry hot material to the surface (photosphere) of the Sun. Once the material cools off at the surface, it plunges back downward to the base of the convection zone, to receive more heat from the top of the radiative zone. Convective overshoot is thought to occur at the base of the convection zone, carrying turbulent downflows into the outer layers of the radiative zone.
The thermal columns in the convection zone form an imprint on the surface of the Sun, in the form of the solar granulation and supergranulation. The turbulent convection of this outer part of the solar interior gives rise to a "small-scale" dynamo that produces magnetic north and south poles all over the surface of the Sun.
The visible surface of the Sun, the photosphere, is the layer below which the Sun becomes opaque to visible light. Above the photosphere visible sunlight is free to propagate into space, and its energy escapes the Sun entirely. The change in opacity is due to the decreasing amount of H⁻ ions, which absorb visible light easily. Conversely, the visible light we see is produced as electrons react with hydrogen atoms to produce H⁻ ions. Sunlight has approximately a black-body spectrum that indicates its temperature is about 6,000 K (10,340 °F / 5,727 °C), interspersed with atomic absorption lines from the tenuous layers above the photosphere. The photosphere has a particle density of about 10²³/m³ (this is about 1 percent of the particle density of Earth's atmosphere at sea level).
During early studies of the optical spectrum of the photosphere, some absorption lines were found that did not correspond to any chemical elements then known on Earth. In 1868, Norman Lockyer hypothesized that these absorption lines were due to a new element which he dubbed "helium," after the Greek Sun god Helios. It was not until 25 years later that helium was isolated on Earth.
The parts of the Sun above the photosphere are referred to collectively as the solar atmosphere. They can be viewed with telescopes operating across the electromagnetic spectrum, from radio through visible light to gamma rays, and comprise five principal zones: the temperature minimum, the chromosphere, the transition region, the corona, and the heliosphere. The heliosphere, which may be considered the tenuous outer atmosphere of the Sun, extends outward past the orbit of Pluto to the heliopause, where it forms a sharp shock front boundary with the interstellar medium. The chromosphere, transition region, and corona are much hotter than the surface of the Sun; the reason why is not yet known.
The coolest layer of the Sun is a temperature minimum region about 500 km above the photosphere, with a temperature of about 4,000 K. This part of the Sun is cool enough to support simple molecules such as carbon monoxide and water, which can be detected by their absorption spectra. Above the temperature minimum layer is a thin layer about 2,000 km thick, dominated by a spectrum of emission and absorption lines. It is called the chromosphere from the Greek root chroma, meaning color, because the chromosphere is visible as a colored flash at the beginning and end of total eclipses of the Sun. The temperature in the chromosphere increases gradually with altitude, ranging up to around 100,000 K near the top.
Above the chromosphere is a transition region in which the temperature rises rapidly from around 100,000 K to coronal temperatures closer to one million K. The increase is due to a phase transition as helium within the region becomes fully ionized by the high temperatures. The transition region does not occur at a well-defined altitude. Rather, it forms a kind of nimbus around chromospheric features such as spicules and filaments, and is in constant, chaotic motion. The transition region is not easily visible from Earth's surface, but is readily observable from space by instruments sensitive to the far ultraviolet portion of the spectrum.
The corona is the extended outer atmosphere of the Sun, which is much larger in volume than the Sun itself. The corona merges smoothly with the solar wind that fills the solar system and heliosphere. The low corona, which is very near the surface of the Sun, has a particle density of 10¹⁴–10¹⁶/m³. (Earth's atmosphere near sea level has a particle density of about 2×10²⁵/m³.) The temperature of the corona is several million kelvin. While no complete theory yet exists to account for the temperature of the corona, at least some of its heat is known to be due to magnetic reconnection.
The heliosphere extends from approximately 20 solar radii (0.1 AU) to the outer fringes of the solar system. Its inner boundary is defined as the layer in which the flow of the solar wind becomes superalfvénic - that is, where the flow becomes faster than the speed of Alfvén waves. Turbulence and dynamic forces outside this boundary cannot affect the shape of the solar corona within, because the information can only travel at the speed of Alfvén waves. The solar wind travels outward continuously through the heliosphere, forming the solar magnetic field into a spiral shape, until it impacts the heliopause more than 50 AU from the Sun. In December 2004, the Voyager 1 probe passed through a shock front that is thought to be part of the heliopause. Both of the Voyager probes have recorded higher levels of energetic particles as they approach the boundary.
Sunspots and the solar cycle
When observing the Sun with appropriate filtration, the most immediately visible features are usually its sunspots, which are well-defined surface areas that appear darker than their surroundings due to lower temperatures. Sunspots are regions of intense magnetic activity where energy transport is inhibited by strong magnetic fields. They are often the source of intense flares and coronal mass ejections. The largest sunspots can be tens of thousands of kilometers across.
The number of sunspots visible on the Sun is not constant, but varies over a 10- to 12-year cycle known as the solar cycle. At a typical solar minimum, few sunspots are visible, and occasionally none at all can be seen. Those that do appear are at high solar latitudes. As the sunspot cycle progresses, the number of sunspots increases and they move closer to the equator of the Sun, a phenomenon described by Spörer's law. Sunspots usually exist as pairs with opposite magnetic polarity. The polarity of the leading sunspot alternates every solar cycle, so that it will be a north magnetic pole in one solar cycle and a south magnetic pole in the next.
The solar cycle has a great influence on space weather, and seems also to have a strong influence on the Earth's climate. Solar minima tend to be correlated with colder temperatures, and longer than average solar cycles tend to be correlated with hotter temperatures. In the 17th century, the solar cycle appears to have stopped entirely for several decades; very few sunspots were observed during the period. During this era, which is known as the Maunder minimum or Little Ice Age, Europe experienced very cold temperatures. Earlier extended minima have been discovered through analysis of tree rings and also appear to have coincided with lower-than-average global temperatures.
Effects on Earth and other bodies
Solar activity has several effects on the Earth and its surroundings. Because the Earth has a magnetic field, charged particles from the solar wind cannot impact the atmosphere directly, but are instead deflected by the magnetic field and aggregate to form the Van Allen belts. The Van Allen belts consist of an inner belt composed primarily of protons and an outer belt composed mostly of electrons. Radiation within the Van Allen belts can occasionally damage satellites passing through them.
The Van Allen belts form arcs around the Earth with their tips near the north and south poles. The most energetic particles can 'leak out' of the belts and strike the Earth's upper atmosphere, causing auroras, known as aurorae borealis in the northern hemisphere and aurorae australis in the southern hemisphere. In periods of normal solar activity, aurorae can be seen in oval-shaped regions centered on the magnetic poles and lying roughly at a geomagnetic latitude of 65°, but at times of high solar activity the auroral oval can expand greatly, moving towards the equator. Aurorae borealis have been observed from locales as far south as Mexico.
Solar wind also affects the surfaces of Mercury, the Moon, and asteroids in the form of space weathering. Because they do not have any substantial atmosphere, solar wind ions hit their surface materials and either alter the atomic structure of the materials or form a thin coating containing submicroscopic (or nanophase) metallic iron particles. Until recently, the space weathering effect puzzled researchers working on planetary remote geochemical analysis.
Solar neutrino problem
For many years the number of solar electron neutrinos detected on Earth was only a third of the number expected, according to theories describing the nuclear reactions in the Sun. This anomalous result was termed the solar neutrino problem. Theories proposed to resolve the problem either tried to reduce the temperature of the Sun's interior to explain the lower neutrino flux, or posited that electron neutrinos could oscillate, that is, change into undetectable tau and muon neutrinos as they traveled between the Sun and the Earth. Several neutrino observatories were built in the 1980s to measure the solar neutrino flux as accurately as possible, including the Sudbury Neutrino Observatory and Kamiokande. Results from these observatories eventually led to the discovery that neutrinos have a very small rest mass and can indeed oscillate. Moreover, the Sudbury Neutrino Observatory was able to detect all three types of neutrinos directly, and found that the Sun's total neutrino emission rate agreed with the Standard Solar Model, although only one-third of the neutrinos seen at Earth were of the electron type.
Coronal heating problem
The optical surface of the Sun (the photosphere) is known to have a temperature of approximately 6,000 K. Above it lies the solar corona at a temperature of 1,000,000 K. The high temperature of the corona shows that it is heated by something other than the photosphere.
It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational and magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events.
Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfven waves have been found to dissipate or refract before reaching the corona. In addition, Alfven waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms. One possible candidate to explain coronal heating is continuous flaring at small scales, but this remains an open topic of investigation.
Faint young Sun problem
Theoretical models of the Sun's development suggest that 3.8 to 2.5 billion years ago, during the Archean period, the Sun was only about 75% as bright as it is today. Such a weak star would not have been able to sustain liquid water on the Earth's surface, and thus life should not have been able to develop. However, the geological record demonstrates that the Earth has remained at a fairly constant temperature throughout its history, and in fact that the young Earth was somewhat warmer than it is today. The general consensus among scientists is that the young Earth's atmosphere contained much larger quantities of greenhouse gases (such as carbon dioxide and/or ammonia) than are present today, which trapped enough heat to compensate for the lesser amount of solar energy reaching the planet.
All matter in the Sun is in the form of gas and plasma due to its high temperatures. This makes it possible for the Sun to rotate faster at its equator (about 25 days) than it does at higher latitudes (about 35 days near its poles). The differential rotation of the Sun's latitudes causes its magnetic field lines to become twisted together over time, causing magnetic field loops to erupt from the Sun's surface and trigger the formation of the Sun's dramatic sunspots and solar prominences (see magnetic reconnection). This twisting action gives rise to the solar dynamo and an 11-year solar cycle of magnetic activity as the Sun's magnetic field reverses itself about every 11 years.
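The latitude dependence of the rotation is often summarized by an empirical law of the form ω(φ) = A + B·sin²φ + C·sin⁴φ. The coefficients below are one commonly quoted set (an assumption, not stated in this article); with them, the sketch reproduces the ~25-day equatorial period and the ~34-35-day period near the poles.

```python
import math

# One commonly quoted set of empirical coefficients for the sidereal
# surface rotation rate, in degrees per day (illustrative values).
A, B, C = 14.713, -2.396, -1.787

def rotation_period_days(latitude_deg):
    """Sidereal rotation period (days) at a given solar latitude."""
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    omega_deg_per_day = A + B * s2 + C * s2 ** 2
    return 360.0 / omega_deg_per_day

# rotation_period_days(0) is about 24.5 days, while rotation_period_days(90)
# is about 34 days, illustrating the differential rotation described above.
```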
The influence of the Sun's rotating magnetic field on the plasma in the interplanetary medium creates the heliospheric current sheet, which separates regions with magnetic fields pointing in different directions. The plasma in the interplanetary medium is also responsible for the strength of the Sun's magnetic field at the orbit of the Earth. If space were a vacuum, then the Sun's 10⁻⁴ tesla magnetic dipole field would reduce with the cube of the distance to about 10⁻¹¹ tesla. But satellite observations show that it is about 100 times greater, at around 10⁻⁹ tesla. Magnetohydrodynamic (MHD) theory predicts that the motion of a conducting fluid (e.g., the interplanetary medium) in a magnetic field induces electric currents, which in turn generate magnetic fields; in this respect the medium behaves like an MHD dynamo.
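The inverse-cube estimate quoted above is easy to reproduce. The solar radius and Earth-Sun distance below are standard values assumed for the calculation; only the ~10⁻⁴ tesla surface field comes from the text.

```python
R_SUN_M = 6.96e8     # solar radius in meters (assumed standard value)
AU_M = 1.496e11      # Earth-Sun distance in meters (assumed standard value)
B_SURFACE = 1e-4     # tesla, the surface dipole field quoted in the text

def dipole_field(r_m, b0=B_SURFACE, r0=R_SUN_M):
    """Dipole field strength at distance r_m, falling off as 1/r^3."""
    return b0 * (r0 / r_m) ** 3

# About 1e-11 tesla at 1 AU if space were a vacuum, versus the ~1e-9 tesla
# actually observed: the factor-of-100 gap that the MHD dynamo explains.
b_earth = dipole_field(AU_M)
```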
History of solar observation
Early understanding of the Sun
Humanity's most fundamental understanding of the Sun is as the luminous disk in the heavens, whose presence above the horizon creates day and whose absence causes night. In many prehistoric and ancient cultures, the Sun was thought to be a solar deity or other supernatural phenomenon, and worship of the Sun was central to civilizations such as the Inca of South America and the Aztecs of what is now Mexico. Many ancient monuments were constructed with solar phenomena in mind; for example, stone megaliths accurately mark the summer solstice (some of the most prominent megaliths are located in Nabta Playa, Egypt, and at Stonehenge in England); the pyramid of El Castillo at Chichén Itzá in Mexico is designed to cast shadows in the shape of serpents climbing the pyramid at the vernal and autumn equinoxes. With respect to the fixed stars, the Sun appears from Earth to revolve once a year along the ecliptic through the zodiac, and so the Sun was considered by Greek astronomers to be one of the seven planets (Greek planetes, "wanderer"), after which the seven days of the week are named in some languages.
Development of modern scientific understanding
One of the first people in the Western world to offer a scientific explanation for the Sun was the Greek philosopher Anaxagoras, who reasoned that it was a giant flaming ball of metal even larger than the Peloponnesus, and not the chariot of Helios. For teaching this heresy, he was imprisoned by the authorities and sentenced to death (though he was later released through the intervention of Pericles).
Another scientist to challenge the accepted view was Nicolaus Copernicus, who in the 16th century developed the theory that the Earth orbited the Sun, rather than the other way around. In the early 17th century, Galileo pioneered telescopic observations of the Sun, making some of the first known observations of sunspots and positing that they were on the surface of the Sun rather than small objects passing between the Earth and the Sun. Sir Isaac Newton observed the Sun's light using a prism, and showed that it was made up of light of many colors, while in 1800 William Herschel discovered infrared radiation beyond the red part of the solar spectrum. The 1800s saw spectroscopic studies of the Sun advance, and Joseph von Fraunhofer made the first observations of absorption lines in the spectrum, the strongest of which are still often referred to as Fraunhofer lines.
In the early years of the modern scientific era, the source of the Sun's energy was a significant puzzle. Among the proposals were that the Sun extracted its energy from friction of its gas masses, or that its energy was derived from gravitational potential energy released as it continuously contracted. Either of these sources of energy could only power the Sun for a few tens of millions of years at most, but geologists were showing that the Earth's age was several billion years. Nuclear fusion was first proposed as the source of solar energy only in the 1930s, when Hans Bethe calculated the details of the two main energy-producing nuclear reactions that power the Sun.
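The inadequacy of gravitational contraction is quantified by the Kelvin-Helmholtz timescale, roughly the Sun's gravitational binding energy divided by its luminosity. The constants below are standard assumed values, not figures from this article.

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.96e8         # solar radius, m
L_SUN = 3.828e26       # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

# Order-of-magnitude contraction timescale: GM^2 / (R * L).
t_kh_years = G * M_SUN ** 2 / (R_SUN * L_SUN) / SECONDS_PER_YEAR

# Roughly 3e7 years, orders of magnitude short of the multi-billion-year
# geological age of the Earth, which is why a nuclear source was needed.
```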
Solar space missions
The first satellites designed to observe the Sun were NASA's Pioneers 5, 6, 7, 8 and 9, which were launched between 1959 and 1968. These probes orbited the Sun at a distance similar to that of the Earth's orbit, and made the first detailed measurements of the solar wind and the solar magnetic field. Pioneer 9 operated for a particularly long period of time, transmitting data until 1987.
In the 1970s, Helios 1 and the Skylab Apollo Telescope Mount provided scientists with significant new data on solar wind and the solar corona. The Helios 1 satellite was a joint U.S.-German probe that studied the solar wind from an orbit carrying the spacecraft inside Mercury's orbit at perihelion. The Skylab space station, launched by NASA in 1973, included a solar observatory module called the Apollo Telescope Mount that was operated by astronauts resident on the station. Skylab made the first time-resolved observations of the solar transition region and of ultraviolet emissions from the solar corona. Discoveries included the first observations of coronal mass ejections, then called "coronal transients," and of coronal holes, now known to be intimately associated with the solar wind.
In 1980, the Solar Maximum Mission was launched by NASA. This spacecraft was designed to observe gamma rays, X-rays and UV radiation from solar flares during a time of high solar activity. Just a few months after launch, however, an electronics failure caused the probe to go into standby mode, and it spent the next three years in this inactive state. In 1984 Space Shuttle Challenger mission STS-41C retrieved the satellite and repaired its electronics before re-releasing it into orbit. The Solar Maximum Mission subsequently acquired thousands of images of the solar corona before re-entering the Earth's atmosphere in June 1989.
Japan's Yohkoh (Sunbeam) satellite, launched in 1991, observed solar flares at X-ray wavelengths. Mission data allowed scientists to identify several different types of flares, and also demonstrated that the corona away from regions of peak activity was much more dynamic and active than had previously been supposed. Yohkoh observed an entire solar cycle but went into standby mode when an annular eclipse in 2001 caused it to lose its lock on the Sun. It was destroyed by atmospheric reentry in 2005.
One of the most important solar missions to date has been the Solar and Heliospheric Observatory (SOHO), jointly built by the European Space Agency and NASA and launched on December 2, 1995. Originally a two-year mission, SOHO has now operated for over ten years (as of 2006). It has proved so useful that a follow-on mission, the Solar Dynamics Observatory, is planned for launch in 2008. Situated at the L1 Lagrangian point between the Earth and the Sun (where the combined gravitational pull of the two bodies keeps a spacecraft orbiting in step with the Earth), SOHO has provided a constant view of the Sun at many wavelengths since its launch. In addition to its direct solar observation, SOHO has enabled the discovery of large numbers of comets, mostly very tiny sungrazing comets that incinerate as they pass the Sun.
All these satellites have observed the Sun from the plane of the ecliptic, and so have only observed its equatorial regions in detail. The Ulysses probe was launched in 1990 to study the Sun's polar regions. It first traveled to Jupiter, to 'slingshot' past the planet into an orbit which would take it far above the plane of the ecliptic. Serendipitously, it was well-placed to observe the collision of Comet Shoemaker-Levy 9 with Jupiter in 1994. Once Ulysses was in its scheduled orbit, it began observing the solar wind and magnetic field strength at high solar latitudes, finding that the solar wind from high latitudes was moving at about 750 km/s (slower than expected), and that there were large magnetic waves emerging from high latitudes which scattered galactic cosmic rays.
Elemental abundances in the photosphere are well known from spectroscopic studies, but the composition of the interior of the Sun is more poorly understood. A solar wind sample return mission, Genesis, was designed to allow astronomers to directly measure the composition of solar material. Genesis returned to Earth in 2004 but was damaged by a crash landing after its parachute failed to deploy on reentry into Earth's atmosphere. Despite severe damage, some usable samples have been recovered from the spacecraft's sample return module and are undergoing analysis.
Sun observation and eye damage
Sunlight is very bright, and looking directly at the Sun with the naked eye for brief periods can be painful, but is generally not hazardous. Looking directly at the Sun causes phosphene visual artifacts and temporary partial blindness. It also delivers about 4 milliwatts of sunlight to the retina, slightly heating it and potentially (though not normally) damaging it. UV exposure gradually yellows the lens of the eye over a period of years and can cause cataracts, but those depend on general exposure to solar UV, not on whether one looks directly at the Sun.
Viewing the Sun through light-concentrating optics such as binoculars is very hazardous without an attenuating (ND) filter to dim the sunlight. Using a proper filter is important as some improvised filters pass UV rays that can damage the eye at high brightness levels. Unfiltered binoculars can deliver over 500 times more sunlight to the retina than does the naked eye, killing retinal cells almost instantly. Even brief glances at the midday Sun through unfiltered binoculars can cause permanent blindness. One way to view the Sun safely is by projecting an image onto a screen using binoculars or a small telescope.
Partial solar eclipses are hazardous to view because the eye's pupil is not adapted to the unusually high visual contrast: the pupil dilates according to the total amount of light in the field of view, not by the brightest object in the field. During partial eclipses most sunlight is blocked by the Moon passing in front of the Sun, but the uncovered parts of the photosphere have the same surface brightness as during a normal day. In the overall gloom, the pupil expands from ~2 mm to ~6 mm, and each retinal cell exposed to the solar image receives about ten times more light than it would looking at the non-eclipsed sun. This can damage or kill those cells, resulting in small permanent blind spots for the viewer. The hazard is insidious for inexperienced observers and for children, because there is no perception of pain: it is not immediately obvious that one's vision is being destroyed.
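The "about ten times more light" figure follows directly from the fact that light admitted to the eye scales with pupil area, and hence with the square of pupil diameter. A minimal check, using the diameters quoted above:

```python
def retinal_light_ratio(d_dilated_mm=6.0, d_constricted_mm=2.0):
    """Ratio of light admitted by a dilated vs a constricted pupil.

    Admitted light scales with pupil area, hence with diameter squared.
    """
    return (d_dilated_mm / d_constricted_mm) ** 2

# (6 mm / 2 mm)^2 = 9, i.e. roughly the tenfold increase quoted above.
```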
During sunrise and sunset, sunlight is attenuated by Rayleigh and Mie scattering during a particularly long passage through Earth's atmosphere, and the direct Sun is sometimes faint enough to be viewed directly without discomfort or safely with binoculars. Hazy conditions, atmospheric dust, and high humidity contribute to this atmospheric attenuation.
- ↑ 1.0 1.1 P.K. Seidelmann, Abalakin, V.K., Bursa, M., Davies, M.E., de Bergh, C., Lieske, J.H., Oberst, J., Simon, J.L., Standish, E.M., Stooke, P., Thomas, P.C. (2000) Report Of The IAU/IAG Working Group On Cartographic Coordinates And Rotational Elements Of The Planets And Satellites: 2000. hnsky.com Accessed on 2006-03-22
- ↑ Astronomers Had it Wrong: Most Stars are Single. space.com. Retrieved July 21, 2008.
- ↑ A. Bonanno, H. Schlattl, L. Paternò, (2002) The age of the Sun and the relativistic corrections in the EOS. Astronomy and Astrophysics 390: 1115-1118.
- ↑ F.J. Kerr, and D. Lynden-Bell, (1986) Review of galactic constants. Monthly Notices of the Royal Astronomical Society 221: 1023-1038.
- ↑ R. W. Pogge, (1997) The Once & Future Sun. New Vistas in Astronomy.
- ↑ I.-Juliana Sackmann, A. I. Boothroyd, K. E. Kraemer, (1993) Our Sun. III. Present and Future. Astrophysical Journal 418: 457.
- ↑ S. Godier, J.-P. Rozelot, (2000) The solar oblateness and its relationship with the structure of the tachocline and of the Sun's subsurface. Astronomy and Astrophysics 355: 365-374.
- ↑ R. Lewis, (1983) The Illustrated Encyclopedia of the Universe. (New York: Harmony Books).
- ↑ P. Plait, (1997) Bitesize Tour of the Solar System: The Long Climb from the Sun's Core badastronomy.com. Accessed on 2006-03-22
- ↑ Discovery of Helium Accessed on 2006-03-22
- ↑ The Distortion of the Heliosphere: Our Interstellar Magnetic Compass spaceref.com Accessed on 2006-03-22
- ↑ J. Lean, A. Skumanich, and O. White, (1992) Estimating the Sun's radiative output during the Maunder Minimum. Geophysical Research Letters 19: 1591-1594.
- ↑ B. Hapke, (2001) Space weathering from Mercury to the asteroid belt. J. Geophys. Res. 106: 10,039-10,073.
- ↑ W.C. Haxton, (1995) The Solar Neutrino Problem. Annual Review of Astronomy and Astrophysics 33: 459-504.
- ↑ H. Schlattl, (2001) Three-flavor oscillation solutions for the solar neutrino problem. Physical Review D 64 (1).
- ↑ H. Alfven, (1947) Magneto-hydrodynamic waves, and the heating of the solar corona. Monthly Notices of the Royal Astronomical Society 107: 211.
- ↑ P.A. Sturrock, and Y. Uchida, (1981) Coronal heating by stochastic magnetic pumping. Astrophysical Journal 246: 331.
- ↑ E.N. Parker, (1988) Nanoflares and the solar X-ray corona. Astrophysical Journal 330: 474.
- ↑ J.F. Kasting, and T.P. Ackerman, (1986) Climatic Consequences of Very High Carbon Dioxide Levels in the Earth’s Early Atmosphere. Science 234: 1383-1385.
- ↑ Galileo Galilei (1564 - 1642) BBC Accessed on 2006-03-22
- ↑ Sir Isaac Newton (1643 - 1727) BBC Accessed on 2006-03-22
- ↑ Herschel Discovers Infrared Light Cool Cosmos. Accessed on 2006-03-22
- ↑ H. Bethe, (1938) On the Formation of Deuterons by Proton Combination. Physical Review 54: 862-862.
- ↑ H. Bethe, (1939) Energy Production in Stars. Physical Review 55: 434-456.
- ↑ Pioneer 6-7-8-9-E Encyclopedia Astronautica Accessed on 2006-03-22
- ↑ Chris St. Cyr, and J. Burkepile, (1998) Solar Maximum Mission Overview. Accessed on 2006-03-22
- ↑ Japan Aerospace Exploration Agency (1995) Result of Re-entry of the Solar X-ray Observatory "Yohkoh" (SOLAR-A) to the Earth's Atmosphere. Accessed on 2006-03-22
- ↑ SOHO Comets. Accessed on 2006-03-22
- ↑ Ulysses - Science - Primary Mission Results. NASA Accessed on 2006-03-22
- ↑ J.C.D. Marsh, (1982) Observing the Sun in Safety. J. Brit. Ast. Assoc. 92: 6
- ↑ F. Espenak, (1996) Eye Safety During Solar Eclipses - adapted from NASA RP 1383 Total Solar Eclipse of 1998. February 26, 17. Accessed on 2006-03-22
- Current SOHO snapshots
- Far-Side Helioseismic Holography from Stanford
- NASA Eclipse homepage
- Nasa SOHO (Solar & Heliospheric Observatory) satellite FAQ
- Solar Sounds from Stanford
- Eric Weisstein's World of Astronomy - Sun
- The Position of the Sun
- A collection of solar movies
- The Institute for Solar Physics- Movies of Sunspots and spicules
- NASA/Marshall Solar Physics website
- Solar Position Algorithm and documentation from the National Renewable Energy Laboratory
- libnova - a celestial mechanics and astronomical calculation library
- NASA Podcast
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.
The purpose of screening is early diagnosis and treatment. Screening tests are usually administered to people without current symptoms, but who may be at high risk for certain diseases or conditions.
In the case of chickenpox, screening can be done to see whether you’ve acquired an immunity to the disease.
Blood tests—A blood sample is taken and sent to a lab. Levels of antibodies are measured in the blood to see if you have developed immunity to chickenpox from an unrecognized previous infection (or a forgotten immunization).
People who have had chickenpox usually develop immunity to it. Since 1995, a chickenpox vaccine has been available. For this reason, the National Immunization Program recommends that if you are unsure if you’ve ever had chickenpox or been vaccinated, you should talk to your doctor about having a blood test to determine whether or not you have immunity. If the tests are negative, you are not immune. In most cases, you should then receive the chickenpox vaccine to protect you from getting chickenpox. Vaccination is particularly important for adolescents and adults, for whom infection with chickenpox may be severe or even life threatening.
- Reviewer: Michael Woods, MD
- Review Date: 10/2012 -
- Update Date: 10/11/2012 -
Scott Backhaus – Capability Leader
A sound wave in a gas is usually regarded as consisting of coupled pressure and motion oscillations, but temperature oscillations are always present, too. When the sound travels in a gas in small channels, oscillating heat also flows to and from the channel walls. Combinations of all such oscillations in three dimensions produce a rich variety of “thermoacoustic” effects.
Our experiments usually involve pressurized inert gases, heat sources and sinks, high-power acoustic drivers, and sensors to measure pressures, temperatures, and sometimes mole fractions. Theory relies on the assumptions that the oscillations of pressure, temperature, density, velocity, and entropy are adequately represented as “small” sinusoidal functions of time. Surprisingly, the results of this approach are usefully accurate even for large oscillations with substantial harmonic content.
Thermoacoustic heat and temperature effects are too small to be obvious in the sound in air with which we communicate every day. However, in intense sound waves in pressurized gases, thermoacoustics can be harnessed to produce powerful engines, pulsating combustion, heat pumps, refrigerators, and mixture separators. Hence, much current thermoacoustics research is motivated by the desire to create new technology for the energy industry that is as simple and reliable as sound waves themselves.
More information about thermoacoustics at Los Alamos may be found at www.lanl.gov/thermoacoustics/
Science Fair Project Encyclopedia
Avro Lancaster, England, 2002
|Crew||7—pilot, flight engineer, navigator, bomb aimer, wireless operator, mid-upper and rear gunners|
|First flight||January 9, 1941.|
|Length||69 ft 5 in||21.18 m|
|Wingspan||102 ft||31.09 m|
|Height||ft in||5.97 m|
|Wing area||ft²||120.8 m²|
|Loaded||63,000 lb||28,636 kg|
|Engines||4 Rolls-Royce Merlin XX piston engines|
|Power||1,280 hp||954 kW|
|Maximum speed||280 mph at 15,000 ft||448 km/h at 4,600 m|
|Combat range||2,700 miles with minimal bomb load||4,320 km with minimal bomb load|
|Service ceiling||23,500 ft||7,160 m|
|Rate of climb||ft/min||m/min|
|Guns||8 x Browning 0.303 in (7.62 mm) machine-guns in three turrets|
|Bombs||normal 14,000 lb (6,350 kg)|
special versions 22,000 lb (10,000 kg)
The Avro Lancaster was a four-engined World War II bomber aircraft made initially by Avro for the Royal Air Force. First used in 1942, together with the Handley-Page Halifax it was the main heavy bomber of the RAF, the Royal Canadian Air Force, and squadrons from other Commonwealth and European countries serving with RAF Bomber Command. The Lancaster was primarily a night-time bomber; unlike the Halifax, it was not used during the war for duties other than bombing.
The original design was for a twin-engined heavy bomber powered by Rolls-Royce Vulture engines. The resulting aircraft was the Avro Manchester, which proved a disappointment due to the unreliability of the Vulture. It was withdrawn from service in 1942 with only 200 aircraft built.
The chief designer of A. V. Roe, Roy Chadwick, switched to a design using four of the more reliable Rolls-Royce Merlin engines which resulted in an aircraft initially designated the Type 683 Manchester III. Renamed the Lancaster, it made its first test flight on January 9, 1941, and proved to be a great improvement on the Manchester. Most of the original Manchesters were rebuilt as Lancasters.
The majority of Lancasters during the war years were manufactured by Metropolitan-Vickers, Armstrong Whitworth and A.V. Roe. The Lancaster was also produced at the Austin motor works in Longbridge later in World War II. Only 300 of the Mk II with Bristol Hercules engines were made. The Mk III had newer Merlin engines but was otherwise identical to earlier versions; 3,030 Mk IIIs were built, almost all at A.V. Roe's Newton Heath factory. Of later versions only the Canadian-built Mk X was produced in any numbers, built by Victory Aircraft in Malton, Ontario. 430 of this type were built. They differed little from earlier versions, except for using Packard-built Merlin engines and having a differently configured mid-upper turret. 7,377 Lancasters of all marks were built over the war; a 1943 Lancaster cost £45-50,000.
Lancasters from Bomber Command were to have formed the backbone of Tiger Force, the Commonwealth bomber contingent scheduled to take part in Operation Downfall, the codename for the planned invasion of Japan in late 1945, from bases on Okinawa.
In 1942-45, Lancasters flew 156,000 operations and dropped 608,612 tons of bombs. 3,249 Lancasters were lost in action. Only 35 Lancasters completed more than 100 successful operations. The greatest survivor completed 139 operations and survived the war, to be scrapped in 1947.
An important feature of the Lancaster was its extensive bomb bay, at 33 feet (10.05 m) long. Initially the heaviest bombs carried were 4,000 lb (1,814 kg) or, for special targets, the 21 feet (6.4 m) long 12,000 lb (5,443 kg) 'Tall Boy'. Towards the end of the war, attacking hardened targets, the 'Special B' Lancasters could carry a single 25.5 feet (7.77 m) long 22,000 lb (9,979 kg) 'Grand Slam' or 'Earthquake' bomb. This required modification to the bomb-bay doors. (Note: the exact weight in kg of 'Tall Boy' and 'Grand Slam' bombs differs according to source. The figures above are the most common.)
The Lancaster had a very advanced communications system for its time; the famous 1155 receiver and 1154 transmitter. These provided radio direction-finding, as well as voice and morse capabilities. Later Lancasters carried:
- Monica - a rearward-looking radar intended to warn of night-fighter approaches. It was a notable failure: because it constantly reported other bombers flying in the same formation, crews ignored its warnings, and its transmissions instead served as a homing beacon for suitably equipped German night fighters.
- Fishpond - an add-on to H2S that provided additional (aerial) coverage of the underside of the aircraft to display attacking fighters on the main H2S screen.
- GEE - a receiver for a navigation system of synchronized pulses transmitted from the UK; aircraft calculated their position from the timing difference between pulses. The range of GEE was 300-400 miles.
- Oboe - a very accurate navigation system consisting of a receiver/transponder for two radar stations transmitting from the UK, one keeping the aircraft on an arc of constant range and the other signalling the bomb-release point along that arc. As the system could only handle one aircraft at a time, it was fitted only to Pathfinder aircraft, which marked the target for the main force. It was later supplemented by GEE-H, similar to Oboe but with the transponder on the ground, allowing more aircraft to use the system simultaneously. GEE-H aircraft were usually marked with two horizontal yellow stripes on the fins.
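GEE's principle, fixing position from the timing of synchronized pulses, is essentially a time-difference-of-arrival scheme: a given difference confines the aircraft to one hyperbola with the two stations as foci, and a second station pair gives a fix at the intersection. The flat 2-D coordinates and helper function below are illustrative assumptions, not details of the historical equipment.

```python
import math

C_KM_PER_US = 0.299792  # speed of light, km per microsecond

def time_difference_us(aircraft, station_a, station_b):
    """Difference in pulse arrival times (microseconds) at the aircraft
    for two synchronized ground stations (flat 2-D coordinates in km)."""
    return (math.dist(aircraft, station_a) - math.dist(aircraft, station_b)) / C_KM_PER_US

# A measured time difference places the aircraft somewhere on one hyperbola
# whose foci are the two stations; crossing it with a hyperbola from a
# second station pair yields the navigator's fix.
```

A point equidistant from both stations sees zero time difference, which is the degenerate "hyperbola" that is the perpendicular bisector of the baseline.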
The most famous use of the Lancaster was probably the 1943 mission, codenamed Operation Chastise, to destroy the dams of the Ruhr Valley using special drum-shaped bouncing bombs designed by Barnes Wallis, and carried by modified Mk IIIs. The story of the mission was later made into a film, The Dam Busters. Another famous action was a series of attacks on the German battleship Tirpitz with 'Tall Boy' bombs, which ended in the ship's sinking.
A development of the Lancaster was the Avro Lincoln bomber, initially known as the Lancaster IV and Lancaster V; these two marks became the Lincoln B1 and B2 respectively. There was also a civilian airliner based on the Lancaster, the Lancastrian. Other developments were the York, a square-bodied transport, and the Shackleton, which continued in airborne early warning service up to 1992.
Two Avro Lancasters remain in airworthy condition, although few flying hours remain on their airframes and actual flying is carefully rationed. One is PA474 of the Battle of Britain Memorial Flight and the other is FM 213 of the Canadian Warplane Heritage museum.
Two Lancasters with extensive combat histories in Australian Squadrons have survived as static exhibits. S for Sugar of 463/467 Squadron RAAF flew 135 operational sorties, and is now on display at the RAF Museum, Hendon. G for George of 460 Squadron RAAF flew 90 operational sorties, and is now on display at the Australian War Memorial, Canberra.
- The Avro History
- Surviving Birmingham and Manchester made Avro Lancasters
- PA474 of the Battle of Britain Memorial Flight
- FM 213 of the Canadian Warplane Heritage Museum
- Lancaster FM159 - The Nanton Lancaster
- The Australian War Memorial G for George page
- R1155 radio receiver
|Related development||Avro York|
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License.
But is bullying—which the stopbullying.gov website of the Department of Health and Human Services defines as "teasing," "name-calling," "taunting," "leaving someone out on purpose," "telling other children not to be friends with someone," "spreading rumors about someone," "hitting/kicking/pinching," "spitting" and "making mean or rude hand gestures"—really a growing problem in America?
Despite the rare and tragic cases that rightly command our attention and outrage, the data show that things are, in fact, getting better for kids. When it comes to school violence, the numbers are particularly encouraging. According to the National Center for Education Statistics, between 1995 and 2009, the percentage of students who reported "being afraid of attack or harm at school" declined to 4 percent from 12 percent. Over the same period, the victimization rate per 1,000 students declined fivefold.
When it comes to bullying numbers, long-term trends are less clear. The makers of "Bully" say that "over 13 million American kids will be bullied this year," and estimates of the percentage of students who are bullied in a given year range from 20 percent to 70 percent. NCES changed the way it tabulated bullying incidents in 2005 and cautions against using earlier data. Its biennial reports find that 28 percent of students ages 12-18 reported being bullied in 2005; that percentage rose to 32 percent in 2007, before dropping back to 28 percent in 2009 (the most recent year for which data are available). Such numbers strongly suggest that there is no epidemic afoot (though one wonders if the new anti-bullying laws and media campaigns might lead to more reports going forward).
The most common bullying behaviors reported include being "made fun of, called names, or insulted" (reported by about 19 percent of victims in 2009) and being made the "subject of rumors" (16 percent). Nine percent of victims reported being "pushed, shoved, tripped, or spit on," and 6 percent reported being "threatened with harm." Though it may not be surprising that bullying mostly happens during the school day, it is stunning to learn that the most common locations for bullying are inside classrooms, in hallways and stairwells, and on playgrounds—areas ostensibly patrolled by teachers and administrators.
None of this is to be celebrated, of course, but it hardly paints a picture of contemporary American childhood as an unrestrained Hobbesian nightmare. Before more of our schools' money, time and personnel are diverted away from education in the name of this supposed crisis, we should make an effort to distinguish between the serious abuse suffered by the kids in "Bully" and the sort of lower-level harassment with which the Aaron Cheeses of the world have to deal.
In fact, Aaron Cheese, now a sophomore in high school with hopes of becoming a lawyer, provides a model in dealing with the sort of jerks who will always, unfortunately, be a presence in our schools. At the end of "Stop Bullying," he tells younger kids, "Just talk to somebody and I promise to you, it's going to get better." For Aaron, it plainly has: "It has been turned around actually. I am a generally liked guy. My last name has become something that's a little more liked. I have a friend named Mac and so together we are Mac and Cheese. That's cool."
Indeed, it is cool. And if we take a deep breath, we will realize that there are many more Aaron Cheeses walking the halls of today's schools than there are bullies. Our problem isn't a world where bullies are allowed to run rampant; it's a world where kids like Aaron are convinced that they are powerless victims.
Nick Gillespie is the editor in chief of Reason.com and Reason.tv and, with Matt Welch, co-author of The Declaration of Independents: How Libertarian Politics Can Fix What's Wrong with America. Note: This article originally appeared in the March 31, 2012 edition of the Wall Street Journal.
Thanks to the Westward Journey Nickel Series™, America's nickel changed for the first time in 66 years! Two new designs took their turns on the back of the nickel in 2004, while the image of President Thomas Jefferson on the front was the same as the image on earlier nickels. But the front of the 2005 and 2006 nickels showed new images of Jefferson as well.
The new designs celebrate two events of about 200 years before: the Louisiana Purchase and the westward journey of Lewis and Clark.
When Thomas Jefferson was President of the United States, he bought a piece of land from France called "Louisiana," an area much larger than the state of Louisiana today...so large, in fact, that buying it made the United States twice as large as it had been before. Since Thomas Jefferson was already on the nickel, it was the perfect coin on which to celebrate the Louisiana Purchase.
In 1804, President Jefferson sent a group led by Lewis and Clark to explore this land, to describe the flora (plants) and fauna (animals) they saw, and to find a water route to the Pacific Ocean if there was one.
Jefferson had a medal made as a token of peace, which we call his "Peace Medal." The explorers were to give the medals as gifts to the Native American chiefs they met as a sign of peace.
The design that was used on Jefferson's Peace Medal is used on the first of the new nickels, the Peace Medal Nickel. It shows the hand of a Native American and the hand of a European-American clasped in a friendly handshake below a crossed pipe and tomahawk. The words "Louisiana Purchase" are inscribed above the date of the purchase, 1803.
The second nickel of 2004 shows the keelboat that was part of the transportation for Lewis and Clark's expedition. In this Keelboat Nickel design, captains Meriwether Lewis and William Clark are standing on deck at the start of their famous trip.
The new design on the front of the 2005 nickels features a new image of Thomas Jefferson. The word "Liberty" appears in a style that is like Jefferson's own handwriting.
The first new 2005 design on the nickel's reverse (back) features the American bison, also called a buffalo. This animal used to roam the plains in such great numbers that the animal was noted often by Lewis and Clark in their journals. This buffalo also reminds us of the American Indians who counted on the animal for food, clothing, and shelter, and of all the wildlife that the explorers wrote about and brought back to the United States as a record for science.
The second reverse design shows a view of the Pacific Ocean, the goal that the Lewis and Clark Expedition reached after more than a year of hard travel. The scene surrounds a quote written by Captain Clark: "Ocean in view! O! The joy!"
Hopes were dashed when the Expedition proved that the Missouri River was not part of a Northwest Passage across the continent by water and that there were two mountain ranges to cross instead of one. Still, less than a century later, the continent was crossed by telegraph and railroad lines that brought the eastern and western coasts together in ways hard to imagine in Lewis and Clark's time. Today, with cars, airplanes, telephones, and computers, the distance between coasts seems even shorter... but the steps that Lewis and Clark took were among the first to bring them so close together.
So be on the lookout for these new nickels...they really are history in your pocket!
Just as Lewis and Clark came full circle, returning to the East and Jefferson's home in Monticello, the "Return to Monticello" Nickel brings the Westward Journey Nickel Series back to its beginnings: Thomas Jefferson on the front and his home, Monticello, on the back. And yet, how the coin has changed!
The final obverse design in the series features a new portrait of Jefferson. And, instead of the usual side view, Jefferson faces forward. This design marks the first time a presidential bust on a circulating American coin is not shown in profile.
The reverse design, although very much like the pre-2004 design, is actually very different. The new image takes advantage of the advances in coin-making technology to produce a crisper, more detailed Monticello than has ever been seen on the five-cent coin.
2011 Census: Cornish identity
Last updated: 18/02/2013
73,200 people, or 14% of the total population, stated in the 2011 Census that they have Cornish national identity. Information on Cornish national identity is included in the 2011 Census, and an overview of the headline figures follows.

Background to Cornish and the Census

The Census in 2001 was the first to enable people to identify themselves as Cornish under the White: British category, by writing in the word Cornish. In 2001, some 37,000 people recorded their ethnicity as Cornish.

There was no specific tick-box category for Cornish in either the ethnic group or the national identity questions in the 2011 Census; however, as in the 2001 Census, there were write-in options which provided the opportunity for people to describe themselves as Cornish, if they wished to do so. Statistics from the 2011 Census will include analysis of written-in responses.

Anyone who recorded their national identity as Cornish using the write-in option will be coded, alongside anyone who recorded themselves as both British and Cornish (the national identity question allows for multiple responses).

The main language question, available for the first time in the 2011 Census, also enabled Cornish people to record their language for the first time, and will therefore provide important statistics on the prevalence of the language.

In the 2011 Census, the three questions that apply are:

15. What is your national identity?
16. What is your ethnic group?
18. What is your main language?

During the Census period the Council put together a poster containing information on how people could complete their forms with 'Cornish' should they wish to do so.

Call yourself Cornish? 2011 Census Poster

Cornish Census Release Update

Anyone who recorded themselves as Cornish using the write-in option will be coded and, for the first time, the Office for National Statistics (ONS) will be publishing Cornish statistics as part of the standard Census tables within the general release calendar.

There will be four releases of Census data between 16 July 2012 and October 2013, and the first release of data on Cornish will be between November 2012 and February 2013. This will include those who identified themselves as Cornish under both ethnic group and national identity, down to the lowest level of census geography, specifically:

- Key Statistics Table: National identity
- Quick Statistics Table: Ethnic (write-in) groups (England and Wales)

ONS have also announced that they will be producing a range of products designed around small population groups. These will explore the characteristics of some small population groups, and it has been confirmed that Cornish is one of these groups, subject to meeting ONS-determined population thresholds. Small population data (Cornish) will only be produced if there are 50 or more qualifying people in the given middle layer super output area geography. The implication is that less information may be available for certain geographies if this threshold is not met. Separate sets of outputs are being developed for areas where there are 100 or more, and 200 or more, people from the same small population group.

More information on these products will be made available at a later date, but data is unlikely to be available before July 2013.

The Council will undertake analysis of these figures once released. There may be a requirement for the Council to commission tables from the ONS; however, until the details of the small population group data are made available it is not possible to take a view as to what additional information may be needed.
Galileo fixes Europe's position in history
Europe’s new age of satellite navigation has passed a historic milestone – the very first determination of a ground location using the four Galileo satellites currently in orbit together with their ground facilities.
This fundamental step confirms the Galileo system works as planned.
A minimum of four satellites is required to make a position fix in three dimensions. The first two were launched in October 2011, with two more following a year on.
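The four-satellite requirement follows from the navigation math: a receiver must solve for its three position coordinates plus its own clock offset, so four pseudorange measurements are the minimum. The sketch below, not part of the ESA article, illustrates the idea with made-up satellite coordinates and a synthetic receiver state (none of the numbers are real Galileo data), solving the nonlinear system by Gauss-Newton iteration:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

# Hypothetical satellite positions in an Earth-centred frame (metres);
# illustrative values only, not real Galileo ephemerides.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])

# Ground-truth receiver state, used only to synthesise the measurements.
true_pos = np.array([3.9e6, 3.0e5, 5.0e6])  # roughly on Earth's surface
true_bias = 1e-4                             # receiver clock error (s)

# Each pseudorange = geometric range + c * (receiver clock bias).
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

# Gauss-Newton iteration for the four unknowns (x, y, z, clock bias).
state = np.zeros(4)  # start at Earth's centre with zero bias
for _ in range(10):
    geom = np.linalg.norm(sats - state[:3], axis=1)
    residual = pseudoranges - (geom + C * state[3])
    # Jacobian rows: negative unit line-of-sight vectors, plus c for the bias.
    J = np.hstack([-(sats - state[:3]) / geom[:, None],
                   np.full((len(sats), 1), C)])
    state += np.linalg.lstsq(J, residual, rcond=None)[0]

# state[:3] now holds the recovered position, state[3] the clock bias.
```

In a real receiver the satellite positions come from broadcast ephemerides and the measurements carry noise, so more than four satellites are typically used and the same system is solved in a least-squares sense.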
“Once testing of the latest two satellites was complete, in recent weeks our effort focused on the generation of navigation messages and their dissemination to receivers on the ground,” explained Marco Falcone, ESA’s Galileo System Manager.
This first position fix of longitude, latitude and altitude took place at the Navigation Laboratory at ESA’s technical heart ESTEC, in Noordwijk, the Netherlands on the morning of 12 March, with an accuracy between 10 and 15 metres – which is expected taking into account the limited infrastructure deployed so far.
This fix relies on an entirely new European infrastructure, from the satellites in space to the two control centres in Italy and Germany linked to a global network of ground stations on European territory.
“The test of today has a dual significance: historical and technical,” notes Javier Benedicto, ESA’s Galileo Project Manager.
“From the historical perspective, this is the first time ever that Europe has been able to determine a position on the ground using only its own independent navigation system, Galileo.
“From the technical perspective, generation of the Galileo navigation messages is an essential step for beginning the full validation activities, before starting the full deployment of the system by the end of this year.”
With only four satellites for the time being, the present Galileo constellation is visible simultaneously for a maximum of two to three hours daily. This window will lengthen as more satellites join them, along with extra ground stations coming online, allowing Galileo's early services to start at the end of 2014.
With the validation testing activities under way, users might experience breaks in the content of the navigation messages being broadcast. In the coming months the messages will be further elaborated to define the ‘offset’ between Galileo System Time and Coordinated Universal Time (UTC), enabling Galileo to be relied on for precision timing applications, as well as the Galileo to GPS Time Offset, ensuring interoperability with GPS. In addition, the ionospheric parameters for single frequency users will be broadcast at a later stage.
A European partnership
The definition phase and the development and in-orbit validation phase of the Galileo programme were carried out by ESA and co-funded by ESA and the European Community.
The Full Operational Capability phase is managed and fully funded by the European Commission. The Commission and ESA have signed a delegation agreement by which ESA acts as design and procurement agent on behalf of the Commission.
A paper prepared for the Annual Meeting of the Association for Asian American Studies, April 19, 1997, Seattle, Washington. <http://www.uidaho.edu/special-collections/stepping.htm>
Terry Abraham
Head, Special Collections and Archives
University of Idaho Library
Moscow, ID 83843-2351
Many Chinese laborers in the American West used domestic service as an entry point to entrepreneurial opportunities. Following a brief description of the role of Chinese servants in the American West, we will examine case studies of individuals who used domestic service as an effective stepping stone to more entrepreneurial, higher-status activities. Since not all servants became entrepreneurs, we will look at characteristics of entrepreneurship for insight into the life decisions made by Chinese servants and laborers. "Stepping Stones to Empowerment: Chinese Servants in the American West" continues the author's earlier research on Chinese servants in the American West. Travel support to make this presentation was provided by the University of Idaho's John Calhoun Smith Memorial Fund.
After the discovery of gold in the West, labor was always scarce because every laborer mistakenly believed that work in the gold fields was more remunerative than any other kind of employment. At the very least, the gold rushes drained off large numbers of workers who otherwise would have been filling jobs and building communities. There was also a resulting imbalance between the number of males and females, with females in decidedly shorter supply. The larger society greatly felt the lack of lower-class women who could serve as domestics. At one point, San Francisco bachelors even shipped their dirty laundry to Hawaii to be washed. (For background on Chinese servants: Abraham, Terry. Class, Gender, and Race: Chinese Servants in the North American West. A paper presented at the Joint Regional Conference Hawai'i/Pacific and Pacific Northwest Association for Asian American Studies, Honolulu, March 26, 1996, see <http://www.uidaho.edu/special-collections/papers/chservnt.htm>. On laundry to Hawaii, see Bancroft, Hubert Howe. History of California. San Francisco, History Company, 1888. v.6, p. 236.)
The shortage of labor for such tasks as doing the laundry or building the transcontinental railroad meant that employers sought to import workers, either from the eastern states or from across the Pacific. Coupled with outward propelling forces such as war, famine, and floods, southern China responded to the pull of work by sending laborers to western ports.
In accordance with Chinese custom, where women were expected to stay at home and sustain the husband's family, these immigrant laborers were almost entirely male. The demand for domestic labor eventually met the supply of Chinese workers. As a result, male Chinese laborers assumed the usually female role of domestic servant on the West Coast of the United States and Canada, despite efforts to recruit from traditional sources in the eastern and southern states. (Katzman, David M. Seven days a week: Women and Domestic Service in Industrializing America. New York, Oxford University Press, 1978. p. 207.)
Domestic service involved cooking, cleaning, waiting table, laundry, child care, and the hundreds of other tasks that the primary caregiver in each home provided. Many households required servants simply because the amount of work was too much for any one person. In addition, social mores stressed the incapacity of adult women for domestic labor. The weak and wan dependent woman of popular literature could not be expected to carry and boil tubs of water to do the laundry every week. These kinds of jobs required sturdy immigrant women who didn't have fainting spells. (Katzman, David M. Seven days a week: Women and Domestic Service in Industrializing America. New York, Oxford University Press, 1978. pp. 111, 120, 149.) In addition, the rich social life of upper and middle-class women required more "free" time than continual house-cleaning and cooking provided. Afternoon social calls, teas, receptions, and expansive dinners were part of the life-style of the socially conscious. However, as one observer noted: "For what good purpose this assistance [of servants] sets the women free is not easy to guess; rocking the chairs seems the most arduous duty in many Californian homes, and it is one which is faithfully carried out." (Shepherd, William. Prairie experiences in handling cattle and sheep. Freeport, Books for Libraries Press, [1971 reprint] 1885. 116-117.)
Into this economic niche resulting from overwhelming demographic, social, and political factors stepped the Chinese laborer. The Chinese were no more suited for domestic service in the West than were the Basque fishermen who became sheepherders there; this was just an artificial economic niche that circumstances made it possible for them to fill.
While much of the late Victorian era social life existed only in the magazines and other taste-arbiters, it did seem that every home must have its Chinese servant. Not just in the provincial capitals such as San Francisco or Victoria, but even in remote mining towns in Idaho, and inland communities such as Boise, Walla Walla, and Lewiston. In mountainous Pierce, Idaho, for instance, in 1880, there were seven household cooks and three Chinese servants. (Stapp, Darby C. "The Documentary Record of an Overseas Chinese Mining Camp." in Hidden Heritage: Historical Archaeology of the Overseas Chinese, ed. by Priscilla Wegars. Amityville, Baywood, 1993. p. 15.) One woman remembered of Boise, "nearly everyone whom I knew had a Chinese cook, and usually he was not only the cook but generally house boy -- washing, ironing, and doing all of the heavy work." ("Boise in the Seventies was a Delightful, Gay City," Idaho Statesman, 23 July 1939, p. 6, as quoted in Yu, Li-hua. Chinese Immigrants in Idaho. Ph.D. dissertation, Bowling Green State University, 1991. pp. 128-129.) Another noted: "All the first families had them, and so did the young officers stationed at Boise Barracks." ("Dragon is Gone," undated Statesman clipping in ISHS vertical file, as quoted in Yu, Li-hua. Chinese Immigrants in Idaho. Ph.D. dissertation, Bowling Green State University, 1991. p. 129.) It was not uncommon for military officers to have Chinese servants in the western posts. (Photographs of General O. O. Howard's Chinese servants, as presented by Donna Wells of Howard University, Society of American Archivists' annual meeting, Washington, D.C., September 1995; Roe, Frances. Army letters from an officer's wife, 1871-1888. New York, D. Appleton & Co., 1909. passim.) In Walla Walla, in eastern Washington, "In those days anyone who aspired to be classed as one of the Nob Hill set simply had to have a Chinese cook." (Bennett, Robert. Walla Walla, a town built to be a city: 1900-1910. v. 2 (n.p., 1982) 159; as quoted by Jewell, James Robinson. "Straw hat work force: The Chinese role in small town economies." Pacific Northwest Forum, Second Series, 6:1(Winter-Spring 1993) 47.)
Domestic service provided a number of learning opportunities for the Chinese who chose this route. They learned how to cook "American-style," accomplished the rigors of house-cleaning and laundry, and even coped with child care. In addition, servants were in an excellent position to "get inside" the dominant culture. Unlike the railroad or cannery worker who was insulated from the Caucasians by the contractor, the servant was thrown into the midst of a "white" milieu. Learning some English was a requirement, since the lady of the house was certainly not going to learn Chinese.
In addition to domestic duties, many cooks were also the shoppers. They would go to market, interact with the shopkeepers, and select and pay for the food supplies. As butlers and while waiting table they interacted with the social and political elite of the community. A Lewiston, Idaho, resident recalled having a U.S. Senator as a houseguest. During dinner, the Chinese servant asked to be introduced to the assembled company and went around the table shaking hands. (Pfafflin, Grace. Pioneer Chinamen of Idaho. Seeing Idaho, 1:9(February 1938) p. 24.)
Others took advantage of their situation to learn business skills. Gee Sing asked his employer how to read the exchange rates in the newspaper; every night he would study the price of silver in Hong Kong. When it reached his target, he was off to the bank to buy or sell, in order to increase his stake being held for him in China. (Blythe, Samuel G. "Chinese cooks." Saturday Evening Post, 205:44(April 29, 1933) p. 67.)
Domestic service as a stepping stone to entrepreneurship has not, and perhaps can not, be proven. However there are numerous examples in the literature of Chinese men who began their American life as servants and moved out to establish businesses and other ventures. Among these are:
Gin Chow reported in the 1930s that after his arrival in southern California he first washed dishes in a French restaurant, and then went into domestic service for six years. Following that period he became a gardener. Eventually he bought land and became a farmer. (Gin Chow. Gin Chow's First Annual Almanac. Los Angeles: Wetzel Publishing, 1932. p. 29.)
A bright, young go-getter, Gee Hing became a cook for a Californian after learning the trade in the Pacific Union Club in San Francisco. He also had experience driving a laundry truck. His mother called him back to China for an arranged marriage, after which he returned to the States, as planned, to become a merchant. As a grocer in San Bernardino, he kept in touch with his previous Caucasian employer. (Blythe, Samuel G. "Chinese cooks." Saturday Evening Post, 205:44(April 29, 1933) p. 68.)
Quong came to the United States in 1877 and found his first job in San Francisco as a servant. Like many, he found this role too constricting and by 1882 he signed on as a packer in an Alaskan salmon cannery, and was later promoted to foreman. Based on that experience, he set up his own labor contracting business in San Francisco. Forced to find alternate sources for the Chinese goods needed by his laborers, he opened his own import business. Unionization of the Alaskan canneries diminished Quong's role as a labor contractor and supplier of goods. He retired, and died soon after, in 1938. (Chinn, Thomas W. Bridging the Pacific: San Francisco Chinatown and its people. San Francisco, Chinese Historical Society of America, 1989. pp. 80-84.)
James was born in 1891 in Olympia, Washington. He started out as a houseboy and cook, and at age twelve or thirteen received three dollars a week. Later in life he was a Minneapolis restaurateur. (James, Walter. "Walter James: Reminiscences of my younger days," Interview by Him Mark Lai, Laura Lai, and Philip P. Choy; edited by Marlon K. Hom. in Chinese America: History and Perspectives 1995. San Francisco, Chinese Historical Society of America, 1995. pp. 75-86.)
A goldsmith by trade, Dong Tien Shong left Hong Kong in 1873 and found work in Gonzales in the Salinas Valley as a servant for a Spanish family. On his $20 per month salary, he saved $800. He then quit and started a small store offering Chinese goods to the laborers in the valley. He expanded into Salinas and then Pajaro, opening a restaurant as well as additional stores. He died in 1933 at the age of 78 after overexerting himself assisting the survivors of a Pajaro Chinatown fire. (Chinn, Thomas W. Bridging the Pacific: San Francisco Chinatown and its people. San Francisco, Chinese Historical Society of America, 1989. pp. 229-234.)
These brief biographical mentions can be supplemented by closer examination of three individuals who rose from domestic service to positions of prominence and appreciation within their respective communities. These are Ted Loy and Gue Owen of Lewiston, Idaho, and Goon Dip of Seattle, Washington.
Born in the Taishan district of China in 1879, Loy followed his parents to Seattle in 1891. (U.S. Census 1920: Idaho, Nez Perce County, Lewiston, Precinct 3, sheet 2A; Campbell, Thomas W. The Elders/ Ted Loy. Lewiston Morning Tribune, 11 December 1977, p. 5C; Campbell, Thomas W. Ted Loy, Chinese Pioneer, Is Dead at 101. Lewiston Morning Tribune, 21 March 1981, p. 2B; Loy's grandson stated that Ted Loy's name was actually Eng Moon Loy; Eng was his surname (Gorden Lee, personal communication to Priscilla Wegars, 1994). His gravestone in the Lewiston Normal Hill Cemetery gives his name as Eng Ted Loy. I appreciate Priscilla Wegars' provision of her notes on Lewiston pioneers Ted Loy and Gue Owen.) A few years later, possibly after working as a cook in Portland, he was employed on a steamboat traveling on the Columbia River and Snake Rivers, from Celilo Falls to Lewiston, Idaho. (Campbell, Thomas W. The Elders/ Ted Loy. Lewiston Morning Tribune, 11 December 1977, p. 5C; Campbell, Thomas W. Ted Loy, Chinese Pioneer, Is Dead at 101. Lewiston Morning Tribune, 21 March 1981, p. 2B. Loy's grandson, Gorden Lee, stated that Eng Moon Loy was "driven out of Portland for union activities" because he had "joined with Caucasian cooks trying to [work] fewer hours [in order] to spend more time with their families" (Gorden Lee, personal communication to Priscilla Wegars, 1993). For more on steamboats on the river, see Randall V. Mills, Stern-wheelers up the Columbia. Palo Alto, Pacific Books, 1947. pp. 83-84.)
In 1900, by some reports, he was noticed by the local agent for the steamship company, John P. Vollmer, and was offered a position in that household. (Campbell, Thomas W. The Elders/ Ted Loy. Lewiston Morning Tribune, 11 December 1977, p. 5C; Campbell, Thomas W. Ted Loy, Chinese Pioneer, Is Dead at 101. Lewiston Morning Tribune, 21 March 1981, p. 2B; Elsensohn, Sister M. Alfreda. Idaho Chinese Lore. Caldwell, Caxton, 1970. p. 22. Loy lived on the second floor in the Vollmer house.) Vollmer was a prominent businessman in Lewiston with interests in trade, banks, flour mills, electric power, telegraphs, telephones, and transportation. ("John P. Vollmer," in French, Hiram T. History of Idaho. Chicago, Lewis Publishing Company, 1914. v.3, pp. 1006-1007.) Later, Loy transferred his employment to the home of another Lewiston banker, William F. Kettenbach. (Lewiston Morning Tribune, 4 June 1962, p. 14.)
Leaving domestic service, Loy apprenticed under Louie Kim at the Portland Cafe and then moved on to the kitchen at the Bollinger Hotel. (Trull, Fern Coble. The history of the Chinese in Idaho from 1864 to 1910. MA Thesis, University of Oregon, June 1946. pp. 58, 60; Lewiston Morning Tribune, 6 October 1935, sect. 2, p. 6. The dates of Loy's Portland Cafe employment are not known. Lewiston Morning Tribune, 4 June 1962, p. 14.) Married in 1918, by 1920 he owned his own restaurant. (U.S. Census 1920: Idaho, Nez Perce County, Lewiston, Precinct 3, Sheet 2A. The name of the restaurant he owned at that time is not known.) He remained active in the restaurant business as cook, owner and manager of a variety of establishments until his retirement in 1968. (Campbell, Thomas W. The Elders/ Ted Loy. Lewiston Morning Tribune, 11 December 1977, p. 5C; Campbell, Thomas W. Ted Loy, Chinese Pioneer, Is Dead at 101. Lewiston Morning Tribune, 21 March 1981, p. 2B; Bailey, Robert G. and Paul B. Blake, compilers. Nez Perce County, Idaho and Asotin County, Washington 1927 Directory. Lewiston, ID: R. G. Bailey and P. B. Blake. . pp. 57, 60; Lewiston Morning Tribune, 4 June 1962, p. 14; Polk, R. L. and Company. Polk's Lewiston City and Nez Perce County (Idaho) Clarkston City and Asotin County (Washington) Directory 1931-32. Seattle: R. L. Polk and Co. 1931. p. 120; Elsensohn, Sister M. Alfreda. Idaho Chinese Lore. Caldwell, Caxton, 1970. p. 22; Bailey, Robert G., compiler. City of Lewiston and Nez Perce County, Idaho; City of Clarkston and Asotin County, Washington 1948 Directory. Lewiston, ID: R. G. Bailey. pp. 65, 79, 85, 96-A, 128-D.) He was a member of the local temple society, along with other Lewiston restaurateurs. (Idaho State Historical Society photograph, No. 2961.) He died in Lewiston in 1981, age 101. (Campbell, Thomas W. Ted Loy, Chinese Pioneer, Is Dead at 101. Lewiston Morning Tribune, 21 March 1981, p. 2B. 
Ted Loy's gravestone, in the Lewiston Normal Hill Cemetery, is engraved in both Chinese and English. The English reads, "Eng Ted Loy / July 3, 1880 / Mar. 19, 1981.")
Gue Owen arrived in Idaho around 1875 at about twelve years of age. He first worked in the mines at Elk City but soon quit that and dropped down to the Camas Prairie above Lewiston, Idaho. Here, he cooked for prominent landowner Loyal P. Brown in Mt. Idaho. (Lewiston Weekly Tribune, 2(16):1, 11 January 1894.) He apparently worked for a Mrs. Owen, from whom he derived his surname. She taught him to make bread, a skill he used to supply loaves to the Army troops defeated by the Nez Perce at White Bird Canyon in 1877. (Pfafflin, Grace. Pioneer Chinamen of Idaho. Seeing Idaho, 1:9(February 1938) p. 24.)
Gue Owen was employed by the Robinson family in Grangeville from 1875 to about 1885. While in Grangeville he attended school where he honed his English. He also worked for a Mr. John T. Brown and at a Grangeville laundry. (Lewiston Weekly Tribune, 2(16):1, 11 January 1894. Elsensohn, Sister M. Alfreda. Pioneer days in Idaho County, v.1. Caldwell, Caxton, 1947. p.136-137; in 1904 he is reported as having "lived in Lewiston and vicinity since 1877." Lewiston Morning Tribune, 14(285)2:3, September 1904.)
In 1887 or so, he returned to China to get married. After a year, and the birth of a boy, he returned to Idaho. (Lewiston Weekly Tribune, 2(16):1, 11 January 1894.) In 1889 he was apparently employed as cook and servant to anthropologist Alice Fletcher and her troupe who traveled throughout the Nez Perce Indian Reservation re-allotting Indian lands. (Gay, E. Jane. With the Nez Perces: Alice Fletcher in the field, 1889-92. Lincoln, University of Nebraska Press, 1981. p.12.)
Late in 1899 he ran the Kwong Lung Laundry in Lewiston. (Pfafflin, Grace. Pioneer Chinamen of Idaho. Seeing Idaho, 1:9(February 1938) p. 24; Lewiston Teller, 23(58):3, 17 May 1899. Owen apparently preceded Ted Loy in the position as servant to the Kettenbach family.) From there he moved back into domestic service for a local banker. (Pfafflin, Grace. Pioneer Chinamen of Idaho. Seeing Idaho, 1:9(February 1938) p. 24.) About 1900 he worked for a year as a cook in the men's dorm at Lewiston Normal School. (Trull, Fern Coble. The history of the Chinese in Idaho from 1864 to 1910. MA Thesis, University of Oregon, June 1946. p. 57.) According to one local historian, "after he left the dormitory, he ran a hotel [and possibly a store] in downtown Lewiston. He eventually retired, went back to China, and was, according to rumor, robbed and murdered." (Elsensohn, Sister M. Alfreda. Pioneer days in Idaho County, v.1. Caldwell, Caxton, 1947. p.136; Elsensohn, Sister M. Alfreda. Idaho Chinese Lore. Caldwell, Caxton, 1970. p. 20.)
Goon Dip was born in 1862 in the Taishan district of China. In 1876, aged 14, he traveled from Hong Kong to Portland and on to Tacoma, where he became a laborer for a relative. In 1885 or 1886 he returned to China and married. (Information on Goon Dip has been extracted from Chew, Ron, ed. Reflections of Seattle's Chinese Americans: the first 100 years. Seattle, University of Washington Press, 1994. pp. 141-142, and from Jue, William G. and Silas G. Jue. Goon Dip: Entrepreneur, diplomat, and community leader. Annals of the Chinese Historical Society of the Pacific Northwest. Bellingham, 1984. pp. 40-48.)
On his return to Portland, he despaired of employment in the face of the anti-Chinese sentiment in the air. He was taken in by Miss Ella McBride. Repeating the family story, Goon Dip's grandchildren reported: "She brought him home to meet her parents and they employed him as a houseboy. Ella taught the young Goon Dip English and introduced him to the customs of the new world. The bond between him and this young woman was so deep that in later years, Goon Dip would name his youngest daughter after his American friend. The gesture signified his gratitude for her assistance in helping him to adjust to American life." (Jue, William G. and Silas G. Jue. Goon Dip: Entrepreneur, diplomat, and community leader. Annals of the Chinese Historical Society of the Pacific Northwest. Bellingham, 1984. p. 42.)
Later in this account, it is noted that "within a short time, Goon yearned to advance himself above the level of being a servant." He left the McBride family and became the assistant of a Chinese labor contractor, Moy Bok-Hin, and the two remained partners in different ventures for many years. Although he reportedly worked on the railroads in Washington, Idaho, and Montana, he did not speak of it within the family. (Jue, William G. and Silas G. Jue. Goon Dip: Entrepreneur, diplomat, and community leader. Annals of the Chinese Historical Society of the Pacific Northwest. Bellingham, 1984. p. 42. Chew, Ron, Reflections of Seattle's Chinese Americans: the First 100 Years. Seattle, University of Washington Press, Wing Luke Asian Museum, 1994, p. 141, repeats the account of Goon Dip's labors in Montana and elsewhere.)
He initiated a program of retraining disabled Chinese workers as hemstitchers, thus establishing Portland's garment industry. About 1900 he and a cousin opened a store. Then his cousin took over the business and Goon Dip started his own dry goods and hemstitching operation.
By 1906, Goon Dip had expanded his activities to the Seattle area. There he was appointed honorary consul for China representing the interests of the Chinese government, and later made full consul. In that role he was an official representative to the 1909 Alaska-Yukon-Pacific Exposition in Seattle.
It was in that capacity that he met the owner of extensive Alaskan canning operations who needed a ready supply of laborers. Goon Dip became his labor contractor. Well respected and honored for his activities, Goon Dip died in 1933 at the age of 71.
Learning English, and, as a consequence, American ways aided the inclusion of the Chinese workers into American society. Washington State's new Governor, Gary Locke, reported that his grandfather learned English as a houseboy for the Yeagers of Olympia where he worked for free in exchange for the opportunity to learn English. (Locke, Gary. "Address to AAAS," Seattle, Washington, April 17, 1997; Locke, Gary. "Inaugural address," AsianWeek, January 24, 1997. p. 7.)
Then as now, immigrant workers sought help learning the dominant language. Missionaries were eager to teach English as a way of spreading the gospel. Employers also mistakenly believed that Christian teachings would make the Chinese better servants. The Chinese were accused of using the mission school solely as a "free day school" and as an employment service, rather than for religious purposes. (Vernon, Di. "The Chinese as house servants." Good Housekeeping, 12(January 1891)21.)
Newspaper articles complained that the only result of such education was that the pupil would just quit and go "elsewhere for higher wages." ("Chinese Domestic Servants," Idaho Signal (Lewiston), 1:49(February 8, 1873)1, reprinted from the "S.F. Chronicle.") As might be expected, this was the whole point of the effort. It was this kind of upward mobility towards entrepreneurship that the former servants desired.
Florence Grohman found herself acting as teacher to her servant; in exchange for home security she gave him lessons. She wrote: "...I disliked being alone in the house during the long November evenings. Although I had many kind friends who took pity on my loneliness, very often I felt it would be more canny if Gee could be induced to stay in the house till nine or ten o'clock. He did not seem to like the idea at all when I suggested it, and nothing more was said about it for a few days." Then he offered to stay in with her in the evening, giving up his free time in Chinatown, if she would teach him to read and write English. (Grohman, Florence. "The Yellow and White Agony: a chapter on Western Servants" in, Fifteen years' sport and life in the hunting grounds of western America and British Columbia. by W.A. Baillie-Grohman; with a chapter by Mrs. Baillie-Grohman. London: Horace Cox, 1900. 336-337.)
Learning English, as we have seen, became the predominant characteristic of the Chinese who successfully made the transformation from laborer to domestic to entrepreneur. Proficiency in English placed the Chinese at a transfer point between the two cultures. Taking advantage of that juncture is one of the marks of the entrepreneur.
While there were many Chinese who found the rigors of domestic service (always on duty, managing the household and the household's relationships, dealing with the continual patronizing) so onerous that even work in the canneries might have been preferable, there were those who found great satisfaction in the job and were well treated by their employers. (In fact, of the examples reviewed here, none left service because of mistreatment.)
In the old days, when a Chinese servant became attached to a family, he stayed attached. There are plenty of instances where they have served three generations. I had a cook once - Wong Suey, ...who worked for one family for thirty-five years and then left only because the family had practically disappeared. There have been hundreds of families in California where these faithful, expert, skillful servants have come to be major-domos, have had complete control of the ménage, which, by the way, is an obligation a good Chinese cook of the old school takes upon himself whether his employer wants it so or not. And he is usually so competent the employer is glad to submit to his management. (Blythe, Samuel G. "Chinese cooks." Saturday Evening Post, 205:44(April 29, 1933)10.)
Others were drawn in alternate directions. Chin Quong, born in 1861, learned English in a mission school in China. Upon arrival in San Francisco he found his language skills and his mission training helpful in employment at the Chinese Congregational Church and as a domestic servant. Rather than following his initial dream to Gold Mountain, he remained in service to the Church for most of his life, while managing to send three of his six children on to college. (Chinn, Thomas W. Bridging the Pacific: San Francisco Chinatown and its people. San Francisco, Chinese Historical Society of America, 1989. pp. 84-87. This is a different individual than the previously mentioned labor contractor.)
Wing Yee, another example, began in California as a houseboy, then became a cook. He remained with the same family for many years, assuming greater responsibilities as general farm manager. He was encouraged to bring a wife from China, who became housekeeper in his place; and his employers built a home for his growing family next to the main house. (Wong, H.K. Gum Sahn Yun: Gold Mountain Men. n.p., n.p., 1987. 125-130.)
Defining and analyzing entrepreneurship has always been a puzzle to economists and sociologists. Economists complain that there are too many social characteristics to entrepreneurship, while social scientists find too many economic factors at work. One study found that certain non-economic factors proved to be nearly as important as economic circumstances in the emergence of entrepreneurship in a culture. Those identified as significant were, first, the legitimacy of entrepreneurship, or the cultural acceptance of the entrepreneurial role; second, social mobility, the fluidity of movement from one class to another; third, marginality, the mediating role of the entrepreneur on the margins of society. (Wilken, Paul H., Entrepreneurship: a comparative and historical study. Norwood, Ablex, 1979. pp. 8-13; 261-262.)
Chinese entrepreneurs in the West demonstrated the validity of all of these characterizations. Frontier culture was socially and geographically mobile. The Chinese, in particular, spread out from the port cities to the highest mountains and the deepest valleys. Their value as laborers placed them in the heart of the Midwest, eastern metropolitan areas, and the fisheries of the gulf states. With the increasing availability of the railroad, people of all backgrounds traversed the country. Robert Louis Stevenson was one of those who took an emigrant train across the U.S. in 1879; one car, set aside for them, carried only Chinese. (Stevenson, Robert Louis, From Scotland to Silverado, Edited by James D. Hart. Cambridge, Belknap Press of Harvard University Press, 1966. pp. 115, 117, 135.) Socially, the boundaries were more sharply drawn, but in comparison with class structures in China, even Chinese laborers in the United States had greater social and economic mobility.
The dominating ideology of the respective cultures was favorable to entrepreneurship. Working hard and getting ahead was valued by both societies. The Anglo-Saxon ethic prized the "go-getters" who made things happen. The Chinese (or more precisely, Southern Chinese) characteristic that sustained the entrepreneur was acquisitiveness, where wealth accumulation was the means to status for one's family and lineage. (Hafner, James A. "Market gardening in Thailand: The origins of an ethnic Chinese monopoly." in The Chinese in Southeast Asia, v.1, edited by L.Y.C. Lin and L.A.P. Gosling. Singapore, Maruzen Asia, 1983. p. 41; see also: Pan, Lynn. Sons of the Yellow Emperor: A history of the Chinese diaspora. Boston, Little Brown, 1990. pp. 244-245.) In becoming merchants, it has been noted, the Chinese in America found a higher status than they would have had in the same role in China. This is often attributed to the importance of trade in the American scheme of things; but it appears to be a cultural signifier more common to South China. (Gosling, L. A. Peter, "Chinese crop dealers in Malaysia and Thailand: The myth of the merciless monopsonistic middleman." in The Chinese in Southeast Asia, v.1, edited by L.Y.C. Lin and L.A.P. Gosling. Singapore, Maruzen Asia, 1983. p. 151.)
Not only was the entrepreneurial role encouraged by the society, but the Chinese, as ethnic and racial minorities, found themselves at the very margins of the majority society. Truck gardeners were a prime example of how the Chinese assumed a mediating role between cultures. Growing vegetables for their own use, Chinese gardeners found their crops in high demand among the Caucasian population. In remote mining communities, they carved carefully sited garden terraces into south-facing hillsides at lower elevations. They then provided early vegetables to the miners still locked in winter's snows at higher elevations. (Fee, Jeffrey M. "Idaho's Chinese Mountain gardens," in Hidden Heritage: Historical Archaeology of the Overseas Chinese, ed. by Priscilla Wegars. Amityville, Baywood, 1993. pp. 65-96.) Here the Chinese found an entrepreneurial niche that the dominant culture rewarded.
Studies in South-East Asia have identified other characteristics that fostered Chinese entrepreneurship. The Chinese had little incentive to invest in agricultural enterprises requiring extensive land holdings; they needed quick access to their capital both in response to anti-immigrant pressures and their own desires to cash out and return home. Newly developing market economies such as those in the West also offered increasing economic opportunities, often requiring little in the way of capital expenditures. (Lim, Linda Y.C. "Chinese economic activity in Southeast Asia: An Introductory Review." in The Chinese in Southeast Asia, v.1, edited by L.Y.C. Lin and L.A.P. Gosling. Singapore, Maruzen Asia, 1983. pp. 2-3.)
Entrepreneurship is the ability to see value where others do not. It is also the ability to "make lemonade when life hands you lemons." Living on the margins of the culture attunes one to the imbalance of goods and services. Domestic service provided the Chinese with an experience at the heart of the culture, within the Caucasian home, in the bosom of the family; an experience that offered glimpses of needs that could be fulfilled from the margin. Many seized the entrepreneurial moment and made a successful life for themselves in a strange land among a strange people.
April 1997
On September 21 to 23, more than 90 Antarctic scientists, data experts, writers, and students gathered at Sylvan Dale Ranch, nestled in the rolling plains near Loveland, Colorado. This three-day retreat brought diverse research disciplines from around the world to focus on rapid changes in the West Antarctic Ice Sheet (WAIS) and related areas of other ice caps.
Unlike most of the ice in Antarctica, the West Antarctic Ice Sheet rests on bedrock below sea level. In the 1970s, theoretical research suggested this made it vulnerable to a rapid collapse, potentially in as little as two centuries, that could add several meters to sea level. NASA researcher Bob Bindschadler said, “This research suggested that ice sheets were not stable if they sat on a bed below sea level.” Once the ice sheet began to retreat, water would flow under it, lifting the ice and floating it off its resting place.
Research on the WAIS started off with glacier studies, but researchers soon realized that there were more factors involved. “As the science evolved, we realized it wasn’t just a glaciological problem. There were additional aspects of the science that involved meteorology, oceanography, ice coring, and subglacial geology,” said Bindschadler. The researchers needed to measure and monitor such things as whether the ice sheet was gaining or losing mass, whether glaciers were speeding up their flows, and how much these changes were adding to sea level.
An interdisciplinary community
Each year, scientists meet at the workshop to share the latest WAIS research, and mull over difficult questions like how the valleys and lakes underneath the ice sheet affect how it moves, or what happens to glaciers when the ice shelves in front of them collapse. NSIDC Lead Scientist Ted Scambos organized the workshop this year along with NSIDC and CIRES staff. He said, “The WAIS meeting is the key meeting of the year for ice sheet dynamics, remote sensing, and the net growth or shrinkage—mass balance—of the WAIS and other major ice sheets.”
This year, researchers presented new data that helps better quantify just how much ice is trapped in the giant ice sheet, how fast it is moving, and where it is going. Researchers from UC Irvine presented a new, virtually complete map of ice velocity across Antarctica, showing where the ice is flowing fastest. Scientists also discussed how to best harness data from the NASA Gravity Recovery and Climate Experiment (GRACE) satellite to measure the mass of the ice sheet, and determine how that data might fit together with new data from the European Cryosat mission and the upcoming Ice, Cloud, and Land Elevation Satellite–2 (ICESat–2). New radar studies presented detailed maps of the lakes and rivers that flow beneath the ice sheet.
Christine LeDoux, a researcher at Portland State University, attended the workshop for the first time this year. She said, “The length of the meeting and social opportunities made it possible to talk with many people, working on different parts of problems similar to mine, or interesting in different ways.” LeDoux presented details of the history of ice flow and fracturing on the Ross Ice Shelf using MODIS mosaic images.
As observations of West Antarctica improve, the scientists are realizing that it’s more important than ever to continue the meeting. Bindschadler said, “We have been astonished at how rapidly changes can occur. We are no longer talking about hypothetical dynamics of the ice sheets, but we are perhaps witnessing the early stages of this.”
John Adams was born on October 30, 1735 on a small farm in Massachusetts. His parents, John and Susanna, although not educated themselves, sent their son to Harvard, where he graduated in 1755. After graduating he taught school and then went on to study law. In 1758 John Adams was admitted to the Boston Bar. While still studying law, Adams became interested in the fast-growing movement of rebellion against the unfair treatment of the colonies by England. In October of 1764 Adams married Abigail Quincy Smith. This was a very unsettled time for the colonies. The French and Indian War was over and England was in serious debt. To relieve some of this financial stress, Parliament passed a series of resolutions to raise funds in America. These resolutions became known as the Stamp Act. Unlike his cousin, Samuel Adams, John didn't react aggressively toward the new taxes. Although not in favor of the taxes levied against the colonies, he did not support the brutal riots led by the Sons of Liberty. Instead he pushed for retaliation on the courtroom floor rather than in the streets of Boston. His peaceful approach was not shared by the resentful colonists, and Adams found himself fearfully supporting the inevitable break from England.
Adams became an important leader in the fight for liberty. From 1774 to 1778 he was a member of the Continental Congress. He was also appointed to the committee to write the Declaration of Independence. Thomas Jefferson did most of the writing but it was Adams who debated and challenged Congress to approve this Declaration. After leaving Congress in October of 1777, Adams authored the constitution for Massachusetts. Adams' role during the revolution was that of a peace mediator. He was one of the men who drew up the final peace treaty with England. He then served as the United States Ambassador to England.
In 1789, when George Washington was elected President, Adams was elected Vice President. He once wrote his wife that the office of Vice President was "the most insignificant office that ever the invention of man contrived or his imagination conceived." In 1797 Adams was elected President. The fledgling government was in turmoil. The Federalist Party, led by Alexander Hamilton, believed that government should be ruled by a small, powerful group of men. The Republican Party believed that a system run by the mass of people would be best. Adams supported neither party but was elected by the Federalists.
During this time, France and England were at war. Adams did not want to involve the U.S. in this war and sent a delegation to France to mediate peace. France refused to talk unless the U.S. paid them a vast sum of money. Adams, although anxious for peace, was not going to pay France a bribe. Instead he commissioned the establishment of the first U.S. Navy. The U.S. was not directly involved in this war, but many battles were fought between French and U.S. warships. Against the wishes of Hamilton and the Federalist Party, Adams sent another delegation in 1800 to talk peace with France. This time France was receptive and the war was soon over.
Adams left Washington in 1801 and returned home. He lived to see his son, John Quincy Adams, elected in 1824 as the 6th President. On July 4, 1826, exactly fifty years after the Declaration of Independence was signed, John Adams died.
Summary for HealthiNation's Autism
The following is an interview with Dr. Doreen Granpeesheh, founder of the Centers for Autism and Related Disorders (CARD).
Autism Therapy Options information also provided by Dr. Holly Atkinson after the Granpeesheh interview.
Special Guest Host: Lou Diamond Phillips
Doreen Granpeesheh, PhD: When I started working with children with autism back in 1978, it was such a rare disorder that nobody really knew what autism is. I would tell my friends I work with autistic children and they would say, "Oh, they're artistic, they draw well?" And it wasn't probably until the movie Rain Man came out when people started to recognize what autism is.
Autism is a childhood development disorder that is characterized by problems or delays in basically three areas of functioning.
The first area is social skills. Children with autism have pretty significant delayed social skills. They won't have full eye contact with anyone. They won't develop peer relationships. They won't develop friendships or want to play with another child.
The second area is communication, so children with autism will either have delayed language or language that's not appropriate to the context.
And the third area is what's called self-stimulatory or ritualistic behavior. These are behaviors like hand flapping or body rocking, lining up their toys instead of playing with them, those types of things.
If a child has a total of eight symptoms within these three areas, he will receive a diagnosis of autism. If there are fewer than that, but the child still has pervasive delays within those three areas, then he will get the diagnosis of Pervasive Developmental Disorder or PDD. Now, if a child doesn't have a language delay, but he still has those kinds of aberrant behaviors and he still has the social delays, then he will get a diagnosis of Asperger's Syndrome. There are very subtle differences between them, and they are all part of the autism spectrum. And of course if a child has fewer delays or less pronounced delays, he or she is likely to do better.
The earliest sign that most parents notice is a lack of language. So, most of the time when a parent comes to see me for the first time, they tell me that they started to get concerned when their child didn't start developing language at around age one and one-half or so. The other types of symptoms that parents tend to talk about are things like basic attachment behaviors. A lot of mothers will report that, "Even when I called his name or tried to console him when he was hurt, he did not react to me holding him, he didn't react to comfort, he never came to me for comfort if he was hurt." Those types of attachment behaviors seem to be lacking. Those are the two areas that parents notice first.
Many years ago the belief was that autism was related to the "frigid mother." That's the idea that the mother had not provided enough of an interaction or reinforcement to the child and that's why the child had become self isolated and eventually autistic. It was around the 1960s when Dr. Bernard Rimland wrote the book Infantile Autism and said "No, autism has nothing really to do with parenting styles; rather, it is a neurological disorder, and we have to be very careful to differentiate that."
Over the years we've learned that autism is a genetically-based disorder. First of all, there has to be a certain number of genes present for the child to develop this predisposition. And if you have these particular genes, then some environmental factor will set off the symptoms of autism.
Management of Autism: Therapy Options
Hosted by Dr. Holly Atkinson, MD
Autism is best managed when parents and doctors work together to recognize the warning signs and begin a treatment plan as early as possible. There are a number of different treatment philosophies. The main types include developmental and behavioral approaches and biomedical treatment. Talk to your doctor or specialist about what's right for your child and family. This partnership is the key to managing this condition. Many times families use a combination of treatments.
Developmental and Behavioral Management
The developmental approaches entail teaching autistic children certain skills to help them move forward in life. These skills include:
- Learning how to communicate better
- Creating their own ideas
- Understanding what is happening in the environment around them.
With these and other skills in place, autistic children can develop further and reach key milestones. For example, "floor time" is one of the types of the developmental approach. Under this technique, a parent or therapist will use a child's interest to advance his or her skills. In addition, behavioral approaches often use a "reward system" to produce positive actions from a child. The main type of behavioral treatment is called ABA, or Applied Behavior Analysis.
Biomedical treatments focus on medication and diet. The diet aspect of biomedical treatment is controversial and not all researchers agree on how well it works.
Medication can be used to improve behaviors that interfere with learning. These include hyperactivity, violence and obsessive-compulsive behavior. However, the medications do not cure the core symptoms or condition of autism.
There are other therapies that may be part of an overall treatment plan.
- PECS, or the Picture Exchange Communication System. This uses pictures to teach children to communicate. Through the telling of social stories, children learn about different life situations and how to handle them.
- Sensory Integration. This helps children learn to manage their sensitivity to light, sound, and touch.
No matter what approach you choose, the key to treating autism is early intervention. Continuing research may help us learn more about the cause of autism and how to treat it.
HealthiNation offers health information for educational purposes only; this information is not meant as medical advice. Always consult your doctor about your specific health condition.
In 1948, based on statistics gathered by the British Columbia Beef Cattle Growers' Association, there were 98,000 head of beef cattle in the southern Interior Plateau, not including the Okanagan and Boundary districts. Over the next ten years, there was a steady increase in the number of cattle raised in the area. By 1960, it was estimated that this number had increased to approximately 130,000 head. The dominant breed was Hereford -- fully two-thirds of all beef cattle were of this breed. The remainder were pure- or cross-bred Shorthorns, since it was generally felt that the Shorthorn cows provided better milk and faster weight-gain for calves. There were also scattered herds of Aberdeen Angus.
Until the mid-1950s, the typical herd in the British Columbia Interior consisted of breeding stock, yearlings, and two-year olds. Occasional three-year olds were marketed off the more isolated ranges. These cattle were marketed as two-year olds finished on grass and were shipped directly from the ranch. Most of the cattle went by rail to the Greater Vancouver area from stockyards like those at Williams Lake.
Another significant aspect of British Columbia ranching during the post-war years was the large number of ranches that were purchased by Americans. Beginning shortly after the war and peaking in the 1960s, Americans, driven from their native land by high taxes and land prices, purchased many of British Columbia’s ranches.
Energy comes in two basic forms: potential and kinetic.
Potential Energy is any type of stored energy; it isn’t shown through movement. Potential energy can be chemical, nuclear, gravitational, or mechanical.
Kinetic Energy is the energy of movement: the motion of objects (from people to planets), the vibrations of atoms caused by sound waves or thermal energy (heat), the electromagnetic energy of moving light waves, and the motion of electrons in electricity.
Each form of energy can be transformed into any of the other forms, but energy isn’t destroyed or created. Losses of energy can always be accounted for by small transformations to other types of energy, like sound and heat. Power plants convert potential energy or kinetic energy into electricity, a type of kinetic energy, and electricity in turn can be converted back into other forms of energy, like heat in an oven or light from a lamp.
Forms of Potential Energy
Chemical energy is stored in the bonds between atoms. (See here for more about atoms.) This stored energy is released and absorbed when bonds are broken and new bonds are formed – chemical reactions. Chemical reactions change the way atoms are arranged. Like letters of the alphabet that can be rearranged to form new words with very different meanings, atoms go through chemical reactions to be reorganized to form new compounds with vastly different properties. Each compound has its own chemical energy associated with the bonds between the atoms it contains.
When we burn sugar (a compound made of hydrogen, oxygen, and carbon) during exercise, its components are reorganized into water (H2O) and carbon dioxide (CO2). These reactions both absorb and release energy, but the net reaction releases energy.
Chemical reactions that produce net energy are called exothermic. When gasoline is burned, the reactions taking place are exothermic and thermal energy is released, which can be used to power an engine. Meanwhile, chemical reactions that absorb net energy are called endothermic.
Nuclear energy is the stored potential of the nucleus, or center, of an individual atom. Most atoms are stable on Earth; they retain their identities as particular elements, like hydrogen, helium, iron, and carbon, as identified in the Periodic Table of Elements. Nuclear reactions change the fundamental identity of elements.
Unlike everyday chemical reactions that change how atoms are stuck together (rearranging the letters of a word), nuclear reactions change the name of the atoms themselves. (Sort of as if the letter “m” was split into the letters “r” and “n,” or the letters “l” and “o” combined to make the letter “b”). In nuclear reactions, atoms split apart or join together to form new kinds of atoms, called fission and fusion, respectively.
When atoms split apart or fuse together, they release stored nuclear energy, sometimes in huge quantities.
Today’s nuclear power plants are fueled by fission, a breaking apart of uranium or plutonium atoms that releases lots of energy. Hydrogen atoms in the sun experience nuclear fusion, combining to form helium and subsequently releasing large amounts of kinetic energy in the form of electromagnetic radiation and heat.
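The "huge quantities" can be made concrete with Einstein's mass-energy relation, E = mc². The sketch below is illustrative only (the one-gram mass defect is an assumed figure, not from the text):

```python
# Hedged sketch: E = m * c^2 relates mass lost in a nuclear reaction
# (the "mass defect") to the energy released.
C = 2.998e8  # speed of light, m/s

def energy_from_mass_defect(delta_m_kg):
    """Energy in joules released when delta_m_kg of mass is converted."""
    return delta_m_kg * C**2

# Converting just one gram of mass yields roughly 9e13 joules --
# about the output of a large (1 GW) power plant running for a day.
print(f"{energy_from_mass_defect(0.001):.2e} J")
```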
Elastic energy can be stored mechanically in a compressed gas or liquid, a coiled spring, or a stretched elastic band. On an atomic scale, the basis for the energy is a reversible strain placed on the bonds between atoms, meaning there’s no permanent change to the material.
These bonds absorb energy as they are stressed, and release that energy as they are relaxed.
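For an ideal spring this stored energy follows Hooke's law, U = ½kx². The stiffness and displacement below are made-up illustrative values.

```python
# Elastic potential energy stored in an ideal spring: U = (1/2) * k * x^2.
# The stiffness k and displacement x are illustrative, made-up values.
def spring_energy(k, x):
    """Energy (J) stored in a spring of stiffness k (N/m) deformed by x (m)."""
    return 0.5 * k * x**2

U1 = spring_energy(200.0, 0.05)   # a 200 N/m spring squeezed 5 cm
U2 = spring_energy(200.0, 0.10)   # the same spring squeezed twice as far

print(U1, U2)  # doubling the strain quadruples the stored energy
```

The quadratic dependence on x is why a band stretched twice as far snaps back with four times the stored energy.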
Systems can build up gravitational energy as mass moves away from the center of Earth or other objects that are large enough to generate significant gravity (the sun, other planets and stars).
For example, the farther you lift an anvil away from the ground, the more potential energy it gains. The energy used to lift the anvil is called work, and the more work performed, the more potential energy the anvil gains. If the anvil is dropped, that potential energy becomes kinetic energy as the anvil moves faster and faster toward Earth.
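The anvil example can be worked through directly: the lifting work mgh becomes potential energy, and on the way down it all converts to kinetic energy, fixing the impact speed. The mass and height here are illustrative, and air resistance is ignored.

```python
import math

# Work done lifting an anvil becomes gravitational potential energy, U = m*g*h.
# Dropping it converts that back to kinetic energy: (1/2)*m*v^2 = m*g*h.
# Mass and height are illustrative values; air resistance is neglected.
G = 9.81        # gravitational acceleration, m/s^2
mass = 50.0     # anvil mass, kg
height = 2.0    # lift height, m

potential = mass * G * height          # energy stored by the lift, in joules
impact_speed = math.sqrt(2 * G * height)  # speed just before hitting the ground

print(f"stored {potential:.0f} J; impact speed {impact_speed:.2f} m/s")
```

Note that the impact speed depends only on the drop height, not on the anvil's mass: a heavier anvil stores more energy but needs exactly that much more to reach the same speed.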
Forms of Kinetic Energy
A moving object has kinetic energy. A basketball passed between players shows translational energy in the motion that gets the ball from player A to player B. That kinetic energy is proportional to the ball’s mass and the square of its velocity. To throw the same ball twice as fast, a player uses four times the energy.
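The "twice as fast, four times the energy" claim follows from KE = ½mv². A quick check, using a roughly regulation basketball mass and made-up pass speeds:

```python
# Translational kinetic energy: KE = (1/2) * m * v^2.
# A regulation basketball is about 0.62 kg; the pass speeds are illustrative.
def kinetic_energy(m, v):
    return 0.5 * m * v**2

slow = kinetic_energy(0.62, 5.0)    # a 5 m/s pass
fast = kinetic_energy(0.62, 10.0)   # the same ball thrown twice as fast

print(fast / slow)  # -> 4.0: doubling speed takes four times the energy
```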
If a player shoots a basketball with backspin or topspin, the basketball will also have rotational energy as it spins through the air. Rotational energy grows with the square of how quickly the ball spins, and also depends on the ball's mass and on its size and shape. A hollow ball needs more energy than a solid ball of equal mass to spin at the same rate, because its mass is farther from its center.
In shooting a basketball, players often try to add rotational energy as backspin, because it results in the greatest slowdown in speed when the basketball hits the rim or the backboard, increasing the chance that the ball stays near the basket. The opposite direction of spin, a topspin, can be used in games like tennis, because it will help speed up a ball after impact and lowers the angle it travels after the bounce.
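The hollow-versus-solid comparison can be made concrete with rotational kinetic energy, KE = ½Iω², where the moment of inertia I encodes how the mass is distributed. The mass, radius, and spin rate below are illustrative values.

```python
# Rotational kinetic energy: KE = (1/2) * I * omega^2, where the moment of
# inertia I depends on how the mass is distributed. For a ball of mass m, radius r:
#   solid sphere:       I = (2/5) * m * r^2
#   hollow thin sphere: I = (2/3) * m * r^2   (mass farther from the centre)
m, r = 0.62, 0.12   # basketball-like mass (kg) and radius (m), illustrative
omega = 30.0        # spin rate in rad/s, illustrative

ke_solid = 0.5 * (2 / 5) * m * r**2 * omega**2
ke_hollow = 0.5 * (2 / 3) * m * r**2 * omega**2

print(ke_hollow / ke_solid)  # hollow needs ~1.67x the energy for the same spin
```

The ratio (2/3)/(2/5) = 5/3 holds regardless of the particular mass, radius, or spin rate chosen.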
THERMAL ENERGY AND TEMPERATURE
Heat and thermal energy are directly related to temperature. We can’t see individual atoms vibrating, but we can feel their kinetic energies as temperature, which is a reflection of the energy with which atoms vibrate. When there’s a difference between the temperature of the environment and a system within it, thermal energy is transferred between them as heat.
A hot cup of tea in a cool room loses some of its thermal energy as heat flows from the tea to the room. The atoms in the hot tea slow their vibrating as the tea loses heat, and over a few hours the tea cools to the same temperature as the room. At the same time, the room gains the lost thermal energy from the tea, but because the room is much larger than the tea, the temperature of the room increases by so little a person wouldn’t notice it.
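The tea's approach to room temperature is roughly exponential, a behavior often modeled with Newton's law of cooling. The time constant below is a made-up illustrative value; real cooling rates depend on the cup, any lid, and the airflow in the room.

```python
import math

# Newton's law of cooling sketch: the tea decays exponentially toward room
# temperature, T(t) = T_room + (T_0 - T_room) * exp(-t / tau).
# The time constant tau is a made-up illustrative value.
T_ROOM = 20.0   # room temperature, deg C
T_0 = 90.0      # initial tea temperature, deg C
TAU = 40.0      # cooling time constant, minutes (illustrative)

def tea_temp(minutes):
    return T_ROOM + (T_0 - T_ROOM) * math.exp(-minutes / TAU)

for t in (0, 30, 60, 120, 240):
    print(f"after {t:3d} min: {tea_temp(t):5.1f} C")
```

After a few time constants the tea is indistinguishable from room temperature, matching the "over a few hours" observation.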
Adjacent objects at different temperatures will spontaneously transfer heat until they reach the same temperature. However, how much energy it takes to change the temperature of an object depends on what it's made of, a property called heat capacity or thermal capacity. Water has a higher heat capacity than steel, for example. An empty pot on the stove takes almost no time to get to 212 degrees Fahrenheit (the boiling temperature of water). A pot half-full of water will take much longer to reach the same temperature, because water needs to absorb more energy — per weight, per degree — to get as hot as the metal.
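The pot-of-water comparison comes down to Q = mcΔT, the energy needed to raise mass m of a material with specific heat c by ΔT degrees. The specific heats below are standard approximate values; the masses and temperature change are illustrative.

```python
# Energy needed to change temperature: Q = m * c * dT, where c is the
# specific heat capacity. Values are standard approximations in J/(kg*K).
C_WATER = 4186.0
C_STEEL = 490.0

mass = 1.0    # kg of each material (illustrative)
dT = 80.0     # warm both from 20 C to 100 C

q_water = mass * C_WATER * dT
q_steel = mass * C_STEEL * dT

print(q_water / q_steel)  # water needs ~8.5x the energy per kg per degree
```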
Sound waves are made through the transmitted vibration of atoms in bulk — though atoms can also vibrate through heat — and sound can travel by the motion of atoms regardless of whether they are in liquid, solid, or gaseous states. Sound cannot travel in a vacuum because a vacuum has no atoms to transmit the vibration.
Solids, liquids, and gases transmit sounds as waves, but the atoms that pass along the sound don't travel with it (unlike the photons in light). The sound wave travels between atoms, like people passing along a "wave" in a sports stadium. Sounds have different frequencies and wavelengths (related to pitch) and different magnitudes (related to loudness).
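Frequency and wavelength are tied together by the wave speed: v = fλ. The sketch below uses the approximate speed of sound in room-temperature air to convert pitches into wavelengths.

```python
# For any wave, speed = frequency * wavelength (v = f * lambda).
# Speed of sound in room-temperature air is roughly 343 m/s (approximate).
V_SOUND_AIR = 343.0

def wavelength(frequency_hz, speed=V_SOUND_AIR):
    """Wavelength in metres for a given frequency in hertz."""
    return speed / frequency_hz

print(wavelength(440.0))    # concert A: ~0.78 m in air
print(wavelength(20000.0))  # upper limit of human hearing: under 2 cm
```

Lower pitches mean longer wavelengths, which is part of why bass notes bend around obstacles more easily than treble.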
Even though radio waves can transmit information about sound, they are a completely different kind of energy, called electromagnetic.
Electromagnetic energy is the same as radiation or light energy. This type of kinetic energy can take the form of visible light waves, like the light from a candle or a light bulb, or invisible waves, like radio waves, microwaves, x-rays and gamma rays. Radiation — whether it’s coming from a candle or nuclear fission of uranium — can travel in a vacuum, and physicists like to think of electromagnetic radiation as divided into tiny energy packets called photons. Each photon has a characteristic frequency, wavelength, and energy, but all photons travel at the same speed, the speed of light, or nearly 1 billion feet per second.
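Each photon's energy is set by its frequency through E = hf, equivalently E = hc/λ. The comparison below contrasts a visible-light photon with a radio-wave photon; the wavelengths chosen are illustrative round numbers.

```python
# Every photon carries energy E = h * f = h * c / lambda, and all photons
# travel at the speed of light. Wavelengths below are illustrative.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of one photon of the given wavelength."""
    return H * C / wavelength_m

green = photon_energy(550e-9)   # green visible light, ~550 nm
radio = photon_energy(3.0)      # ~100 MHz FM radio wave, ~3 m wavelength

print(green, radio)  # same speed, but the green photon is millions of times more energetic
```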
Electromagnetic energy can be converted to stored chemical energy by plants during photosynthesis, the process by which plants, algae, and some other small organisms use the sun’s electromagnetic radiation to turn carbon dioxide gas into sugar and carbohydrates.
Electric energy is the kinetic energy of moving electrons, the negatively charged particles in atoms. For more information about electricity, see Basics of Electricity.
College Of Arts And Sciences | Mind, Values, and Our World
E103 | 4002 | Eisenberg, P.
The fundamental topic of this course is the way in which careful philosophical consideration of one question leads on to consideration of many other issues related to the first one. The questions we shall examine in this course include: the meaning of life; free will and determinism; the nature of mind and mind's relation to the body; the nature of knowledge as distinct from (true) belief; and the possibility of proving God's existence. Throughout most of the course the brief essays to be read, which were written relatively recently, will be arranged in pairs, pro and con, so that in-class discussion and debate can be facilitated. At the end of the course, however, we shall look at somewhat longer writings by some very great philosophers of the past.

The course grade will be based on class participation, on quizzes, and on several short essays in which the students will be asked to offer assessments of particular philosophical views or of arguments presented in the assigned readings for or against those views.

The goals of the course are to introduce students to philosophical thinking, and to develop the skills needed for it. These skills include careful reading (and listening) and critical thinking. The philosophical issues to be discussed are ones which people throughout the ages have found to be important to them; and all college students can benefit in all of their courses from having mastered some skills involved in critical thinking.
Conflict and Compromise in History: Margaret Sanger Papers Project Celebrates National History Day 2008
"I merely want to point out the situtaion I found when I entered the battle. One the one hand, I found the wise men, sages, scientists, discussing birth control among themselves. But their ideas were sterile.... I might have taken up a policy of safety, sanity and conservatism--but would I have got a hearing? And as I became more conscious of the vital importance of the idea, I felt myself in the position of one who has discovered that a house is on fire; and I found that it was up to me to shout out the warning. The tone of the voice may have been indelicate and unladylike...but this very gathering...is ample proof that intelligent and constructive thought has been aroused."
Margaret Sanger, "Hotel Brevoort Speech," January 17, 1916
The theme for this year is “Conflict and Compromise in History.” Throughout her career, Margaret Sanger used conflict as a means of gaining publicity and a forum for her ideas. She fought with the government, whose laws banned birth control information from the mails. She fought with doctors, who refused to be associated with a topic many found immoral, and who, even once they did accept some responsibility for birth control, did not want to work with lay women with years of experience. But most famously, she fought with the Catholic Church, whose opposition to the legalization of birth control and the creation of birth control services formed the backdrop for much of her career.
Sanger compromised when she could, to gain greater public acceptance and win small victories along the road to greater and greater successes. But when compromise was not an option, she was willing "to wage a hard and bitter fight to the highest tribunal in the land." (Police Can't Stop Me' Says Margaret Sanger, New York Call, October 22, 1916.)
Sanger and the Comstock Act
The Federal Comstock Act of 1873, and associated laws, barred the mailing and distribution of contraceptives and contraceptive information. Starting in 1914, Sanger launched a long battle to overturn these laws, first by direct-action, or law-breaking, and then by lobbying Congress directly, and finally by seeking a new interpretation of these laws from the courts. Explain how she used conflict to gain press and public notice and whether her compromises enabled her to succeed in her goals.
Sanger and Medical Profession
In order to offer birth control services, Sanger needed to cooperate with the medical profession. But at the start of her campaign, few doctors wanted anything to do with birth control. Trace how Sanger used compromise and the judicious use of conflict to win individual physicians to her cause and eventually secure the support of the American Medical Association in 1937. One place on which to focus is Sanger's creation of a "doctor's only" legislative bill, which exempted doctors from birth control prosecution while technically leaving the subject under the obscenity statute. How did this compromise affect both the legal status of and public opinion on birth control?
Sanger and Catholic Church
For much of her career, Sanger's strongest organized opponent was the Catholic Church. She fought against their interpretations of birth control, their calls to boycott her and block her public appearances, and their efforts to quash birth control organizations, laws, and clinics. Sanger used the conflicts to get favorable press, as in 1921, when her Town Hall meeting was disrupted by the police at the behest of local Catholic officials, or in 1940, when she was banned from speaking in Holyoke, Mass. How did these conflicts advance her aims? Was compromise possible, and how might it have been effected?
MSPP Sources for Your Research:
There are a number of materials available on this web site.
Our biography of Margaret Sanger gives an outline of her life and career.
Our newsletter articles highlight some of the more interesting or unusual aspects of Sanger’s life and interests.
Sanger Documents on the World Wide Web offers links to primary source material on the internet. These documents are mounted by libraries and other scholarly institutions.
The Model Editions Partnership sponsors an online edition of The Woman Rebel, which contains all the issues of the journal as well as a wealth of information about its publication and impact.
Margaret Sanger Papers on Microfilm:
The Margaret Sanger Papers Project has compiled a microfilm edition of more than 120,000 documents related to Sanger and the birth control movement. You will need to narrow your search to identify only those that relate most closely to your topic.
The two-series microfilm published by the Sanger Project is organized in chronological order; its reel guide provides access to documents by personal and organizational name. If your topic focuses on a specific person or event, you’ll be able to locate relevant documents very quickly.
Sanger’s speeches and articles are also available on microfilm. They provide good autobiographical information as well as arguments for birth control. See our list of libraries that hold copies of the microfilm, or you may order one through interlibrary loan. Speeches and articles are located on Library of Congress reels 128-131, Smith College Collection Series reels S70-S73, and Collected Documents Series reel C16.
Sanger’s journal The Woman Rebel (1914) and the Birth Control Review (1917-1940) are also available on microfilm. The Woman Rebel is available on the Margaret Sanger Papers Project Microfilm, Collected Documents Series; the Birth Control Review was re-issued by De Capo Press in 1970 and is also available on microfilm as part of Research Publication's History of Women Collection, Reels 14-15.
Primary Published Sources:
The Project has published two of four volumes of our selected edition of Sanger’s papers. The book’s collection of letters, journal entries, speeches and other documents is accompanied by an introduction, an index, annotation and a bibliography and presents Sanger’s life and work in her own words. The second volume will appear shortly.
The Selected Papers of Margaret Sanger, Volume I: The Woman Rebel, 1900-1928. Edited by Esther Katz, with Cathy Moran Hajo and Peter C. Engelman. (Urbana, Ill.: University of Illinois Press, 2002).
The Selected Papers of Margaret Sanger, Volume II: Birth Control Comes of Age, 1928-1939. Edited by Esther Katz, with Peter C. Engelman, Cathy Moran Hajo and Amy Flanders. (Urbana, Ill.: University of Illinois Press, 2007)
Transcripts of the Congressional birth control hearings have also been published, though they may be somewhat difficult to obtain. They will provide both pro- and anti-birth control arguments.
Birth Control Hearings Before a Subcommittee of the Committee on the Judiciary, United States Senate on S.4582, Feb. 13-14, 1931 (Washington, 1931).
Birth Control Hearings Before the Committee of Ways and Means, House of Representatives on H.R. 11082, May 19-20, 1932 (Washington, 1932).
Birth Control Hearings Before the Committee on the Judiciary, House of Representatives on H.R. 5978, Jan. 18-19, 1934 (Washington, 1934).
Birth Control Hearings Before a Subcommittee of the Committee on the Judiciary, United States Senate on S.1842, Mar. 1, 20, 27, 1934 (Washington, 1934).
Newspapers such as the New York Times, the Washington Post and the Chicago Tribune all carried reports of the birth control campaign and its leaders. Check the archives of your local paper for stories about birth control activism in your area.
Selected Secondary Sources:
In addition to the general birth control histories listed on our bibliography on Margaret Sanger, some of these works will be helpful.
Janet Farrell Brodie, Contraception and Abortion in 19th Century America, 1994.
Linda Gordon, The Moral Property of Women: A History of Birth Control Politics in America, 2002.
Carol McCann, Birth Control Politics in the United States, 1916-1945, 1994.
James Reed, The Birth Control Movement in America: From Private Vice to Public Virtue, 1978.
Nancy E. McGlen, Women, politics, and American Society, 2002.
Virginia Sapiro, Women in American society: an introduction to women's studies, 2nd edition, 1990.
And take a look at these web sites, too:
Revised: May 3, 2010
(Phys.org)—Children need access to technology if they want to succeed in the 21st century with so many of the world's transactions done over the internet, says Massey Professor Mark Brown.
This time of year parents flock to stationery stores to purchase items required for the school year, and some will be asked to buy a laptop, tablet or smart device for their child.
This may seem like a lot of money to spend on a child, but Professor Brown, director of Massey's National Centre for Teaching and Learning, says it is an investment in their future.
He says purchasing a computer or tablet is important for developing your child's technology skills for future employment.
"In less than a decade people have become accustomed to downloading their music from the web, reading electronic books from Kindle and iPad-like devices, and accessing the latest news and events through online sources," he says.
"If our children are to take full advantage of the potential benefits offered by new forms of digital learning, then access to appropriate technology is essential."
A recent parliamentary inquiry into digital learning recommended that all children and teachers have appropriate access to technology.
"We have a responsibility to address the growing problem of digital exclusion. Learning through technology is one way of ensuring that we develop a more inclusive society where children develop appropriate 21st century skills."
He says both parents and teachers play an important role in ensuring children make the most of the technology available. "It's important to acknowledge that digital technology does not replace the best of conventional learning that occurs in the classroom or at home.
"The benefits of technology depends on the way children, parents and teachers choose to use it to enhance learning. When used well for educational purposes, the latest technology can help create opportunities for more active and meaningful learning experiences."
However it doesn't mean everyone in a household needs their own device.
"It's unrealistic to think that all parents and caregivers can afford the cost of the latest iPad-like device. There are benefits of sharing a common device as rich conversations can take place around the technology.
"However, there are times when you need some type of computing device to complete a piece of individual work. This is why parents and teachers are important in ensuring the best and most equitable use of the technology."
He says parents concerned about their children using social networks such as Facebook need to appreciate the role technology now plays in supporting friendships and encourage their children to include them in their network.
"Learning is inherently a social activity and rather than trying to ban children from joining such networks and playing online games where they collaborate with other players from around the world, we need to educate them, and many adults, on appropriate usage.
"Digital literacy is here to stay and if we are serious about taking advantage of the potential benefits of digital learning then we need to appropriately resource our schools and teachers."
Explore further: The digital student: E-books, tablets and even smartphones becoming classroom staples
In addition to being extremely strong and stretchy, spider silk conducts heat better than most materials, including silicon, aluminium and pure iron, mechanical engineers at Iowa State University have discovered.

Spider silk has long been the subject of scientific scrutiny, but mostly for its impressive strength. Xinwei Wang, lead researcher on the study, was keen to put speculation that spider silk would be a good thermal conductor to the test as part of a search for organic materials that can effectively transfer heat; most materials from living things are very bad at conducting heat. Wang enlisted the help of eight golden silk orbweaver spiders. They were given lodgings in an Iowa State University greenhouse and fed crickets to fuel their web-spinning.
Wang, along with colleagues Xiaopeng Huang and Guoqing Liu, found that spider silk conducts heat 1,000 times better than woven silkworm silk and 800 times better than other organic tissues. Spider silk conducts heat at a rate of 416 watts per metre Kelvin, compared with copper at 401 and skin at 0.6 watts per metre Kelvin.

Wang said: "This is very surprising because spider silk is organic material. For organic material, this is the highest ever. There are only a few materials higher -- silver and
The thermal conductivity of the spider silk also increased by 20 percent when it was stretched to its 20 percent limit. Most materials lose thermal conductivity when stretched.
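What a thermal-conductivity figure means in practice can be sketched with Fourier's law for steady-state conduction through a slab, q = kAΔT/L. The conductivities below are the values reported in the article; the cross-section, thickness, and temperature difference are made-up illustrative numbers.

```python
# Steady-state heat conduction through a slab: q = k * A * dT / L (Fourier's law).
# Conductivities in W/(m*K) are the article's figures; geometry is illustrative.
K = {"spider silk": 416.0, "copper": 401.0, "skin": 0.6}

A = 1e-6    # cross-sectional area, m^2 (illustrative)
L = 0.01    # slab thickness, m (illustrative)
dT = 10.0   # temperature difference across the slab, K (illustrative)

for name, k in K.items():
    q = k * A * dT / L   # heat flow through the slab, in watts
    print(f"{name:12s}: {q:.4f} W")
```

For the same geometry, the silk sample moves nearly 700 times more heat than skin, and slightly more than copper.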
These unusual properties are down to the defect-free molecular structure of spider silk, including proteins that contain nanocrystals and the spring-shaped structures connecting the proteins. However, more research is needed to fully understand why spider silk is so good at conducting heat.

This discovery could open a door to using spider silk to create flexible, heat-dissipating parts for electronics, better clothes for hot weather and bandages that don't trap heat.
The research is detailed in a paper in Advanced Materials, "New Secrets of Spider Silk: Exceptionally High Thermal Conductivity and its Abnormal Change under Stretching."
The Anthropology Collections require constant vigilance and care in order to preserve them for the future. The collections consist of objects made of organic and inorganic materials such as bone, ceramic, plants and leather. The Anthropology Lab enables the Museum staff to properly care for the collection, including the oldest and most fragile objects.
The Anthropology Lab is designed to help preserve and protect objects with facilities and equipment including a fume hood, movable air exhausts, compact shelving, and work space for faculty, students, staff, interns, volunteers and outside researchers. The lab keeps records dating to the early 1900s.
Researchers are currently working with the anthropology collections to better understand:
- Past climates in the Great Basin
- Origins and behaviors of prehistoric people in the region

Museum staff in the lab are responsible for:
- Labeling, photographing and cataloging incoming objects
- Stabilizing and conserving collection objects
- Maintaining records associated with objects, including loans to other researchers and institutions
- Assisting researchers studying objects in the collections
- Advising and teaching University students studying Anthropology with interests in Museum Sciences
Symptoms of Celiac Disease
How can you recognize the symptoms of celiac disease? Find out how to spot the prominent as well as less-noticeable signs in children and adults.
Celiac disease, one of the most common genetic diseases worldwide, was once thought to be a rare disease seen mainly in children. We now know that celiac disease can be seen at any age, and can have many different symptoms affecting different parts of your body.
Why do symptoms vary from person to person? This is a question that researchers are studying. Some studies suggest that the length of time you were breastfed is a factor. The age at which you are exposed to gluten and the amount of gluten you are exposed to may play a role. We know that people can have different degrees of celiac disease. Damage to the small intestines differs from person to person. Many adults have celiac disease years before they are diagnosed. According to the Celiac Disease Foundation, 97 percent of people who have celiac disease have never been diagnosed.
Celiac Disease: Common Symptoms in Children
"The classic presentation of celiac disease is a 12- to 24-month-old child who has a big belly, is failing to thrive, and is extremely irritable," says Benjamin Gold, MD, a professor of pediatrics and microbiology and director of pediatric gastroenterology at the Emory University School of Medicine in Atlanta.
Other symptoms can include:
- Diarrhea and/or constipation
- Vomiting and bloating
- Loss of appetite and weight loss
- Delayed growth and muscle loss
"We don't know why the more classic symptoms present at the youngest age groups. As blood tests have become available and we have been able to identify celiac disease in older children, adolescents, and adults, we have learned that these symptoms are not the most common way that celiac disease presents," notes Dr. Gold.
Celiac Disease: Symptoms in Adults
Adults are less likely to have the classic digestive symptoms seen in children, making the diagnosis of celiac disease more difficult. The failure to absorb important nutrients over time can cause many different symptoms in adults, including:
- Anemia from loss of iron
- Osteoporosis from loss of calcium
- Fatigue and lack of energy
- Bone and joint pain
- Anxiety and depression
- Tingling or numbness of your hands or feet
- Skin rash
- Canker sores in your mouth
Celiac Disease: Relieving Symptoms
"Although celiac disease is a lifelong condition, we see a remarkable response once we start children on a gluten-free diet," says Gold. "The extreme irritability goes away and children begin to thrive again." If you have celiac disease, the good news is that a vigilant, gluten-free diet will stop celiac symptoms in most children and adults. Improvement begins within a few days. Damage to the small intestine can start to heal within three to six months in children but may take longer in adults.
Because the symptoms are so different, celiac disease can seem like a different disease in children than in adults. If you have been diagnosed with celiac disease, a gluten-free diet is the treatment. If you have symptoms that may be celiac disease, see your doctor. The sooner you start treatment, the better chance you have of avoiding future complications.
Emergency Management Australia advises that you prepare early for the season by removing rubbish and forming a firebreak around the home, storing combustibles such as leaves and twigs clear of the house, fitting wire screens to doors, windows and vents and checking that you have appropriate insurance cover in case things do turn disastrous. There is also advice on how to cope if you are caught by bushfire while in your car or on foot.
The NSW Rural Fire Service, which worked so hard over Christmas and New Year to control ferocious bushfires burning across the state, has a wealth of information on its community safety page. Even if your property is not threatened but your district is under a total fire ban, you can find out what the restrictions are and how long they last.
The RFS has a link to the Commonwealth Scientific and Industrial Research Organisation, which also has some great information on fire and related behaviour. Then, as part of its building innovation and construction technology section CSIRO gives advice on fireproofing your house. The organisation gives an example of a "bad candidate for survival", which has dangers such as overhanging trees, leaves in the gutter, a combustible doormat and a woodheap under the house. You can also see what research CSIRO is undertaking into bushfire management and read its fire fact of the month - from "the fire triangle" (fire, air, fuel) to the characteristics of a fire front.
The State Forests of NSW has a simple summary of dangers to look out for when buying land, ways of planning a garden to minimise bushfire impact and the types of trees that are less likely to fuel a fire.
An old but good article on bushfire in Australia, including its history before white settlement and of forestry management, entitled Special Article - Bushfires - An Integral Part of Australia's Environment, can be found on the Australian Bureau of Statistics Web site (go there and then search for "bushfire"). The final chapter, The Nature of Bushfire Disasters - Past and Future, summarises the social impact of bushfire and has a table on the most significant single fires in Australia before 1995.
The NSW National Parks & Wildlife Service has a neat section headed "Living with fire" which has information on why fire can be important for the Australian landscape. You'll also find updates on park closures and fire warnings on the site, while another section on the often controversial issue of bushfire and land management explains why planned fires "are an important weapon against fire hazards".
For more information on the pros and cons of back-burning, the Nature Conservation Council of NSW has an index dedicated to ensuring "all Bush Fire Management activity is ecologically sustainable while protecting life and property".
For volunteer firefighters one of the best Australian sites is Firebreak, which began as a communication tool for conveying information to bushfire fighters in the Australian Capital Territory. It explains the basics of bushfire fighting, control and command issues and has many photographs of past bushfires in the ACT region. Under "Resources" there is a link to weather calculators and a way of assessing if your house will survive.
A stark reminder of the destructive bushfires last Christmas are satellite photographs on the NASA site, showing plumes of smoke rising from the east coast.
Copyright © 2002. The Sydney Morning Herald.
The Ordovician Period
The Ordovician Period lasted almost 45 million years, beginning 488.3 million years ago and ending 443.7 million years ago.* During this period, the area north of the tropics was almost entirely ocean, and most of the world's land was collected into the southern supercontinent Gondwana. Throughout the Ordovician, Gondwana shifted towards the South Pole and much of it was submerged underwater.
The Ordovician is best known for its diverse marine invertebrates, including graptolites, trilobites, brachiopods, and the conodonts (early vertebrates). A typical marine community consisted of these animals, plus red and green algae, primitive fish, cephalopods, corals, crinoids, and gastropods. More recently, tetrahedral spores that are similar to those of primitive land plants have been found, suggesting that plants invaded the land at this time.
From the Lower to Middle Ordovician, the Earth experienced a mild climate: the weather was warm and the atmosphere contained a lot of moisture. However, when Gondwana finally settled on the South Pole during the Upper Ordovician, massive glaciers formed, causing shallow seas to drain and sea levels to drop. This likely caused the mass extinctions that characterize the end of the Ordovician, in which 60% of all marine invertebrate genera and 25% of all families went extinct.
Ordovician strata are characterized by numerous and diverse trilobites and conodonts (phosphatic fossils with a tooth-like appearance) found in sequences of shale, limestone, dolostone, and sandstone. In addition, blastoids, bryozoans, corals, crinoids, as well as many kinds of brachiopods, snails, clams, and cephalopods appeared for the first time in the geologic record in tropical Ordovician environments. Remains of ostracoderms (jawless, armored fish) from Ordovician rocks comprise some of the oldest vertebrate fossils.
Despite the appearance of coral fossils during this time, reef ecosystems continued to be dominated by algae and sponges, and in some cases by bryozoans. However, there apparently were also periods of complete reef collapse due to global disturbances.
The major global patterns of life underwent tremendous change during the Ordovician. Shallow seas covering much of Gondwana became breeding grounds for new forms of trilobites. Many species of graptolites went extinct by the close of the period, but the first planktonic graptolites appeared.
In the late Lower Ordovician, the diversity of conodonts decreased in the North Atlantic Realm, but new lineages appeared in other regions. Seven major conodont lineages went extinct, but were replaced by nine new lineages that resulted from a major evolutionary radiation. These lineages included many new and morphologically different taxa. Sea level transgression persisted, causing the drowning of almost the entire Gondwana craton. By this time, conodonts had reached their peak development.
Although fragments of vertebrate bone and even some soft-bodied vertebrate relatives are now known from the Cambrian, the Ordovician is marked by the appearance of the oldest complete vertebrate fossils. These were jawless, armored fish informally called ostracoderms, but more correctly placed in the taxon Pteraspidomorphi. Typical Ordovician fish had large bony shields on the head, small, rod-shaped or platelike scales covering the tail, and a slitlike mouth at the anterior end of the animal. Such fossils come from nearshore marine strata of Ordovician age in Australia, South America, and western North America.
Perhaps the most "groundbreaking" occurrence of the Ordovician was the colonization of the land. Remains of early terrestrial arthropods are known from this time, as are microfossils of the cells, cuticle, and spores of early land plants.
The Ordovician was named by the British geologist Charles Lapworth in 1879. He took the name from an ancient Celtic tribe, the Ordovices, renowned for its resistance to Roman domination. For decades, the epochs and series of the Ordovician each had a type location in Britain, where their characteristic faunas could be found, but in recent years, the stratigraphy of the Ordovician has been completely reworked. Graptolites, extinct planktonic organisms, have been and still are used to correlate Ordovician strata.
Particularly good examples of Ordovician sequences are found in China (Yangtze Gorge area, Hubei Province), Western Australia (Emanuel Formation, Canning Basin), Argentina (La Chilca Formation, San Juan Province), the United States (Bear River Range, Utah), and Canada (Survey Peak Formation, Alberta). Ordovician rocks over much of these areas are typified by a considerable thickness of lime and other carbonate rocks that accumulated in shallow subtidal and intertidal environments. Quartzites are also present. Rocks formed from sediments deposited on the margins of Ordovician shelves are commonly dark, organic-rich mudstones which bear the remains of graptolites and may have thin seams of iron sulfide.
Tectonics and paleoclimate
During the Ordovician, most of the world's land (southern Europe, Africa, South America, Antarctica, and Australia) was collected together in the supercontinent Gondwana. Throughout the Ordovician, Gondwana moved towards the South Pole where it finally came to rest by the end of the period. In the Lower Ordovician, North America roughly straddled the equator and almost all of that continent lay underwater. By the Middle Ordovician, North America had shed its seas and a tectonic highland, roughly corresponding to the later Appalachian Mountains, formed along the eastern margin of the continent. Also at this time, western and central Europe were separated and located in the southern tropics; Europe shifted towards North America from higher to lower latitudes.
During the Middle Ordovician, uplifts took place in most of the areas that had been under shallow shelf seas. These uplifts are seen as the precursor to glaciation. Also during the Middle Ordovician, latitudinal plate motions appear to have taken place, including the northward drift of the Baltoscandian Plate (northern Europe). Increased sea floor spreading accompanied by volcanic activity occurred in the early Middle Ordovician. Ocean currents changed as a result of lateral continental plate motions causing the opening of the Atlantic Ocean. Sea levels underwent regression and transgression globally. Because of sea level transgression, flooding of the Gondwana craton occurred as well as regional drowning which caused carbonate sedimentation to stop.
During the Upper Ordovician, a major glaciation centered in Africa occurred resulting in a severe drop in sea level which drained nearly all craton platforms. This glaciation contributed to ecological disruption and mass extinctions. Nearly all conodonts disappeared in the North Atlantic Realm while only certain lineages became extinct in the Midcontinental Realm. Some trilobites, echinoderms, brachiopods, bryozoans, graptolites, and chitinozoans also became extinct. The Atlantic Ocean closed as Europe moved towards North America. Climatic fluctuations were extreme as glaciation continued and became more extensive. Cold climates with floating marine ice developed as the maximum glaciation was reached.
Canning Basin, Australia: A great diversity of fossil gastropods has been uncovered in the Canning Basin.
* Dates from the International Commission on Stratigraphy's International Stratigraphic Chart, 2009.
|
<urn:uuid:9c52516e-032d-4730-8eec-4cabb49680e0>
|
CC-MAIN-2013-20
|
http://www.ucmp.berkeley.edu/ordovician/ordovician.php
|
2013-05-24T08:30:46Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.953293
| 1,686
|
RTMS: Repetitive transcranial magnetic stimulation (rTMS) is a noninvasive technique that uses electromagnets to create localized electrical currents in the brain. Image: IMAGE COURTESY OF NEURONETICS, INC.
Treatment of severe depression with magnetic stimulation is moving beyond large mental health centers and into private practices nationwide, following more than two decades of research on the treatment. Yet even as concern about its efficacy fades, one potential side effect—seizures—continues to shadow the technology.
Called repetitive transcranial magnetic stimulation (rTMS), the noninvasive technique uses electromagnets to create localized electrical currents in the brain. The gentle jolts activate certain neurons, reducing symptoms in some patients. Eight psychiatrists contacted for this article, all of whom use rTMS to treat depression, say it is the most significant development in the field since the advent of antidepressant medications. The prevailing theory is that people with depression do not produce enough of certain neurotransmitters, which include serotonin and dopamine. Electricity (administered in combination with antidepressants) stimulates production of those neurotransmitters.
Scope of the problem
A National Institute of Mental Health (NIMH) study released this spring shows that 14 percent of patients with drug-resistant major depressive disorder experience a remission of symptoms after rTMS treatment compared with a control group, which reported a 5 percent rate of remission. Physicians and researchers say those results are similar to the success rate of antidepressants. No notable side effects occurred during the study, according to its authors, who include Mark George, an early rTMS researcher and a professor of psychiatry, radiology and neurosciences at the Medical University of South Carolina in Charleston. They have suggested that higher levels of electrical stimulation might attain better results.
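To get a feel for the 14 percent versus 5 percent remission comparison, here is a hedged sketch of a standard two-proportion z-test in Python. The arm sizes of 100 patients each are an assumption made purely for illustration; the article does not report group sizes, and this is not the analysis the NIMH study itself performed.

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf, then a two-sided p-value.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# 14% vs. 5% remission; arm sizes of 100 are assumed for illustration only.
z, p = two_proportion_z(14, 100, 5, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant at the conventional 0.05 level
```

Under these assumed sample sizes the gap clears the conventional significance threshold, though the true p-value depends on the actual group sizes.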
At the heart of this interest in rTMS treatment is the only such device cleared by the U.S. Food and Drug Administration (FDA). In October 2008 the government specified that Neuronetics, Inc.'s NeuroStar could be used to treat major depressive disorder that is resistant to at least one antidepressant medication. Since then, about 200 centers and clinics in the U.S. have purchased the $60,000 system, which resembles a contemporary dentist's chair with an electronics console.
The treatment joins talk, pharmaceutical and electroconvulsive therapies (the latter of which rTMS is an offshoot) as the only known methods of alleviating the debilitating symptoms of depression. Nearly 7 percent of U.S. adults, or 14.8 million people (predominantly women), are afflicted by major depressive disorder each year, according to the NIMH. In fact, the NIMH says the disorder is the leading cause of disability in the U.S. for people aged 15 to 44. George says that about half of all patients suffering from serious depression resist at least one antidepressant.
Changing brain chemistry
Unlike with electroconvulsive, or electroshock, therapy, where patients must be unconscious and administered muscle relaxants in order to prevent seizures, patients receiving rTMS (which involves trains of pulses during each session, hence the "repetitive" modifier) remain conscious and seated in outpatient settings. Highly focused magnetic pulses of up to 1.5 teslas induce an electrical current two to three centimeters deep in the left prefrontal section of the cerebral cortex. That region, which acts as an emotion modulator, appears to be underproducing neurotransmitters in depression sufferers. The rTMS pulses directly stimulate an area about the size of a quarter, although scientists are examining whether they affect other parts of the brain, too.
As with antidepressants, the electricity likely is changing the brain's chemistry, says rTMS pioneer Eric Wassermann, chief of the Brain Stimulation Unit at the National Institute of Neurological Disorders and Stroke in Bethesda, Md. He was among the first U.S. researchers to investigate rTMS as a way to alter mood.
Treatments typically occur five days a week for four to six weeks. FDA guidelines for first-time NeuroStar treatments call for 3,000 magnetic pulses delivered over 37.5 minutes (a rate considered low-frequency) by a C-shaped ferromagnetic coil held to the patient's scalp.*
*Correction (9/10/10): This sentence was edited after posting. It originally stated that the NeuroStar TMS Therapy System uses a figure 8–shaped magnetic coil.
|
<urn:uuid:08e0f386-73d1-4f18-9c8c-0802cd30dcc2>
|
CC-MAIN-2013-20
|
http://www.scientificamerican.com/article.cfm?id=transcranial-magnetic-stimulation-rtms
|
2013-05-23T19:35:56Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703728865/warc/CC-MAIN-20130516112848-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.92669
| 912
|
Valentine’s Day – The Full Circle Of Traditions
by John Culbertson
The history and importance of Valentine’s Day will become more than evident this February 14th, with widespread gestures of affection, hope and love. With so much pessimism and bad news in the world today, Valentine’s Day is a welcome opportunity to bring optimism, hope and love to the forefront of our minds. But with so much commercialism attached to many religious festivals and national days of importance, it is easy to overlook the history, meaning and importance of Valentine’s Day. How did this day originate, and why are Roman images of Cupid associated strongly with a Christian Saint, at a time of importance to Pagan traditions?
Of the many millions who will be celebrating, or taking the opportunity to make tentative moves, or even propose this February 14th, there is no doubt that many faiths will be represented, and many who have no particular faith at all. Perhaps because the concept of love is at the heart of this occasion, there is barely a soul on earth who cannot appreciate its relevance. But beyond the commercialism, the expensive cards, the hand-tied bouquets and the luxury boxes of chocolates, Valentine’s has a mixed and fascinating history, and even the truth about who Valentine really was is shrouded in much speculation and myth.
The origins of this day lie in pagan times, although Valentine’s Day has seen a remarkable transition from pagan tradition to religious festival, and from religious festival to a day of national celebration – some might argue, of commercial celebration. For most of us, wherever we live in the world, and whatever our beliefs, February 14th has a long standing reputation for being a day of romance and of love. In some cases this is seen as being the love of one person for another in a romantic way, but elsewhere Valentine’s Day is seen more as an opportunity to make small gestures towards friends.
February 14th used to be a Roman custom, very much within the pagan tradition. It was the Eve of Lupercalia, and was primarily a festival of fertility. The tradition in Rome was to hold what was effectively a lottery, where boys and girls were paired off with each other at random. Many of these pairings evolved into firm friendships and even marriages. Throughout the following days of the feast other pagan traditions of a less romantic bent would be carried out, including ritual sacrifices, bathing in blood, and whipping in the streets.
Needless to say, when the Catholic Church stepped in, many of these traditions were stopped, and although the love lottery survived, boys were paired with saints rather than girls, and this is when Saint Valentine became synonymous with the date.
It isn’t easy to be clear who exactly Valentine was, since there are eleven different Saint Valentines recorded by the Catholic Church. However, three in particular stand out, two of which are thought to be the most likely contenders. The first of these St Valentines was a Christian Priest who was executed on February 14th in 269 AD. The emperor of Rome at the time, Claudius II had passed a law banning marriage, because a significant number of men were getting married as a way to avoid the obligation which they would otherwise face of joining the army. St Valentine carried out many secret marriages however, and during the time when he was waiting to be executed, many romantic couples sent him letters in support of love over war. It was in 496 AD that Pope Gelasius decreed that February 14th should be set as a day to honour him – neatly converting the pagan traditions into a Catholic celebration.
The other possible St Valentine was also a priest who was imprisoned for helping Christians. Whilst imprisoned he fell in love with the daughter of his jailer, with his many secret notes and letters being signed ‘from your Valentine’. He was eventually beheaded.
But how did this religious celebration become so associated with romantic love, and far less associated with religion, or Saint Valentine himself? The most likely moment when romance became associated is in 1381 when Geoffrey Chaucer wrote a poem in honour of the engagement between Richard II of England and Anne of Bohemia. This poem, entitled ‘The Parliament of Fowls’ is the first known case of linking engagement, the mating season and Saint Valentine’s together, and may well have caused the increased importance and romantic connotations we see today.
Roughly a billion Valentines cards are sent each year – the vast majority of these being sent by women. The first known Valentine card was sent about 35 years after Chaucer wrote his poem. This card was sent by the Duke of Orleans to his wife during his imprisonment at the Tower of London in 1415. Another 80 years later King Henry VIII declared February 14th to be St Valentine’s Day by Royal Charter.
However, today the only religious associations still upheld are the original pagan ones, with modern day Wicca still enjoying the opportunity to celebrate love and fertility, although with rather less blood and whipping than in Roman times! The Feast of Lupercalia is still celebrated by many modern followers of Wicca, and although the sending of Valentine’s cards may not be a traditional pagan tradition, many followers of various branches of modern Wicca do send gifts, cards or tokens of affection. However, it is viewed very much as being of greater value if sent anonymously, as with charitable donations.
The Catholic church decided to remove Valentine’s Day from its liturgical calendar in 1969, since when it has no longer been recognised as a religious celebration at all – the occasion has come full circle it seems, though not without a generous helping of fascinating traditions and associations – not to mention much commercialism too.
Many of the traditions we associate with February 14th may well be pagan in origin, but there is much variety nationally, and even regionally. For example, in some parts of Europe, such as the United Kingdom and Italy, the tradition is for women to get up early, before sunrise, and watch through the window. The belief, or at least the tradition, is that the first man they see will be the man they will marry within the year. In some regions this idea has been watered down slightly, possibly due to the limit in the number of available milkmen and postmen, with the belief that the first man seen through the window will look similar to the man they will marry.
Elsewhere in Europe, such as in Denmark, snowdrops are pressed and sent to close friends. Another tradition is to send a Valentine’s Day card, but to replace the letters of the sender’s name with dots – one dot per letter of the name. Should the recipient correctly guess who the sender is from this clue, then the sender should reward their Valentine with an egg at Easter.
Surprisingly, Valentine’s Day is now widely celebrated in many parts of Asia, such as Singapore, China and South Korea. This has largely been achieved not through religious tradition, but through marketing and commercialisation, and it is of little surprise that Asians tend to spend far more on average when it comes to Valentine’s gifts than anywhere else.
Japan has developed yet another variation on the theme of Valentine’s Day, with women being expected to give chocolates to all of the men they work with. The favour is then returned a month later with another variation of Valentine’s Day being celebrated on March 14th, though this time it is the turn of the men to give chocolate to all of the women who gave them chocolate the previous month. More recently chocolate has been replaced with other less unhealthy alternatives, such as jewellery and even lingerie.
Another month later, on April 14th, a third variation is to be witnessed in South Korea, where a similar tradition to Japan has taken place in February and March, but in April anyone who was unlucky enough not to receive anything in either February or March goes to a restaurant to eat Chinese black noodles, as a way of mourning their single status! South Korea is perhaps where the concept of Valentines has been embraced most widely, with the 14th of every month of the year having some love-related significance.
So from pagan feast, to a religious celebration of a Christian martyr, to an international day when love and friendship are celebrated in style, Valentine’s Day reminds us that spring is just around the corner, good things are starting to happen, the world is becoming a warmer place, and romantic ideas blossom as optimistically as the cherry trees under which misty eyed lovers ponder their fortunes. Love may not make the world go round, but it at least makes the journey more worthwhile.
John Culbertson is a new age teacher, speaker and lecturer. He teaches and speaks on psychic development (a six-month course), psychic protection, numerology, astrology, angels, tarot and almost anything else relative to the new age field.
He is available for private psychic channeled, reiki, and spiritual coaching sessions through his web site www.mysticjohnculbertson.com
He owns the new age store Starchild: www.starchildbooks.com
He also enjoys ghost writing
Article Source: http://newagearticles.com
|
<urn:uuid:604d2a59-898a-4095-a155-bb7d7bd6fd36>
|
CC-MAIN-2013-20
|
http://www.occult-underground.com/forums/viewtopic.php?f=20&t=652&p=1029
|
2013-06-18T23:38:51Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707436332/warc/CC-MAIN-20130516123036-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.974041
| 1,913
|
The Positive Images toolkit is an educational resource for teachers, youth workers and other educators to teach young people about migration and development.
The toolkit includes ten innovative educational activities for young people aged 12 and over. It also includes a wealth of activities and case studies of actions, enabling young people to make a difference to global issues through their communities.
The toolkit incorporates four short films based on the stories of contemporary migrants and supporting powerpoint presentations.
Download the complete Positive Images toolkit here or download individual sections of the resource below.
Complete Positive Images toolkit (PDF)
Complete Positive Images toolkit (Microsoft Word)
Complete Positive Images toolkit (Microsoft Powerpoint)
Introduction and how to use the toolkit
Introduction and outline of the toolkit aims and structure.
Introduction and outline (PDF)
Introduction and outline (Microsoft Word)
What do I need to consider?
This section includes guidance for educators when delivering the activities in the educator’s guide. This includes guidance on creating a safe environment for young people and developing an awareness of migrant children in the group.
What do I need to consider (PDF)
What do I need to consider (Microsoft Word)
This section includes definitions related to migration and development, which are provided for the educator’s reference. It includes definitions of terms such as migrant, refugee, asylum seeker, migrant worker and development.
Definitions (Microsoft Word)
Theme 1: Why do people migrate?
Teaching activities for students to learn about why people migrate, poverty and development and how these concepts link to migration.
Theme 1 (PDF)
Theme 1 (Microsoft Word)
Theme 1 (Microsoft Powerpoint)
Theme 2: Who are migrants?
Teaching activities for students to learn the meaning of key terms such as refugee, asylum seeker and migrant worker. They can also learn more about who migrants are.
Theme 2 (PDF)
Theme 2 (Microsoft Word)
Theme 2 (Microsoft Powerpoint)
Theme 3: Migrant journeys
Teaching activities for students to learn about the situations that people face on their journeys and the experience of arriving in a new place.
Theme 3 (PDF)
Theme 3 (Microsoft Word)
Theme 3 (Microsoft Powerpoint)
Theme 4: Positive Images
Teaching activities for students to learn about how to recognise different perspectives on migration in the media and the positive contributions of people who migrate to their new communities.
Theme 4 (PDF)
Theme 4 (Microsoft Word)
Theme 4 (Microsoft Powerpoint)
Action planning worksheets
A set of six action planning activities for young people, aiming to support them in taking action on migration and development.
Action planning worksheets (PDF)
Action planning worksheets (Microsoft Word)
|
<urn:uuid:cba5f345-4262-44fa-8274-fb6326a19ef4>
|
CC-MAIN-2013-20
|
http://www.redcross.org.uk/What-we-do/Teaching-resources/Teaching-packages/Positive-Images
|
2013-05-22T00:09:13Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.883066
| 576
|
Part 1 of our journey from today’s New Testament back in time to Jesus looked at the problems of translations, canonicity, and finding the best copies. The next problem to crossing this gulf is textual variants. There are 400,000 differences between the thousands of New Testament copies—more differences than there are words in the New Testament. Almost all are insignificant, but thousands of meaningful differences remain.
Historians use several tools to resolve these differences:
- Criterion of Embarrassment. Of two passages, which one is more embarrassing? We can easily imagine scribes toning down a passage, but it doesn’t make sense for them to make it more embarrassing. The passage that is more embarrassing is likelier to be more authentic. For example, different copies of Mark 1:40–41 have Jesus either “moved with compassion” or “moved with anger” (for more, see the NET Bible comment on this phrase). A copyist changing compassion to anger is hard to imagine, but the opposite is quite plausible. The Criterion of Embarrassment would conclude that “moved with anger” is the likelier original reading.
- Criterion of Multiple Attestation. A claim made by multiple independent sources is preferred over one in a single source.
In addition, a contested passage in an older manuscript is preferred, the one contained in more manuscripts is preferred, and so on.
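As a rough illustration of the quantitative preferences (older and more widely attested readings score higher), here is a toy Python sketch. The dates and manuscript counts are invented for the example, and real textual criticism is far more nuanced than any scoring formula:

```python
# Toy scoring of two attested readings of a contested passage.
# The earliest-witness dates and copy counts below are invented.
witnesses = [
    {"reading": "moved with anger",      "earliest": 400, "copies": 3},
    {"reading": "moved with compassion", "earliest": 650, "copies": 40},
]

def mechanical_score(w):
    """Naive rule: earlier attestation and more copies both add weight."""
    age_bonus = 1500 - w["earliest"]     # older earliest witness, bigger bonus
    return age_bonus + 20 * w["copies"]  # sheer copy counts weighted heavily

ranked = sorted(witnesses, key=mechanical_score, reverse=True)
print(ranked[0]["reading"])
# Here the later majority reading wins on counting alone, which is exactly
# why qualitative tools like the Criterion of Embarrassment are needed too.
```

The point of the sketch is the failure mode: purely mechanical preferences can favor a late but heavily copied reading, so the qualitative criteria above carry real weight.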
Notice that these tools need multiple manuscripts to work. They ask: given two manuscripts with different versions of a particular passage, which is the more authentic one?
Consider the long ending of Mark, for example. Given a manuscript of Mark ending with verse 16:20 (version A) and a manuscript ending with 16:8 (version B), the historians’ tools can be applied to determine which is the likely older and more authentic version. But what if you don’t have multiple versions? Suppose we only had Mark version A, with no copies of B and no references to it. Scholars wouldn’t even know to ask the question!
Consider the three most famous of these embarrassing scribal additions: the long ending of Mark, the Comma Johanneum (the only explicit reference to the Trinity in the Bible), and the story of Jesus and the woman taken in adultery. Apologists will argue that these are neither embarrassing nor problems because they’ve been resolved. We know that they weren’t original. But this is true only because historians happen to be lucky enough to have competing manuscripts without these additions. For what added biblical passages do we not have correct manuscripts to make us aware of the problem?
There are consequences. Pentecostal snake handlers trust in the long ending tacked onto Mark (“In my name they will drive out demons; they will speak in new languages; they will pick up snakes with their hands, and whatever poison they drink will not harm them”). What additional nutty demands in our New Testament do we not know are inauthentic?
Of several manuscript categories, our oldest complete copies are Alexandrian manuscripts, including the Codex Sinaiticus and Codex Vaticanus mentioned in the last post. That’s not because they’re necessarily better copies but because they were preserved better. The dry conditions of Alexandria, Egypt, preserved manuscripts better than many other places where New Testament documents were kept—Asia Minor, Greece, or Italy, for example. We accept these manuscripts simply because anything that might refute them has crumbled to dust, which is not a particularly reliable foundation on which to build a portrait of the truth.
Read the first post in the series here: What Did the Original Books of the Bible Say?
Next time: The Bible’s Dark Ages
Photo credit: Wikipedia
|
<urn:uuid:6b594fe3-8fc2-41d6-88ae-8a7e12f8c9cb>
|
CC-MAIN-2013-20
|
http://crossexaminedblog.com/2012/04/19/what-did-the-original-books-of-the-bible-say-part-2/
|
2013-05-21T17:31:01Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.938589
| 775
|
Stochastic Pattern Computing: A New Computing Paradigm for AI
Pentti Kanerva, SICS
Computers have always been thought of as some kinds of brains and have even been called electronic brains. However, the first ones were made for NUMERIC COMPUTING; they were fast automatic calculators or number-crunchers. Then it was realized that computers actually manipulate symbols, giving birth to programming languages. It was also realized that the objects of computing could be not only numbers but also data structures, and so SYMBOLIC COMPUTING was born and has dominated AI ever since. In the last 10-15 years NEUROCOMPUTING has come to rival symbolic computing in cognitive science, spurring the Second AI Debate. The debate is about whether connectionist neurocomputing is sufficient for machine intelligence and cognition, and whether we need anything more than symbolic computing.
Stochastic Pattern Computing combines robustness and learning of neurocomputing with the compositionality of symbolic computing into a system that resembles numeric computing. The "numbers" that we compute with are large patterns; they are random vectors with thousands of dimensions. Such a vector can represent an object or a property or a relation or a function or a composed structure or a mapping between structures. The key ideas are that all things are represented in the same mathematical space, and that new things, or concepts, are composed recursively from existing ones. The vectors are not sectioned into fields. Instead, every component of a composed vector holds some information about every one of the constituent vectors, so that the vectors are holographic or holistic: the representation is brain-like. Computing with such vectors relies on the statistical law of large numbers, which is why the vectors are very high dimensional. The idea of stochastic pattern computing will be demonstrated with simple examples.
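The holistic-vector idea lends itself to a tiny demonstration. The following is my own illustrative sketch, not Kanerva's implementation: random bipolar vectors, element-wise multiplication to bind a role to a filler, a componentwise majority vote to superimpose bound pairs, and a normalized dot product to compare vectors. The law of large numbers in high dimensions is what makes the recovered vector stand out from noise.

```python
import random

DIM = 10_000          # very high dimension, so the law of large numbers holds
random.seed(0)

def rand_vec():
    """A random bipolar (+1/-1) hypervector."""
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    """Element-wise multiply: associates two vectors (e.g. role with filler)."""
    return [x * y for x, y in zip(a, b)]

def bundle(*vecs):
    """Componentwise majority vote: superimposes vectors into one vector."""
    return [1 if sum(col) > 0 else -1 for col in zip(*vecs)]

def sim(a, b):
    """Normalized dot product: near 0 for unrelated vectors, 1 for identical."""
    return sum(x * y for x, y in zip(a, b)) / DIM

# Compose the record {color: red, shape: square} into one holistic vector.
color, red, shape, square = (rand_vec() for _ in range(4))
record = bundle(bind(color, red), bind(shape, square))

# Binding with the 'color' role again (multiplication is self-inverse for
# +1/-1 components) recovers a vector noticeably similar to 'red'.
probe = bind(record, color)
print(sim(probe, red))     # well above chance, around 0.5
print(sim(probe, square))  # near zero: 'square' is not the filler of 'color'
```

As more constituents are bundled in, each individual similarity drops, but the high dimensionality keeps the signal far above the noise floor of unrelated random vectors.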
|
<urn:uuid:94084ec8-5135-4e3b-af87-6b81f14fe2bb>
|
CC-MAIN-2013-20
|
http://www.ercim.eu/10years/kanerva.html
|
2013-06-19T12:41:17Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.954238
| 375
|
This is the best way for you and your students to share great books with each other. You can use it as a part of a reading workshop, or just implement random book talks throughout the school year to help your students decide what to read. Below are some instructions and examples of book talks to give you an idea.
When to do a Book Talk...
How to do a Book Talk...
What it sounds like...
Book Talk of Speak by Laurie Halse Anderson
"This is one of my FAVORITE books! I would definitely give it five-stars. It's a young adult book, and it's about this freshman in high school named Melinda. She starts off school with no friends, because all of her friends in Middle School won't talk to her; they are mad at her for calling the cops on a party over the summer. So, she struggles through school, dealing with mean teachers, her annoying parents, and being an outcast for most of the year. But Melinda has a secret that she can't tell anyone... she won't even admit it to herself. It is the story of Melinda struggling to find her voice and speak out against all those who would do her harm, and those who already have.
This book is really a page-turner. I kept reading because I wanted to know what happened to her. Also, it is told from Melinda's point of view in a very authentic voice. There were some very funny parts, like when her dad ruins the turkey on Thanksgiving and when she is talking about the Marthas. I would definitely recommend reading this book because it is so inspiring and really entertaining."
As you can see, this is not too much information about the book, but it is just enough that other students interested in the topic may decide to pick it up. You may choose to do book talks in groups, for example, book talking four or five science fiction books, or a few books that relate to something you've read as a class. Or, you can have students book talk their independent reading books rather than writing a boring book report. This will buff up their public-speaking skills AND allow them to share great literature with their peers.
"A book is like a garden carried in the pocket." -- Chinese Proverb
|
<urn:uuid:05255266-b2ec-4076-b527-304bfb899312>
|
CC-MAIN-2013-20
|
http://www.tcnj.edu/~vander5/booktalks.htm
|
2013-05-24T15:43:27Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.982402
| 470
|
Myth: When black widow spiders mate, the female always kills and eats the male.
Western Black Widows
Females weigh 10-160 times as much as males, lending "weight" to the myth.
(from B.J. Kaston photo)
Click image to enlarge
Fact: This myth (which is not totally false, but very far from true) is believed even by scientists, and can be found in many ecology textbooks! It's depressing; the authors are obviously copying each other and have never actually watched black widows mate in the field.
To understand the facts about black widow mating, you must first understand that there are many different species worldwide in the black-widow group (the genus Latrodectus), and three different black widow species in the United States alone, two in the east and one in the west. These species do not all behave alike. Moreover, in the past most observations of mating took place in laboratory cages, where males could not escape.
The only known Latrodectus species in which mate cannibalism in nature is the rule, not the exception, are in the Southern Hemisphere. Of the U.S. species, mate cannibalism occurs sometimes in Latrodectus mactans, the eastern (southern) black widow, but most males survive to mate another day. In the other two U.S. black widow species, including the western black widow L. hesperus (the only species west of Kansas), mate cannibalism has never been observed in the wild!
Myth: Spiders (often deadly ones) or their eggs may lurk in human hairstyles or in bubble gum.
Fact: These older urban legends don't seem to be in wide circulation today. One dating from the days of beehive hair-dos relates that a young woman died from the bites of black widow spiderlings that had hatched inside her bouffant. There are a number of variants, including a common one where the victim is a man with an Afro hair-do. In an Australian version, the spider was a red-back rather than a black widow. A late 20th century rumor concerned a popular brand of ultra-soft bubble gum which, it was alleged, was manufactured from spider eggs.
Spiders, need I say, do not find the human body or hair a favorable site for egg-laying, and spider eggs are not so easy to harvest that any mass consumer product could be made from them. Even if black widow hatchlings could bite, the minute amount of venom they carry would not likely be very harmful; even the bites of adult females are very rarely fatal if properly treated.
More details on these tales may be found at the Urban Legends Reference Pages (which perpetuates one myth while debunking others, by listing spider legends on their insect page!).
Text © 2003, Burke Museum of Natural History & Culture, University of Washington, Box 353010, Seattle, WA 98195, USA
Photos © as credited to Spider Myths author, Rod Crawford
This page last updated 1 September, 2010
|
<urn:uuid:55f2b556-175e-400d-b75d-d093af1298fe>
|
CC-MAIN-2013-20
|
http://www.burkemuseum.org/spidermyth/myths/blackwidow.html
|
2013-05-22T08:27:59Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.925751
| 690
|
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
December 19, 1997
Explanation: The colorful planetary nebula phase of a sun-like star's life is brief. Almost in the "blink of an eye" - cosmically speaking - the star's outer layers are cast off, forming an expanding emission nebula. This nebula lasts perhaps 10 thousand years compared to a 10 billion year stellar life span. Spectacular planetary nebulae are familiar objects to both professional and amateur astronomers, but they still contain a few surprises. For instance, the lovely nebula NGC 6826, also known as the Blinking Eye Nebula, has mysterious red FLIERS seen on either side of the nebula in the Hubble Space Telescope image above. Are they also expanding outward from the central star? If so, their "bow shocks" point in the wrong direction!
Authors & editors:
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
&: Michigan Tech. U.
|
<urn:uuid:26f41c63-19f2-4de9-b508-6c7c3f864fd7>
|
CC-MAIN-2013-20
|
http://apod.nasa.gov/apod/ap971219.html
|
2013-05-19T19:43:37Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698017611/warc/CC-MAIN-20130516095337-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.903377
| 232
|
mammals. Young gharials may eat invertebrates and insects.
About 100 gharials live in the Ramganga and can be seen swimming in its deep pools or basking in the sun on its banks. These were released as part of the conservation programme for gharials. Though it has been saved from extinction, the gharial is still critically endangered. The main threats are loss of habitat (fast-flowing rivers) and nesting sites (sandbanks) due to the construction of dams and barrages, which changes the flow of water, and the depletion of prey species through exploitation of fish by humans.
The still waters of Corbett, especially the Ramganga reservoir, are home to the Mugger crocodile (Crocodylus palustris). Muggers are more general carnivores and take a variety of animals as food. Muggers are also found in Nakatal, Corbett’s only lake.
|
<urn:uuid:87676348-3a4b-43fe-9224-01bfb51236ad>
|
CC-MAIN-2013-20
|
http://www.jimcorbettnationalpark.com/corbett_det.asp?file=corbett_fauna_reptiles
|
2013-05-19T02:45:56Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.957007
| 194
|
Roundworms may harbour secrets of human wound healingPublished On: Fri, Nov 18th, 2011 | Cell Biology | By BioNews
Scientists have claimed that roundworms may be the ideal laboratory model to learn more about the complex processes involved in repairing wounds and could eventually allow them to improve the body’s response to healing skin wounds.
Andrew Chisholm and his team from the University of California, San Diego have discovered genes in the laboratory roundworm C. elegans that signal the presence of surface wounds and trigger another series of chemical reactions that allow the worms to quickly close cuts in their surfaces that would turn fatal if left unrepaired.
The scientists have reported that these two findings and a third discovery they made in the worms, involving genes that inhibit wound healing, could allow them to design ways to improve the healing of cuts and sores by possibly blocking the action of these inhibitory genes or finding ways to enhance the chemical signalling and wound healing process.
Chisholm and his postdoctoral fellow Suhong Xu took time-lapse movies of areas around the transparent worms where they punctured the skin with a needle or laser.
Then they monitored the calcium with a fluorescent protein so they could see how the calcium molecules spread from the point of injury.
They also developed genetic screens to pinpoint the specific calcium pathway or “channel” that is signalling the presence of the wound and stimulating the healing process.
“We think the channel is playing an important role in either sensing damage or responding to some other receptor that senses damage,” the Daily Mail quoted Chisholm, the lead researcher, as saying.
“Is it sensing a change in the tension of the cell? Is it sensing some kind of change in electrical potential? We don’t know,” he said.
Chisholm thinks that the lowly roundworms have a delicate surface susceptible to injury and a rapid wound response mechanism that keeps their surface wounds from being fatal.
“They have a hydrostatic skeleton in which the skin and muscles are under pressure to allow the animal to stay semi-rigid, so when you jab a worm with a needle it will, in effect, explode.
“But remarkably, they don’t die when you do that because they have evolved ways to very rapidly close wounds to survive in the wild. In their natural environment, their predators try to exploit the worm’s vulnerable exoskeleton. There are a whole group of fungi with tiny spikes that just sit around waiting for the worms to crawl over them so they can poke holes through their cuticle.
“For us, they are easy to work with, because worms are small, easy to grow and they’re transparent, so when you put them on a slide, you can see the calcium clearly,” he added.
The study will be published in the journal Current Biology.
|
<urn:uuid:429ab465-3ed6-45cb-ac0e-d86b54419b56>
|
CC-MAIN-2013-20
|
http://news.bioscholar.com/2011/11/roundworms-may-harbour-secrets-of-human-wound-healing.html
|
2013-05-22T00:55:55Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948242
| 596
|
What's something that you do all day, every day, no matter where you are or who you're with?
(a) think about what's for lunch tomorrow
(b) put your finger in your nose
(c) hum your favorite song
(d) breathe air
It's possible that some kids could say (a) or (c) or that others might even say — yikes! — (b). But every single person in the world has to say (d). Breathing air is necessary for keeping humans (and many animals) alive. And the two parts that are large and in charge when it comes to breathing? If you guessed your lungs, you're right!
Your lungs make up one of the largest organs in your body, and they work with your respiratory system to allow you to take in fresh air, get rid of stale air, and even talk. Let's take a tour of the lungs!
Locate Those Lungs
Your lungs are in your chest, and they are so large that they take up most of the space in there. You have two lungs, but they aren't the same size the way your eyes or nostrils are. Instead, the lung on the left side of your body is a bit smaller than the lung on the right. This extra space on the left leaves room for your heart.
Your lungs are protected by your rib cage, which is made up of 12 sets of ribs. These ribs are connected to your spine in your back and go around your lungs to keep them safe. Beneath the lungs is the diaphragm (say: DY-uh-fram), a dome-shaped muscle that works with your lungs to allow you to inhale (breathe in) and exhale (breathe out) air.
You can't see your lungs, but it's easy to feel them in action: Put your hands on your chest and breathe in very deeply. You will feel your chest getting slightly bigger. Now breathe out the air, and feel your chest return to its regular size. You've just felt the power of your lungs!
From the outside, lungs are pink and a bit squishy, like a sponge. But the inside contains the real lowdown on the lungs! At the bottom of the trachea (say: TRAY-kee-uh), or windpipe, there are two large tubes. These tubes are called the main stem bronchi (say: BRONG-kye), and one heads left into the left lung, while the other heads right into the right lung.
Each main stem bronchus (say: BRONG-kuss) — the name for just one of the bronchi — then branches off into tubes, or bronchi, that get smaller and even smaller still, like branches on a big tree. The tiniest tubes are called bronchioles (say: BRONG-kee-oles), and there are about 30,000 of them in each lung. Each bronchiole is about the same thickness as a hair.
At the end of each bronchiole is a special area that leads into clumps of teeny tiny air sacs called alveoli (say: al-VEE-oh-lie). There are about 600 million alveoli in your lungs and if you stretched them out, they would cover an entire tennis court. Now that's a load of alveoli! Each alveolus (say: al-VEE-oh-luss) — what we call just one of the alveoli — has a mesh-like covering of very small blood vessels called capillaries (say: CAP-ill-er-ees). These capillaries are so tiny that the cells in your blood need to line up single file just to march through them.
All About Inhaling
When you're walking your dog, cleaning your room, or spiking a volleyball, you probably don't think about inhaling (breathing in) — you've got other things on your mind! But every time you inhale air, dozens of body parts work together to help get that air in there without you ever thinking about it.
As you breathe in, your diaphragm contracts and flattens out. This allows it to move down, so your lungs have more room to grow larger as they fill up with air. "Move over, diaphragm, I'm filling up!" is what your lungs would say. And the diaphragm isn't the only part that gives your lungs the room they need. Your rib muscles also lift the ribs up and outward to give the lungs more space.
At the same time, you inhale air through your mouth and nose, and the air heads down your trachea, or windpipe. On the way down the windpipe, tiny hairs called cilia (say: SILL-ee-uh) move gently to keep mucus and dirt out of the lungs. The air then goes through the series of branches in your lungs, through the bronchi and the bronchioles.
The air finally ends up in the 600 million alveoli. As these millions of alveoli fill up with air, the lungs get bigger. Remember that experiment where you felt your lungs get larger? Well, you were really feeling the power of those awesome alveoli!
It's the alveoli that allow oxygen from the air to pass into your blood. All the cells in the body need oxygen every minute of the day. Oxygen passes through the walls of each alveolus into the tiny capillaries that surround it. The oxygen enters the blood in the tiny capillaries, hitching a ride on red blood cells and traveling through layers of blood vessels to the heart. The heart then sends the oxygenated (filled with oxygen) blood out to all the cells in the body.
Waiting to Exhale
When it's time to exhale (breathe out), everything happens in reverse: Now it's the diaphragm's turn to say, "Move it!" Your diaphragm relaxes and moves up, pushing air out of the lungs. Your rib muscles become relaxed, and your ribs move in again, creating a smaller space in your chest.
By now your cells have used the oxygen they need, and your blood is carrying carbon dioxide and other wastes that must leave your body. The blood comes back through the capillaries and the wastes enter the alveoli. Then you breathe them out in the reverse order of how they came in — the air goes through the bronchioles, out the bronchi, out the trachea, and finally out through your mouth and nose.
The air that you breathe out not only contains wastes and carbon dioxide, but it's warm, too! As air travels through your body, it picks up heat along the way. You can feel this heat by putting your hand in front of your mouth or nose as you breathe out. What is the temperature of the air that comes out of your mouth or nose?
With all this movement, you might be wondering why things don't get stuck as the lungs fill and empty! Luckily, your lungs are covered by two really slick special layers called pleural (say: PLOO-ral) membranes. These membranes are separated by a fluid that allows them to slide around easily while you inhale and exhale.
Your lungs are important for breathing . . . and also for talking! Above the trachea (windpipe) is the larynx (say: LAIR-inks), which is sometimes called the voice box. Across the voice box are two tiny ridges called vocal cords, which open and close to make sounds. When you exhale air from the lungs, it comes through the trachea and larynx and reaches the vocal cords. If the vocal cords are closed and the air flows between them, the vocal cords vibrate and a sound is made.
The amount of air you blow out from your lungs determines how loud a sound will be and how long you can make the sound. Try inhaling very deeply and saying the names of all the kids in your class — how far can you get without taking the next breath? The next time you're outside, try shouting and see what happens — shouting requires lots of air, so you'll need to breathe in more frequently than you would if you were only saying the words.
Experiment with different sounds and the air it takes to make them — when you giggle, you let out your breath in short bits, but when you burp, you let swallowed air in your stomach out in one long one! When you hiccup, it's because the diaphragm moves in a funny way that causes you to breathe in air suddenly, and that air hits your vocal cords when you're not ready.
Love Your Lungs
Your lungs are amazing. They allow you to breathe, talk to your friend, shout at a game, sing, laugh, cry, and more! And speaking of a game, your lungs even work with your brain to help you inhale and exhale a larger amount of air at a more rapid rate when you're running a mile — all without you even thinking about it once.
Keeping your lungs looking and feeling healthy is a smart idea, and the best way to keep your lungs pink and healthy is not to smoke. Smoking isn't good for any part of your body, and your lungs especially hate it. Cigarette smoke damages the cilia in the trachea so they can no longer move to keep dirt and other substances out of the lungs. Your alveoli get hurt too, because the chemicals in cigarette smoke can cause the walls of the delicate alveoli to break down, making it much harder to breathe.
Finally, cigarette smoke can damage the cells of the lungs so much that the healthy cells go away, only to be replaced by cancer cells. Lungs are normally tough and strong, but when it comes to cigarettes, they can be hurt easily — and it's often very difficult or impossible to make them better. If you need to work with chemicals in an art or shop class, be sure to wear a protective mask to keep chemical fumes from entering your lungs.
You can also show your love for your lungs by exercising! Exercise is good for every part of your body, and especially for your lungs and heart. When you take part in vigorous exercise (like biking, running, or swimming, for example), your lungs require more air to give your cells the extra oxygen they need. As you breathe more deeply and take in more air, your lungs become stronger and better at supplying your body with the air it needs to succeed. Keep your lungs healthy and they will thank you for life!
|
<urn:uuid:f6612ae2-12a2-4a53-b946-e2d414226490>
|
CC-MAIN-2013-20
|
http://kidshealth.org/PageManager.jsp?lic=175&dn=GirlsHealthDotGov&article_set=54039&cat_id=20607
|
2013-05-22T00:17:36Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955489
| 2,219
|
In the late 1960s the wider framework for and the basic structure of the North Atlantic alliance was being challenged on virtually all fronts at the same time, causing the need for a reappraisal of relationships. In the Cold War with the Soviet Union and its allies, the confrontation continued, but now it was being combined with détente, i.e. cooperation on important military, political, and economic issues. In the American-European relationship, it was obvious that Europe was striking out more on its own. Not only France, but even loyal West Germany was developing its own policy, particularly toward the Soviet Union and Eastern Europe in the form of its Ostpolitik. Western Europe had also come to count for more than it had in the early years of NATO. With British membership in the European Community, the EC was beginning to rival the United States in importance, at least economically. In Southern Europe a democratic revolution was taking place.
On the other side of the Atlantic, even the Nixon administration was talking about the decline of the US and how it would now have to cooperate with the other economic centers of the world. Such self-doubts were greatly stimulated by the American withdrawal from Vietnam and the Communist takeover of South Vietnam. Also outside of Europe, the combination of the rise of the Organization of Petroleum-Exporting Countries (OPEC) and the volatility of the Middle East highlighted a growing energy problem that was to prove quite troublesome in Atlantic relations. The rise of Japan and the Pacific rim was also beginning to redefine the role and importance of Western Europe in the world. In 1979 trade across the Pacific was to be greater than across the Atlantic.
With all these redefinitions taking place at the same time, one can easily imagine the strain they imposed on American-European relations; and, indeed, many were the quarrels and debates, on relations with the Soviet
Questia, a part of Gale, Cengage Learning. www.questia.com
Publication information: Book title: The United States and Western Europe since 1945: From "Empire" by Invitation to Transatlantic Drift. Contributors: Geir Lundestad - Author. Publisher: Oxford University Press. Place of publication: Oxford. Publication year: 2003. Page number: 168.
This material is protected by copyright and, with the exception of fair use, may not be further copied, distributed or transmitted in any form or by any means.
|
<urn:uuid:e17d1f1d-56cb-4b01-85f4-19c0c482ae87>
|
CC-MAIN-2013-20
|
http://www.questia.com/read/109851893/the-united-states-and-western-europe-since-1945
|
2013-05-19T18:35:37Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.964613
| 494
|
On This Day - 25 October 1917
Theatre definitions: Western Front comprises the Franco-German-Belgian front and any military action in Great Britain, Switzerland, Scandinavia and Holland. Eastern Front comprises the German-Russian, Austro-Russian and Austro-Romanian fronts. Southern Front comprises the Austro-Italian and Balkan (including Bulgaro-Romanian) fronts, and Dardanelles. Asiatic and Egyptian Theatres comprises Egypt, Tripoli, the Sudan, Asia Minor (including Transcaucasia), Arabia, Mesopotamia, Syria, Persia, Afghanistan, Turkestan, China, India, etc. Naval and Overseas Operations comprises operations on the seas (except where carried out in combination with troops on land) and in Colonial and Overseas theatres, America, etc. Political, etc. comprises political and internal events in all countries, including Notes, speeches, diplomatic, financial, economic and domestic matters. Source: Chronology of the War (1914-18, London; copyright expired)
Germans gain footing north of Chaume Wood (Verdun).
Further French advance on Aisne front; Filain captured; 160 guns taken since 23rd.
German attempt to consolidate on Verder Peninsula frustrated.
Italians retreat from Plezzo to south-west of Tolmino and prepare to evacuate Bainsizza Plateau.
Germans claim 30,000 prisoners and 300 guns.
Naval and Overseas Operations
German ships from Moon Sound bombard Kuno Island near Pernau (Riga).
Fall of Boselli Cabinet in Italy.
Franco-British convention for Military Service.
Sinn Fein convention in Dublin.
|
<urn:uuid:61249468-9b73-4037-8cf5-791200ddafa4>
|
CC-MAIN-2013-20
|
http://www.firstworldwar.com/onthisday/1917_10_25.htm
|
2013-05-22T07:49:26Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.868887
| 353
|
oilbird
oilbird (Steatornis caripensis), also called guácharo, nocturnal bird of South America that lives in caves and feeds on fruit, mainly the nuts of oil palms. The oilbird is an aberrant member of the order Caprimulgiformes; it comprises the family Steatornithidae. About 30 centimetres (12 inches) long, with fanlike tail and long broad wings, it is dark reddish brown, barred with black and spotted with white. It has a strong hook-tipped bill, long bristles around the wide gape, and large dark eyes.
The oilbird uses echolocation, like a bat, to find its way within the caves where it roosts and nests from Trinidad and Guyana to Bolivia. The sounds the bird emits are within the range of human hearing: bursts of astonishingly rapid clicks (as many as 250 per second). It also utters hair-raising squawks and shrieks that suggested its Spanish name, guácharo (“wailer”). At night it flies out to feed, hovering while it plucks fruit from trees.
Two to four white eggs are laid on a pad of organic matter on a ledge high up in the cave. The young, which may remain in the nest for 120 days, are fed by regurgitation until they are 70 to 100 percent heavier than adults. Indians render the squabs for an odourless oil for cooking and light; hence the bird’s popular and scientific names.
|
<urn:uuid:c678bd5b-2111-4ab9-b52c-605670159e8f>
|
CC-MAIN-2013-20
|
http://www.britannica.com/EBchecked/topic/426268/oilbird
|
2013-05-20T02:56:53Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698222543/warc/CC-MAIN-20130516095702-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.93852
| 344
|
The Clean Air Act is 40 years old. On Sept. 14, 2010, EPA Administrator Lisa Jackson will lead a day-long celebration of the anniversary. It is appropriate to celebrate past successes, but in truth the Clean Air Act cannot handle today's pollution problems, and not just those caused by greenhouse gases. EPA has found that traditional pollutants continue to harm public health, but the Clean Air Act, a statute passed in 1970 during the dawn of environmentalism, mandates an ineffective, inefficient response: a requirement that each state adopt its own plan to control emissions. Congress should replace the state plan requirement with federal market-based regulation.
The assumptions behind that 1970 scheme no longer hold true. First, Congress assumed back then that each state's pollution came almost entirely from smokestacks within that state and, on that basis, required each state to adopt a formal plan to cut pollution. Experience has shown, however, that much pollution comes from other states and even other nations. Yet, the state plan requirement remains the Clean Air Act's major program.
Second, Congress assumed in 1970 that the best way to control pollution was for the state plan to tell each big factory what to do. This top-down approach worked well enough when industries had yet to install well-known, relatively inexpensive control devices. Today it is far less obvious how to eke out further progress. To cut pollution further often requires changes in the industrial processes themselves, changes in small businesses, buildings, and other small sources, and changes based on innovation still being worked out. Regulators writing state plans today cannot know enough to pick the most effective and efficient ways to further reduce pollution. Yet, the state plan requirement continues.
Third, Congress assumed in 1970 that each state would design, implement and enforce a plan sufficient to meet federal environmental targets. The result has been an overwhelmingly complex and disappointingly ineffective program. According to a 2004 National Research Council study, the state plan requirement is "legalistic," "often frustrating" and "probably discourages innovation." It "overtaxes the limited financial and human resources available" and "draws attention and resources away from the more germane issue of ensuring progress." Yet, the state plan requirement continues.
There is an alternative to this top-down, state-by-state approach: market-based regulation. Congress adopted a market-based approach in 1990 when it placed a declining cap on total acid rain emissions from power plants. Within this cap, each plant was given an allowance to emit pollutants, along with permission to trade allowances that it did not use. This approach let the market rather than regulators decide at which plants to cut emissions and how. As a result, acid rain emissions were halved at far less than the cost of top-down regulations.
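The cost advantage described above can be seen in a toy calculation (all plants, tonnages, and costs are invented for illustration, and a frictionless, idealized allowance market is assumed). Under a fixed cap, trading shifts abatement to the plants that can cut most cheaply:

```python
# Three hypothetical plants: (baseline emissions in tons,
# marginal abatement cost in $ per ton).
plants = {
    "A": (100, 200),
    "B": (100, 500),
    "C": (100, 1200),
}
required_cut = 150   # the cap: total emissions must fall from 300 to 150 tons

# Command-and-control: every plant is ordered to cut the same 50%.
uniform_cost = sum(0.5 * e * c for e, c in plants.values())

# Idealized cap-and-trade: allowances change hands until the cheapest
# abatement is done first, up to the required total.
trade_cost, remaining = 0.0, required_cut
for e, c in sorted(plants.values(), key=lambda p: p[1]):
    cut = min(e, remaining)
    trade_cost += cut * c
    remaining -= cut
    if remaining == 0:
        break

print(uniform_cost, trade_cost)   # 95000.0 vs 45000.0
```

Both schemes hit the same cap, but in this example trading meets it at less than half the cost, which is the sense in which the acid rain program cut emissions at far less than the cost of top-down regulation.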
Barack Obama called cap-and-trade "a smarter way of controlling pollution" than top-down regulation, but the state plan requirement places many obstacles in the way of national trading of emission allowances. Under the Clinton and Bush II administrations, EPA worked around the state plan requirement by adopting national regulations that impose market-based approaches to interstate pollution problems. But the state plan requirement got in the way. A federal court in July 2008 invalidated the EPA's market-based Clean Air Act Interstate Rule, to the consternation of industry, environmentalists and states. EPA in July 2010 proposed a new rule to replace the market-based approach invalidated by the court. The result: a collapse of the price of allowances to emit sulfur from more than $300 to $5, which wrecked the market incentive to cut emissions.
The Clean Air Act has achieved its greatest successes without state plans. These include making new cars 99% cleaner than they were a half century ago and eliminating lead from gasoline as well as halving acid rain. These successes share three features that the state plan requirement lacks: (1) direct federal regulation, (2) flexibility on how to cut pollution provided through market-based approaches that allow wide choice, and (3) an overt political decision by Congress on how much to cut pollution and who bears the burden. A new Clean Air Act in which Congress replaces the state plan requirement with a federal market-based regulation would bring more health protection at less cost. How to restructure the Clean Air Act to protect health across the country is spelled out in the report of the New York Law School-NYU School of Law project, Breaking the Logjam.
The time to celebrate will come when the Clean Air Act is itself reformed to make it capable of dealing with today's challenges. For Congress to take on this job, EPA will need to show some leadership.
David Schoenbrod is a visiting scholar at AEI.
|
<urn:uuid:53cd07ff-86b5-4075-bd64-610872466f18>
|
CC-MAIN-2013-20
|
http://aei.org/article/energy-and-the-environment/the-clean-air-act-is-in-no-shape-to-be-celebrated/
|
2013-05-19T02:23:46Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00050-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948362
| 955
|
Lateral vibrations can control friction at the nanoscale, researchers reported in the 1 July 2005 issue of Physical Review Letters.
The researchers modeled a tip interacting with a substrate that vibrates in the lateral direction, and showed that vibrations at the correct frequency and amplitude can dramatically reduce friction, and can even make it possible to transform stick-slip motion to smooth sliding.
Previous studies have suggested controlling friction with normal (perpendicular) vibrations; this paper adds another method scientists could potentially use to reduce friction. The authors also suggest experiments to test the effects they predict.
Being able to control friction in this way may be useful for micromechanical devices and computer disk drives, where friction may cause unwanted stick-slip motion or damage to the device.
Z. Tshiprut, A. E. Filippov, and M. Urbakh
Phys. Rev. Lett. 95, 016101 (2005)
Tuning Diffusion and Friction in Microscopic Contacts By Mechanical Excitations
We demonstrate that lateral vibrations of a substrate can dramatically increase surface diffusivity and mobility and reduce friction at the nanoscale. Dilatancy is shown to play an essential role in the dynamics of a nanometer-size tip which interacts with a vibrating surface. We find an abrupt dilatancy transition from the state with a small tip-surface separation to the state with a large separation as the vibration frequency increases. Atomic force microscopy experiments are suggested which can test the predicted effects.
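The stick-slip-to-sliding transition described above lends itself to a small numerical sketch. The code below is not the authors' model — it is a minimal overdamped Prandtl-Tomlinson toy in which a spring-driven tip crosses a sinusoidal substrate potential whose lateral position oscillates, and every parameter value is an illustrative assumption chosen so that stick-slip appears without vibration.

```python
import math

def mean_friction(a=0.0, omega=100.0, U0=1.0, b=1.0, k=1.0,
                  gamma=1.0, v=0.05, T=200.0, dt=0.0005):
    """Overdamped Prandtl-Tomlinson toy model: a tip pulled by a spring
    (stiffness k, stage velocity v) across a sinusoidal substrate
    potential of corrugation U0 and period b, while the substrate
    vibrates laterally as x_s = a*sin(omega*t).  Returns the
    time-averaged spring force, a proxy for kinetic friction."""
    two_pi = 2.0 * math.pi
    x, f_sum = 0.0, 0.0
    steps = int(T / dt)
    for i in range(steps):
        t = i * dt
        x_s = a * math.sin(omega * t)               # lateral substrate vibration
        f_spring = k * (v * t - x)                  # drive spring acting on the tip
        f_sub = -(two_pi * U0 / b) * math.sin(two_pi * (x - x_s) / b)
        x += dt * (f_spring + f_sub) / gamma        # forward-Euler overdamped step
        f_sum += f_spring
    return f_sum / steps

# Without vibration the tip sticks and slips and the mean drag is large;
# with the vibration amplitude chosen so 2*pi*a/b hits the first zero of
# the Bessel function J0 (~2.405), fast vibration averages the corrugation
# away and the mean drag drops.
f_still = mean_friction(a=0.0)
f_shaken = mean_friction(a=2.405 / (2.0 * math.pi))
```

In the fast-vibration limit the time-averaged corrugation scales with J0(2πa/b), so at the amplitude used for `f_shaken` the effective potential nearly vanishes and the tip slides smoothly — a rough numerical echo of the transition from stick-slip to smooth sliding reported in the paper.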
|
<urn:uuid:b594dbee-e612-493a-ad4d-ca712ae5fa1e>
|
CC-MAIN-2013-20
|
http://phys.org/news5030.html
|
2013-05-19T02:52:42Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00052-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.880655
| 319
|
by Tan Copsey
The need for humans to adapt to climate change is clear, but the where, when and how remain fuzzy. Tan Copsey reports on existing – and hotly debated – plans for financing change in vulnerable countries.
The climate is changing and human beings will need to change with it. People will have to adapt to floods, droughts, disease, increasingly severe weather events and disrupted water and food supplies. But some of those facing these threats have limited capacity to respond. International finance to help vulnerable nations adapt to climate change is therefore hugely important.
But who accesses these funds and how they access them are already hotly contested issues. At the heart of this debate are differing views over the status of adaptation. Should it be considered aid or reparations for past wrongs? More prosaically, can adaptation actually work, or will it fall prey to the types of problems that have hindered development aid and international attempts to reduce greenhouse-gas emissions?
At global climate-change negotiations in Copenhagen last year, it was agreed that developed nations would provide US$30 billion (201 billion yuan) in new funding for developing nations between 2010 and 2012, and the money would be split between helping people adapt to climate change and financing projects to alleviate its effects. A further US$100 billion (666 billion yuan) would be found from public and private sources between 2012 and 2020, and the most vulnerable nations given priority access to funds for adaptation.
However, implementing these promises has proved difficult. For a start, developing nations have expressed concerns that these funds are not going to be “new and additional” as promised, and that money may be counted as both adaptation finance and development aid. In November, a high-level panel assembled by United Nations secretary-general Ban Ki-moon will issue recommendations on how to find new money. Norwegian prime minister Jens Stoltenberg, who co-chaired a panel advising the UN on the issue, said in October that it is “challenging but feasible, achievable to raise the $100 billion”, further noting that “carbon pricing” in the developed world would need to be part of the solution.
Another fault-line is emerging over definitions of vulnerability. The Copenhagen Accord states that “funding for adaptation will be prioritized for the most vulnerable developing countries, such as the least developed countries, small island developing states and Africa.” But exactly who qualifies remains a point of contention. Pakistan, for example, beset by terrible floods that may have been exacerbated by climate change, is requesting access to adaptation funds. Pakistan does not fit the definition but, facing huge shortfalls in aid to flood-hit regions, is seeking to contest and expand it.
Linda Siegele, a lawyer at the UK-based Foundation for International Environmental Law and Development (FIELD) noted that this is a “difficult and divisive issue right now among developing nations. There is a huge fear that there is a limited pie and that everyone wants a piece of it. I don’t think that’s a realistic picture. There has to be some priority setting.”
Negotiations over how the money would be disbursed also stalled at Copenhagen [PDF]. Existing multilateral entities like the World Bank seemed well placed to channel funds. Some donor nations were also keen to provide funds directly on a bilateral basis. But many developing nations rejected these ideas, arguing that institutions like the World Bank had a bad track record in providing finance to developing nations, in part because these institutions were controlled and administered by developed nations. Bilateral funding would also lead to uneven outcomes, with some nations favoured while others missed out, they said.
The key point here is that adaptation finance is not aid. Existing processes were designed to provide grants and loans from developed nations. Sven Harmeling, an expert on climate change adaptation at German NGO Germanwatch, notes that “developing countries are entitled to receive adaptation funds because of the harm done by (developed country) greenhouse-gas emissions.” As such they have a moral case that they should have a say over how money is provided and spent. Funding for adaptation is not granted but owed.
An existing United Nations institution – the Adaptation Fund – might be part of the solution to this impasse. The fund uses an approach known as “direct access”, where a developing country can nominate a national institution to receive resources. This institution is then responsible for overseeing and reporting on how the funds are used. This gives recipient countries a larger say in how the funds are spent. More practically, Harmeling notes “an international fund alone cannot decide on hundreds or thousands of projects. It wouldn’t know the local situations, while national entities are better placed to evaluate projects.”
Developing nations can still access funds through accredited multilateral institutions, including the World Bank, if they choose. Ultimately it is likely that the innovative financial architecture provided by the adaptation fund will play a role, but it will almost certainly be alongside traditional multilateral institutions like the World Bank and bilateral finance.
The Adaptation Fund draws money from a 2% levy on the Clean Development Mechanism along with direct donations, ranging from a 45 million euro contribution (US$63 million) from Spain to a 100 euro (US$140) donation from European schoolchildren. In June, projects from Nicaragua, Pakistan, Senegal and the Solomon Islands received funding for adaptation projects to combat sea-level rise, deal with droughts and floods and reduce risks associated with glacial-lake outbursts.
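The fund's main revenue stream is simple arithmetic: a fixed share of Clean Development Mechanism proceeds. A minimal sketch of that levy follows; the proceeds figure in the example is hypothetical, not an actual CDM volume.

```python
def adaptation_fund_share(cdm_proceeds_usd, levy_rate=0.02):
    """The 2% levy on Clean Development Mechanism proceeds
    that feeds the Adaptation Fund, per the article above."""
    return cdm_proceeds_usd * levy_rate

# 2% of a hypothetical $500 million in CDM proceeds:
print(adaptation_fund_share(500_000_000))
```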
The project in the Solomon Islands illustrates just how much planning and work adaptation involves. Improving the resilience of the country’s infrastructure in the face of climate threats includes everything from the construction of new sea-walls in order to keep out rising seas to improving the design of the airport to better cope with huge storms and facilitate subsequent relief efforts.
Money is also needed to complete “community vulnerability and adaptation assessments”, which cover tricky issues including relocation of peoples and land rights. For the country to adapt successfully, land and property will need to be provided for internally displaced people. Developing legal frameworks and strengthening governance structures that facilitate this process and prevent disputes is therefore a crucial part of adapting to climate change in the Solomon Islands.
Strong local institutions and legal structures are also crucial to donor nations. Where institutions are weak, the possibility of corruption increases. There is a risk that efforts to finance adaptation will be undermined by failed projects and misappropriation of funds. Harmeling believes donor nations should have realistic expectations: “It is not likely that 100% of adaptation projects will be 100% successful. But this is a chance to show that developing countries are able to work through a structure with more overall responsibility.”
To minimise the risk of failure, he suggests that “it is important not to scale up too fast. Experience from development programmes shows that some caution and trial and error is required to implement projects. Adaptation has to be done carefully.”
It makes sense that time is spent putting in place structures and institutions to distribute money and agree a fair definition of who should get first access. This may mean that not all of the fast-start finance pledged for adaptation will be spent by 2012. But it is surely better to have positive examples to learn from than to rush blindly to spend set quantities of money ahead of artificial deadlines. As the climate changes, adaptation is only going to become more important. It is vital that mistakes are minimized at the start of a process that may be centuries long.
Tan Copsey is development manager at chinadialogue.
Image from BBC World Service, “Bangladesh Boat”, shows a Bangladeshi village after a cyclone.
This post originally appeared on chinadialogue.
|
<urn:uuid:fd69ed8e-ccb2-45a7-9e1e-ed185b41a9ef>
|
CC-MAIN-2013-20
|
http://www.worldchanging.com/archives/011680.html
|
2013-05-22T22:12:10Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702452567/warc/CC-MAIN-20130516110732-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.958677
| 1,593
|
In 2007, James Watson eyed his genome for the very first time. Through more than 50 years of scientific and technological advancement, Watson saw the chemical structure he once helped unravel now fused into a personal genetic landscape laid out before him.
Yet there was one small stretch of DNA on chromosome 19 that he preferred to leave uncovered: the region coding for the apolipoprotein E gene. APOE, as it’s called, has been a telling genetic landmark of Alzheimer’s risk, strongly correlated with the disease since the early 90s. Watson’s grandmother suffered from Alzheimer’s, and without any reasonable treatments or suitable preventive strategies, the father of DNA decided the information was too volatile, its revelation creating more potential harm than good.
Watson’s apprehension was warranted. Treatments for Alzheimer’s Disease have consistently failed, sometimes miserably. But as we learn more and more about the brain, it has become apparent that genetics alone rarely dictate the course of disease. Instead, brain disorders result from a complex interaction of our genes and the environments to which we’re exposed. And now, a recent wave of research has unveiled another player in the genesis of neurodegenerative disease: stress.
While scientists have already catalogued the effect of our surroundings and environment on psychological conditions – including depression and anxiety disorders – new studies suggest that stress may also figure into the complex equation that determines if someone will develop a neurodegenerative disease or not. Because stress can be mitigated through lifestyle changes, people may finally gain some control over these devastating, and feared, illnesses.
Since Alois Alzheimer first noted his clinical findings of “presenile dementia” in a patient at the turn of the twentieth century, doctors have continually observed that the disease tends to run in families. But it wasn’t until the early 90s that a team led by Margaret Pericak-Vance, then a researcher at Duke University Medical School, uncovered the genetic link to Alzheimer’s Disease. By extracting DNA from circulating lymphoblasts, Pericak-Vance and colleagues were able to correlate Alzheimer’s Disease to variations of the APOE gene on chromosome 19.
Around the same time, another group of researchers at Duke University’s Department of Psychiatry and Behavioral Science, led by Brenda Plassman, started a series of experiments to see if non-genetic factors contributed to Alzheimer’s. They wondered: could a person’s environment also affect whether or not they’d acquire the disease?
|
<urn:uuid:0f48fa8a-740a-4745-bb9b-5047986c5764>
|
CC-MAIN-2013-20
|
http://www.scientificamerican.com/article.cfm?id=neurostress-how-stress-ma
|
2013-05-19T19:30:07Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698017611/warc/CC-MAIN-20130516095337-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948704
| 527
|
Individual differences |
Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology |
Procrastination is a type of behavior which is characterized by deferment of actions or tasks to a later time. Psychologists often cite procrastination as a mechanism for coping with the anxiety associated with starting or completing any task or decision. Psychology researchers also have three criteria they use to categorize procrastination. For a behavior to be classified as procrastination, it must be counterproductive, needless, and delaying.
For an individual, procrastination may result in stress, a sense of guilt, the loss of personal productivity, the creation of crisis and the disapproval of others for not fulfilling one's responsibilities or commitments. These combined feelings can promote further procrastination. While it is normal for people to procrastinate to some degree, it becomes a problem when it impedes normal functioning. Chronic procrastination may be a sign of an underlying psychological or physiological disorder.
The word itself comes from the Latin word procrastinatus: pro- (forward) and crastinus (of tomorrow). The term's first known appearance was in Edward Hall's The Union of the Noble and Illustre Famelies of Lancastre and York, first published sometime before 1548. Early usage reflected procrastination's connection at the time to task avoidance or delay, volition or will, and sin.
Causes of procrastination
The psychological causes of procrastination vary greatly, but generally surround issues of anxiety, low sense of self-worth, and a self-defeating mentality. Procrastinators are also thought to have a lower-than-normal level of conscientiousness, more based on the "dreams and wishes" of perfection or achievement in contrast to a realistic appreciation of their obligations and potential.
Author David Allen brings up two major psychological causes of procrastination at work and in life which are related to anxiety, not laziness. The first category comprises things too small to worry about, tasks that are an annoying interruption in the flow of things, and for which there are low-impact workarounds; an example might be organizing a messy room. The second category comprises things too big to control, tasks that a person might fear, or for which the implications might have a great impact on a person's life; an example might be the adult children of a deteriorating elderly parent deciding what living arrangement would be best.
From the behavioral psychology point of view, James Mazur has said that procrastination is a particular case of "impulsiveness" as opposed to self-control. Mazur states that procrastination occurs because of a temporal discounting of a punisher, just as happens with the temporal discounting of a reinforcer. Procrastination, then, as Mazur says, happens when a choice has to be made between a larger later task and a smaller sooner task; as the absolute value of the later task is discounted over time, a subject tends to defer it.
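Mazur's account can be made concrete with his hyperbolic discounting formula, V = A / (1 + kD), where A is the size (here, aversiveness) of a task and D its delay. The task sizes and discount rate k below are made-up illustrative numbers, not values from Mazur's work.

```python
def discounted_value(amount, delay, k=1.0):
    """Hyperbolic discounting: subjective value V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

# A large task (aversiveness 10) due far in the future is discounted
# below a small task (aversiveness 2) available right now, so the
# large task gets deferred -- the discounting signature of procrastination.
big_task_now = discounted_value(10.0, delay=0)      # 10.0
big_task_later = discounted_value(10.0, delay=10)   # ~0.91
small_task_now = discounted_value(2.0, delay=0)     # 2.0
```

Because the large task's discounted aversiveness (~0.91) falls below the small task's (2.0), the subject attends to the small immediate task and puts off the large one — until the deadline nears, the discount shrinks, and the large task suddenly looms largest.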
Research on the physiological roots of procrastination mostly surrounds the role of the prefrontal cortex. This area of the brain is responsible for executive brain functions such as planning, impulse control, attention, and acts as a filter by decreasing distracting stimuli from other brain regions. Damage or low activation in this area can reduce an individual's ability to filter out distracting stimuli, ultimately resulting in poorer organization, a loss of attention and increased procrastination. This is similar to the prefrontal lobe's role in attention-deficit hyperactivity disorder (ADHD), where underactivation is common.
Procrastination and mental health
Procrastination can be a persistent and debilitating disorder in some people, causing significant psychological disability and dysfunction. These individuals may actually be suffering from an underlying mental health problem such as depression or ADHD.
While procrastination is a behavioral condition, these underlying mental health disorders can be treated with medication and/or therapy. Therapy can be a useful tool in helping an individual learn new behaviors, overcome fears and anxieties, and achieve an improved quality of life. Thus it is important for people who chronically struggle with debilitating procrastination to see a trained therapist or psychiatrist to see if an underlying mental health issue may be present.
Traditionally, procrastination has been associated with perfectionism, a tendency to negatively evaluate outcomes and one's own performance, intense fear and avoidance of evaluation of one's abilities by others, heightened social self-consciousness and anxiety, recurrent low mood, and workaholism. Slaney (1996) found that adaptive perfectionists (when perfectionism is egosyntonic) were less likely to procrastinate than non-perfectionists, while maladaptive perfectionists (people who saw their perfectionism as a problem; i.e., when perfectionism is egodystonic) had high levels of procrastination (and also of anxiety).
While academic procrastination is not a special type of procrastination, procrastination is thought to be particularly prevalent in the academic setting, where students are required to meet deadlines for assignments and tests in an environment full of events and activities which compete for the students' time and attention. More specifically, a 1992 study showed that "52% of surveyed students indicated having a moderate to high need for help concerning procrastination".
Some students struggle with procrastination due to a lack of time management or study skills, stress, or feeling overwhelmed with their work.
- Main article: Student syndrome
Student syndrome refers to the phenomenon that many students will begin to fully apply themselves to a task only just before a deadline. This leads to wasting any buffers built into individual task duration estimates. The term originated in Eliyahu M. Goldratt's novel-style book Critical Chain. The principle is also addressed in Agile Software Development.
Types of procrastinators
The relaxed type
The relaxed type of procrastinators view their responsibilities negatively and avoid them by directing energy into other tasks. It is common, for example, for relaxed type procrastinating children to abandon schoolwork but not their social lives. Students often see projects as a whole rather than breaking them into smaller parts. This type of procrastination is a form of denial or cover-up; therefore, typically no help is being sought. Furthermore, they are also unable to defer gratification. The procrastinator avoids situations that would cause displeasure, indulging instead in more enjoyable activities. In Freudian terms, such procrastinators refuse to renounce the pleasure principle, instead sacrificing the reality principle. They may not appear to be worried about work and deadlines, but this is simply an evasion of the work that needs to be completed.
The tense-afraid type
The tense-afraid type of procrastinator usually feels overwhelmed with pressure, unrealistic about time, uncertain about goals and many other negative feelings. Feeling that they lack the ability or focus to successfully complete their work, they tell themselves that they need to unwind and relax, that it's better to take it easy for the afternoon, for example, and start afresh in the morning. They usually have grandiose plans that aren't realistic. Their 'relaxing' is often temporary and ineffective, and leads to even more stress as time runs out, deadlines approach and the person feels increasingly guilty and apprehensive. This behavior becomes a cycle of failure and delay, as plans and goals are put off, penciled into the following day or week in the diary again and again. It can also have a debilitating effect on their personal lives and relationships. Since they are uncertain about their goals, they often feel awkward with people who appear confident and goal-oriented, which can lead to depression. Tense-afraid procrastinators often withdraw from social life, avoiding contact even with close friends.
Stigma and misunderstanding
Procrastinators often have great difficulty in seeking help, or finding an understanding source of support, due to the stigma and profound misunderstanding surrounding extreme forms of procrastination. One of the symptoms, known to psychologists as task-aversiveness, is often mischaracterised simply as laziness, a lack of willpower or loss of ambition.
- Academic procrastination
- Attention-deficit hyperactivity disorder
- Analysis paralysis
- Deferred gratification
- Getting Things Done
- Temporal discounting
- Time management
- Parkinson's Law
- ↑ Fiore, Neil A (2006). The Now Habit: A Strategic Program for Overcoming Procrastination and Enjoying Guilt- Free Play, New York: Penguin Group. p. 5
- ↑ Schraw, G., Wadkins, T., & Olafson, L. (2007). Doing the things we do: A grounded theory of academic procrastination [Electronic version]. Journal of Educational Psychology, Vol 99(1), 12-25.
- ↑ Procrastination. Oxford English Dictionary, 2nd edition (1989).
- ↑ * Burka, Yuen (1983, 2008). Procrastination: Why You Do It, What To Do About It Now, New York: Da Capo Lifelong Books.
- ↑ 5.0 5.1 Strub, R. L. (1989). Frontal lobe syndrome in a patient with bilateral globus pallidus lesions. Archives of Neurology 46, 1024-1027.
- ↑ Yellowlees, P.M., Marks, S. (2007). Problematic Internet use or Internet addiction?. Computers in Human Behavior 23 (3): 1447–1453.
- ↑ McGarvey, Jason A. (1996) The Almost Perfect Definition
- ↑ R P Gallagher, S Borg, A Golin and K Kelleher (1992), Journal of College Student Development, 33(4), 301-10.
- ↑ 9.0 9.1 Procrastination, How to Stop Procrastinating
- ↑ Steel, Piers. The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. American Psychological Association. Psychological Bulletin, Vol 133(1). URL accessed on 5 February 2009.
- 73 essential ideas for stopping procrastination and personal growth
- Procrastination Central - A scientific summary of procrastination
- CalPoly - Procrastination - analysis of procrastinating behavior and possible cures
- Article regarding studies on procrastination
- Article about a possible cure for procrastination
- Psychological Self-Help - Another scientific summary of procrastination and methods to address the issue
- Stop Procrastinating - A 21 day program designed by psychologists to break the habit of procrastination
- The Procrastination Blog - Useful articles and information about procrastination
|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
|
<urn:uuid:e429c912-cbe5-41c4-b051-50cfec2e3abd>
|
CC-MAIN-2013-20
|
http://psychology.wikia.com/wiki/Procrastination?direction=prev&oldid=144760
|
2013-05-26T03:04:10Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00003-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.910137
| 2,252
|
Students will gather as much information as possible from the sculpture of Rosa Parks before reading about her role in the Montgomery County Bus Boycott. They will bring the lesson to life by role playing and participating in skits.
Students will be able to:
- Describe a sculpture.
- Differentiate between realistic and distorted features of the sculpture.
- Identify clues to Rosa Parks’s story in the sculpture.
- List questions left unanswered by the sculpture.
- Use other sources of information to find answers to these questions.
- Use what they learn about Rosa Parks, the Montgomery County Bus Boycott, and the Civil Rights Movement to produce original illustrations and skits.
- Develop their ability to use nonviolent techniques for resolving interpersonal problems through role playing.
- Reproduction of Marshall D. Rumbaugh’s Rosa Parks
- Opaque projector
- Photograph of Rosa Parks
- Handout 5: “Chronology of Rosa Parks’s Arrest and the Montgomery County Bus Boycott”
|
<urn:uuid:c3a3fdde-dc98-4f04-9916-3817c45bc76f>
|
CC-MAIN-2013-20
|
http://smithsoniansource.org/display/lessonplan/viewdetails.aspx?TopicId=1032&LessonPlanId=1003
|
2013-05-21T10:19:59Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.881917
| 212
|
Earth from Space: Deep South Delta
This Landsat image of 3 October 2011 shows the Mississippi River Delta, where the largest river in the United States empties into the Gulf of Mexico. In this false-colour image, land vegetation appears pink, while the sediment in the surrounding waters is bright blue and green. The delta is known as the ‘bird-foot’ delta because of the shape created by the channels extending outward.
The Mississippi River Delta built up over millions of years through sediment deposition. The tons of sediment carried by the river system created the wetlands of southern Louisiana, which are home to many endangered species and help to protect the mainland from hurricane winds by acting like speed bumps.
Over the last several decades, however, the delta’s sediment load has been drastically reduced by natural and man-made factors. Extensive oil and gas extraction causes the subsidence of the delta and wetlands, and rising sea levels increase erosion as the fresh water vegetation dies due to the influx of salt water.
|
<urn:uuid:00b2acb1-7767-4661-97f8-7ed951079ea3>
|
CC-MAIN-2013-20
|
http://insidious-intent.tumblr.com/tagged/river
|
2013-05-25T05:51:25Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.949181
| 208
|
Nutrition & Education
Discover the Power of Protein in the Land of Lean Beef!
You can feel good about loving beef because the protein in beef is a powerful nutrient that strengthens and sustains the body. A substantial body of evidence shows protein can help in maintaining a healthy weight, building muscle and fueling physical activity – all of which play an important role in a healthful lifestyle and disease prevention. There are 29 cuts of beef that meet government guidelines for lean, with less than 10 grams of total fat, 4.5 grams or less of saturated fat, and less than 95 milligrams of cholesterol per serving and per 100 grams. It’s easy for people to “go lean with protein” and follow the U.S. Dietary Guidelines.
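The lean thresholds quoted above can be expressed as a simple check. The `is_lean` helper below is hypothetical, not part of any official tool, and takes nutrient figures per serving (the same limits apply per 100 g under the guidelines as stated).

```python
def is_lean(total_fat_g, sat_fat_g, cholesterol_mg):
    """Government 'lean' criteria quoted above: less than 10 g total fat,
    4.5 g or less saturated fat, and less than 95 mg cholesterol,
    per serving and per 100 g."""
    return (total_fat_g < 10.0
            and sat_fat_g <= 4.5
            and cholesterol_mg < 95.0)

# e.g. a hypothetical cut at 4.9 g fat, 1.9 g sat fat, 60 mg cholesterol:
print(is_lean(4.9, 1.9, 60))    # True
print(is_lean(12.0, 5.0, 60))   # False: over the total-fat limit
```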
You can learn more about beef and beef’s nutrition with our educational resources for the classroom and health professionals.
Beef for the Classroom
Nutrition Resources for Health Professionals & Registered Dietitians
Educational Resources and Downloads
|
<urn:uuid:4bc17480-6df2-43a3-bbcd-3451795d6b20>
|
CC-MAIN-2013-20
|
http://www.wabeef.org/nutritioneducation.aspx
|
2013-05-21T18:19:26Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700380063/warc/CC-MAIN-20130516103300-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.916827
| 204
|
George Eliot is the pen name of Mary Anne Evans (22 November 1819 - 22 December 1880), who was an English novelist. She was one of the leading writers of the Victorian era. Her novels, largely set in provincial England, are well known for their realism and psychological perspicacity.
She used a male pen name, she said, to ensure that her works were taken seriously. Female authors published freely under their own names, but Eliot wanted to ensure that she was not seen as merely a writer of romances. An additional factor may have been a desire to shield her private life from public scrutiny and to prevent scandals attending her relationship with the married George Henry Lewes.
|
<urn:uuid:d3046d33-389c-4d85-82f0-c59b9214a3ea>
|
CC-MAIN-2013-20
|
http://quotationsbook.com/quote/35012/
|
2013-05-18T18:10:01Z
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00002-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.987213
| 137
|
William Flinders Petrie, Father of Pots
By Marie Parsons
In the words of James Baikie, author of the book A Century of Excavation in the Land of the Pharaohs, "if the name of any one man must be associated with modern excavation as that of the chief begetter of its principles and methods, it must be the name of Professor Sir W.M. Flinders Petrie. It was he who first called the attention of modern excavators to the importance of "unconsidered trifles" as means for the construction of the past: the broken earthenware of a people may be of far greater value than its most gigantic monuments."
William Matthew Flinders Petrie was the grandson of the first man to chart Australia. When he was four Petrie became so ill his mother became convinced that he was a weak child. Since she was a scholar herself, she taught him at home and introduced him to Hebrew, Latin and Greek. Later on, he was taught by a governess, but when he became ill again, his official education effectively ended.
Petrie had an inquisitive mind and developed an insatiable appetite for facts, toying with mathematics, discovering geometry and Euclid and devising chemical experiments at the age of 15. His father, an industrial engineer, taught him the use of a sextant and how to map sites, so by the time he was 18 Petrie spent days alone making surveys around his home. He wrote his first book at the age of 22 on the recovery of Ancient Measurements from Monuments, based on work he had done at Stonehenge.
In 1867 Petrie read with interest books written by a family friend, Charles Piazzi-Smyth, the Scottish Astronomer Royal, on the Great Pyramid of Giza, whose measurements, the author swore, epitomized all mathematical and astronomical knowledge, past, present and future. Petrie wrote to him that "pi" must have been used in calculating the pyramid.
Between 1880 and 1882, Petrie went to Egypt to confirm those results, since the book had been heavily criticized. He traveled to Giza and the Great Pyramids, Saqqara, Dahshur and the Bent Pyramid, and Abu Rawash, exploring the pyramids' interiors and measuring and triangulating. Petrie also walked through the Theban tombs behind the temple of Medinet Habu. He returned again to Giza, measuring the thickness of the sides and base of the royal sarcophagus and of the inside floor. He eventually found that every measurement Piazzi-Smyth had taken was inaccurate. Petrie's own survey, The Pyramids and Temples of Gizeh, was published in 1883 and remains a standard in the field.
Before he had left for Egypt, Petrie visited Samuel Birch, Keeper of the British Museum, who suggested Petrie bring back some samples of pottery. Petrie thus began the process of keeping the meticulous records he would continue to use throughout his career. He noted and marked on each pot or shard the exact location where it had been found, and also listed the other artifacts present in the same context. Petrie was eventually given the Arabic name "Abu Bagousheh," Father of Pots.
Dr. Poole of the British Museum was so impressed by Petrie's work to date that he recommended him to the Egypt Exploration Fund, which needed an archaeologist in Egypt to succeed Edouard Naville. Petrie accepted and was given 250 pounds per month to cover his own and the excavation's expenses. In November 1884, Petrie arrived in Egypt and excavated at Tanis; at Naucratis, a city that had been built to house the Greek residents living in Egypt and which he himself discovered; and at Tell Farun and Defenneh.
Petrie was not the first excavator in Egypt. But he was severely critical of the shoddy work done by his predecessors. He wrote, "Nothing seems to be done with any uniform or regular plan, work is begun and left unfinished; no regard is paid to future requirements of exploration, and no civilized or labor saving devices are used. It is sickening to see the rate at which everything is being destroyed and the little regard paid to preservation." His two greatest supporters and patrons, Jesse Haworth, a wealthy Manchester businessman, and Amelia Edwards, one of the founders of the Egypt Exploration Fund who had herself written an account of journeying down the Nile, shared his opinion.
From every site Petrie excavated he sent back thousands of objects, most of them tiny pieces regarded by his predecessors as unimportant. He gave a small reward to any workman who found something, to ensure nothing found its way to the black market. But the Egypt Exploration Fund committee clashed with Petrie, as he was severely critical of its wasteful mismanagement and intolerant of its criticisms of his work. In 1886 Petrie tendered his resignation from the Fund. With Haworth's support, Petrie excavated at Illahun, site of the Middle Kingdom pyramid of Senwosret II; at Kahun, its workers' village; and at Gurob, another town nearby.
He set up an independent organization called the Egypt Research Account, later to become the British School of Archaeology in Egypt. He was also appointed the first Edwards Professor of Egyptology at University College London, a post he held from 1892 to 1933. He personally trained many, such as James Quibell, Gertrude Caton-Thompson and Guy Brunton, who themselves went on to become masters in the field.
Though he was eccentric and fickle, never quite mastering Arabic, Petrie set the standard for every other Egyptologist with his meticulous excavations and thorough analysis. Despite his interest in the conservation and museum display of all objects, he understood that excavated material would eventually deteriorate and thus should be promptly published. He wrote over a thousand books, articles and reviews reporting on his excavations and his finds.
Of all his work in Egypt, where he excavated almost every major site over more than 37 years, perhaps Petrie's most significant contribution to Egyptology was the discovery of an extensive period of civilization prior to what had been called the First Dynasty. This preceding period is now known as the Predynastic Period, and Petrie first devised his "sequence dating" at the site of Naqada.
In 1894 Petrie arrived at Naqada on the west bank of the Nile, about 20 miles north of Luxor. He took on James Quibell as companion and assistant. Quibell himself would go on to work at Hierakonpolis and discover the Narmer palette in the Main Deposit there.
Over the next few months, more than 2,200 shallow pit graves were discovered, each occupant curled into the fetal position and accompanied by lavish grave goods, from ivory figurines and combs to simple slate palettes and a variety of pots and jars. No inscriptions were found, leading Petrie to conjecture that these graves belonged to foreigners who had invaded Egypt during the First Intermediate Period. But by 1899, after examining more cemeteries at Abydos and Hu, Petrie concurred with the theory held by Quibell and others that these were the cemeteries of the earliest settlers in Egypt.
Petrie began to analyze the grave goods methodically. Grave A might contain certain types of pot in common with Grave B; Grave B also contained a later style of pot, the only type to be found in Grave C. By writing cards for each grave and filing them in logical order, Petrie established a full sequence for the cemetery, concluding that the last graves were probably contemporary with the First Dynasty. The development of life along the Nile thus was revealed, from early settlers to farmers to political stratification.
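Petrie's card-filing method is, in modern terms, a seriation: find an ordering of the graves in which each pottery type's occurrences form one unbroken run. A minimal sketch in Python, using invented grave and pottery names rather than Petrie's actual data, brute-forces that ordering for a toy cemetery:

```python
from itertools import permutations

# Toy illustration (not Petrie's data): each grave is the set of
# pottery types found in it. Shared types hint at relative age.
graves = {
    "A": {"black-top", "white-lined"},
    "B": {"white-lined", "wavy-handled"},
    "C": {"wavy-handled", "late-ware"},
    "D": {"late-ware"},
}

def contiguity_breaks(order):
    """Count, over all pottery types, how many times a type's run of
    occurrences is interrupted; a perfect seriation has zero breaks."""
    types = set().union(*graves.values())
    breaks = 0
    for t in types:
        runs, prev = 0, False
        for g in order:
            present = t in graves[g]
            if present and not prev:
                runs += 1
            prev = present
        breaks += runs - 1
    return breaks

best = min(permutations(graves), key=contiguity_breaks)
print(best)                      # -> ('A', 'B', 'C', 'D')
print(contiguity_breaks(best))   # -> 0: every type forms one unbroken run
```

Real assemblages have thousands of graves, so exhaustive search is impossible; Petrie's manual shuffling of cards, like modern matrix-seriation algorithms, was a heuristic pursuing the same objective.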
Three phases of this Predynastic Naqada culture are now recognized, as first described by Flinders Petrie. The earliest is called Naqada I or Amratian (since similar pottery types were found at the site of el-Amra). It is characterized by black-topped red ware and white cross-lined bodies. The next was Naqada II, or Gerzean, characterized by decorated wavy-handled pots. The final phase, Naqada III, shades into the First Dynasty itself.
Flinders Petrie left Egypt in 1923 and went on to excavate in the Near East, where he traced Egyptian trade and cultural links, adding still more to the field of Egyptology and to our growing knowledge of the ancient past.
- From World of the Pharaohs by Christine Hobson
- From The Experience of Ancient Egypt by Rosalie David
- From Flinders Petrie: A Life in Archaeology by Margaret S. Drower
There's no single answer to the fuel of the future, at least certainly not at this time. Part of the solution lies in reclaiming waste streams that now mostly end up in landfills. Some of these -- like poop -- we don't even want to think about, but it's about time we did. And scientists are on the case, given the strange-but-true examples cited here.
Cellulosic ethanol (made from sugar cane, wood waste or sweet sorghum) is probably the wave of the future, but here are some other ways we can -- and probably will -- make fuel from bio-materials.
Is it possible to run a car on chocolate? Well, maybe not wholly on chocolate. Your Hershey bar won't get you home in an emergency, but a team from the University of Warwick in Britain has built and is track-testing a Formula 3 race car, running on 30% biodiesel derived from chocolate waste. That's not all; the steering wheel is partly made of carrots, and the mirrors and aerodynamic front wing are formed with potato starch and flax fiber.
According to James Meredith, who heads the project at Warwick, "Anything with a fat in it can be turned into diesel, and that's what we've managed to do." The chocolate is waste from bad batches at Cadbury's in nearby Birmingham. The researchers have managed to keep their fingers out of the chocolate vats. "It's waste, so I assume it's no good to eat," Meredith said.
It was bad enough when scientists figured out how to reclaim paper pulp from used disposable diapers, but they're also saying they can make diesel fuel from them using a pyrolysis process. A Canadian company called AMEC is in the process of building a pilot plant in Quebec that will process the plastics, resins, fibers (and poop) into a predictable mix of gas, oil and char. Now adult poop would work just as well, but we don't collect it in handy sealed containers as we do baby waste.
The great advantage, says AMEC, is that the raw material is not contaminated with anything else -- it's a rich, if aromatic, source of fuel. The company hopes to take in 180 million diapers a year -- a quarter of Quebec's output -- to produce 11 million liters of diesel. Considering that diapers can take 100 years to decompose in a landfill, turning them into domestically produced fuel seems a good alternative.
Yes, we will soon be able to make gasoline -- and diesel and jet fuel, too -- from everything from wood chips and sawdust to switchgrass. Companies around the country are doing this on an experimental basis, using a variety of methods, but the embryonic technology got a huge boost when the Obama Administration revised the biofuel standards earlier this month to include a billion gallons of diesel fuel from biomass by 2022.
Biomass gasoline won't be much, if any, cleaner out of the tailpipe than current fuel, but when the lifecycle carbon reductions from growing the "feedstock" are taken into account, it's a big winner.
All Power Labs in Berkeley, California is competing for the illustrious Auto X Prize with a car that runs on wood chips. "Specifically, we're making carbon-negative, open-source fuel from basically garbage," says team member Tom Price. The process itself isn't new: during World War II, when gasoline was unobtainable in Europe, there were more than a million cars using gasification technology -- turning coal and wood chips into gas for internal-combustion engines. Price envisions using waste walnut shells, which normally release the potent greenhouse gas methane. "We can crack the hydrogen out to run an Accord," Price says, "then put the leftovers on the ground to grow more walnuts, which suck more CO2 out of the atmosphere, and the cycle continues."
Americans consume an estimated 45 million turkeys on Thanksgiving, raising the impolite question: what happens to all the turkey guts? A bunch of entrepreneurs in Carthage, Missouri not only asked that question, they answered it, too, by opening a plant that could process turkey waste (including feathers, using up everything but the gobble) into a fuel oil that could be processed into diesel, gasoline or jet fuel. The process, known as thermo-depolymerization (TDP), is well known, and it works. The turkeys' private parts break down under very high heat and pressure, yielding natural gas, fuel oil and minerals. The company says it could also produce light crude from hog and chicken waste -- or onion byproducts and Parmesan cheese rinds, for that matter.
The big problem, however, is that the plant stinks, and it's close to a residential area, prompting withering complaints. The company, Changing World Technologies, may seek greener pastures.
Wow, according to the United Nations, the livestock industry (including the growing of all the cattle feed, the transportation to market and energy for factory-farm operations) is responsible for 18% of global warming emissions -- more than transportation worldwide. And it will get worse: Current projections show meat production more than doubling to 469 million tons in 2050. One of the main culprits is methane, a global warming gas that is 23 times more potent than carbon dioxide. The world has 1.5 billion cows, and they produce methane out of both ends (belching more than flatulence). An estimated two thirds of the planet's ammonia comes from cows, too. In New Zealand, livestock accounts for 34% of greenhouse gas emissions.
Partly because they're eating grain instead of the grass nature intended, cows can produce 50 to 130 gallons of methane every day. Suppose we could use that as a fuel, since methane burns very well. Eureka! Dairy farms such as Blue Spruce Farm in Vermont are putting their cow waste in anaerobic (no oxygen) digesters for three weeks, producing methane, and then burning it in generators to produce electricity. This "cow power" is being sold to a nearby college, and it can also be fed back into the grid. The process also generates useful fertilizer.
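The figures above make a back-of-envelope estimate possible. Assuming, purely for illustration, that methane carries about 1,000 BTU per cubic foot, that a US gallon is 0.1337 cubic feet, and that a generator converts about 30% of the heat into electricity:

```python
# Back-of-envelope estimate of one cow's "cow power" contribution.
# All conversion figures below are assumptions for illustration.
methane_gal_per_day = 100       # midpoint of the 50-130 gallon range above
cubic_ft_per_gal = 0.1337       # US gallon -> cubic feet
btu_per_cubic_ft = 1000         # approximate energy density of methane
btu_per_kwh = 3412              # BTU in one kilowatt-hour
generator_efficiency = 0.30     # assumed heat-to-electricity conversion

heat_kwh = methane_gal_per_day * cubic_ft_per_gal * btu_per_cubic_ft / btu_per_kwh
electric_kwh = heat_kwh * generator_efficiency
print(f"{electric_kwh:.1f} kWh of electricity per cow per day")  # about 1.2
```

On those assumptions a single cow yields roughly a kilowatt-hour of electricity a day, which is why "cow power" pays off at the scale of whole herds rather than single animals.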
Dr. Craig Alan Bittner, a Beverly Hills cosmetic surgeon who reportedly conducted more than 7,000 liposuctions -- and believed in waste not, want not -- saved the leftover fat and turned it into perfectly good biofuel for his Ford SUV and his girlfriend's Lincoln Navigator. "The vast majority of my patients request that I use their fat for fuel," he said on his website, "and I have more fat than I can use." They lose their love handles and help the Earth at the same time, he said.
A gallon of fat can be turned into a gallon of biofuel, but the fact that it's illegal is a minor deterrent, though not apparently to Dr. Bittner. He had other problems, too, including reportedly getting his unlicensed assistant and girlfriend to perform his operations.
Like turkey guts, coffee grounds are an unwanted waste product that fills up landfills and takes a long time to biodegrade. In Europe, however, household food scraps are considered a fuel source. In Germany and Switzerland, for example, a company collects and then ferments those scraps, producing both a natural gas fuel and compost. So could we actually power cars on biodiesel from coffee grounds? It's a distinct possibility.
You know how coffee can sometimes look (and taste) slightly oily? That's because it contains 10 to 15% usable oil that can be refined into a biofuel. A study says used cappuccino scraps can offset our imported oil -- as much as 340 million gallons a year from the world's 15 billion pounds of annual coffee production. "It's a simple two-step process," says Susanta Mohapatra, a University of Nevada, Reno, researcher who is a co-author of the study. Her team raided Starbucks to find the "feedstock" for the coffee fuel. "We can definitely make a big impact on our environment with fuel made out of nature," she said.
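The 340-million-gallon figure can be sanity-checked against the numbers given. Assuming (our figure, not the study's) that the extracted oil weighs about 7.6 pounds per gallon:

```python
# Rough check of the coffee-oil estimate; the density is an assumption.
grounds_lb = 15e9        # world annual coffee production, per the article
oil_fraction = 0.15      # upper end of the 10-15% oil content cited above
lb_per_gallon = 7.6      # assumed weight of a gallon of the extracted oil

gallons = grounds_lb * oil_fraction / lb_per_gallon
print(f"{gallons / 1e6:.0f} million gallons a year")
```

That comes out near 300 million gallons, the same order of magnitude as the 340-million-gallon claim.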
According to Robert Malloy of the University of Massachusetts, used polystyrene coffee cups will make a great fuel component. Polystyrene (used to make disposable foam plates and cups) is very lightweight but also bulky, so it's difficult and expensive to send out for recycling. But it could make a very effective fuel additive, says an Iowa State study last April. "This study demonstrated that polystyrene-biodiesel blends could be successfully used in diesel engines with minor modifications to the fuel system and appropriate adjustments to engine operating conditions."
According to Song-Charng Kong, a co-author of the Iowa study, polystyrene melts quickly in biodiesel, and fuel that is as much as five percent coffee cups does quite well. At higher concentrations (they tried up to 20%) it gets too thick. Right now emissions are a problem, but they're working on it.
A NEW FORM OF ENERGY - AND A NEW WORLD
by Joe Shea
May 23, 2012
BRADENTON, Fla., Jan. 12, 2012 -- The National Aeronautics and Space Administration (NASA) revealed today that it is working on a "new form of energy" that transmutes common materials into a "different element" and that can replace fossil fuels for use in homes, space and other transportation systems and infrastructure. A patent for the work was filed by NASA in October.
NASA Senior Research Scientist Dr. Joseph Zawodny, who had briefed NASA scientists about the technology last September, said, "This other form of nuclear energy releases energy by adding neutrons. Eventually, they gain a sufficient number of neutrons that they decay into something of the same mass, but a different element."
"It has demonstrated ability to produce excess amounts of energy, cleanly, without hazardous ionizing radiation, without producing 'nasty' waste."
The production of more energy than is input in any process has long been thought to violate hallowed laws of physics that say no process can create more energy than is used to produce it. The NASA work suggests that the energy is transformed, rather than increased, and that the transformation itself - the loss and buildup of protons in a new element - is the source of the heat energy created. The "different element" may sometimes be copper and other common materials, rather than an exotic new byproduct of low-energy nuclear collisions.
The work echoes other physics theories that also rely on the interaction between hydrogen, carbon and nickel hydrides. One such theory, the "hydrino" reactor created by Dr. Randall Mills of BlackLight Power in Cranbury, N.J., has yet to be demonstrated, although a demonstration has been promised several times in recent years.
Dr. Mills and other cold fusion scientists have recently formed a consortium that may intend to either enhance or head off work that would award a patent on the new energy to the U.S. government. NASA contracted with Dr. Mills several years ago for hydrino research that could propel rockets into deep space. So far, venture capitalists have poured $70 million into Mills' hydrino apparatus.
A spectroscopic study based on Mills' theory at the Harvard/Smithsonian Center for Astrophysics organized by Gen III partners, a technology validation firm that investigates claims for venture firms, did find predicted emissions below the ground state of hydrogen - a region earlier thought not to exist - in February 2010, according to the BlackLight Power website.
The combination of inexpensive nickel, catalytic carbon and abundant hydrogen through electrolysis is also at the core of the Rossi technology, and he too has reported occasional instances of transmutation of these materials to copper during the operation of his Energy Catalyzer.
An unidentified NASA spokesman says the energy can support "transportation systems and infrastructure." A video released by NASA exemplifies that with images of passenger planes and trains.
"The easiest implementation of this would be for the home," Dr. Zawodny adds in the video.
"You would have a unit that would replace your water heater. And you would have some sort of cycle to derive electrical energy from that. And then it would dump its waste heat into the water, or air-handling system for the building.
"So it would be a dual-use thing," he continued, "so it would be sitting there producing heat, then you'd derive electricity from it to run your electronics. Power the house, power the building, power the light industry. And then the waste heat would be used for environmental control and warm water."
The video has the weighty title, "NASA's Method for Enhancement of Surface Plasmon Polaritons to Initiate & Sustain LENR in Metal Hydride Systems."
A NASA spokesman, Dwayne Brown, cautioned that NASA is not yet ready to say it has achieved any enormous breakthrough. "We're not in a panic here that we have developed a new form of energy," Brown said from his Washington office. Brown said he had not yet seen the video, but was informed about its content by Langley Research Center public information officer Mike Hefferan, a former Miami Herald reporter, who said he had seen it but had not been able to contact Dr. Zawodny for comment. "He's apparently on leave," Hefferan told AR.
"This is capable by itself of completely changing geoeconomics, geopolitics and solving climate and energy," NASA Chief Scientist Dennis Bushnell has said.
According to reports on a lecture at Orebro University in Sweden by the distinguished retired Swedish physicist Sven Kullander of Uppsala University, who is chairman of the energy committee of the Royal Swedish Academy of Sciences (which selects the Nobel Laureates in Physics and Chemistry), Italian engineer Andrea Rossi sold a 1-megawatt device to NASA after a nine-hour eleventh demonstration on October 28, 2011; at its conclusion the agency reportedly purchased that device and then ordered 12 more, at $2 million each. Kullander has also publicly ruled out the possibility that any chemical reaction is producing the excess heat generated by the Rossi device.
After an interview in which NASA Chief Scientist Dennis Bushnell appeared to endorse the Rossi device, the agency refused to comment on further developments. NASA conducted a confidential meeting of the space agency's top research scientists at NASA's Glenn Research Center in Cleveland on Sept. 22 and declined to talk about the presentations there. Four briefings at the meeting, in the form of PowerPoint presentations, were later obtained by New Energy Times editor Steven B. Krivit, who released them to the public, just as he made the video available today. In them, Bushnell calls the Rossi device "a game-changer."
NASA has never acknowledged being present at the Bologna conference, and has filed a patent application for its technology. Rossi has been unable to obtain a US patent, although Bushnell indicated in the interview that Rossi's work influenced the NASA effort. The extensive technical collaboration with NASA may make either patent application a tricky proposition. Given its alleged properties, the technology could generate hundreds of billions in revenues in coming years. As for secrecy concerning the device, The Associated Press, which sent Science Writer Peter Svensson from New York to Bologna, Italy, to cover the Rossi demonstration, has not reported on the event. The AP appears determined to maintain a shroud of secrecy over the device and any new discoveries surrounding LENR.
Unquestionably, the technology described would have great impact on the oil, gas, nuclear, automotive and electric power industries if it performs as NASA described. That may be one reason it has enjoyed such a low profile debut.
Rossi said he has taken orders for 10,000 units of a home heating version of the so-called Energy Catalyzer (or "E-Cat") over the Internet, and also said he hopes to produce a million each year using robots, possibly at a plant in Massachusetts, where state officials have welcomed the idea.
But should Americans hope their home heating bills will be $0 in 2013? Surely not. This nation has lost the competitive edge that once made it nothing less than a hungry, pouncing tiger when it comes to innovative technology. Today, no mainstream newspaper has even mentioned the NASA work, and none are likely to (at least partly because of the Pons-Fleischmann debacle in 1989), and even if they did, some of their most powerful advertisers would raise strong objections, while skeptics would jeer at the authors.
One problem NASA faces in licensing its technology is that at least one credible scientist says it is his, not theirs. In a note published Jan. 17, 2012, Andrea Rossi - who famously invented the 1-megawatt cold fusion device called the Energy Catalyzer - told readers of pesn.com, a leading alt-energy site:
"The fact that NASA is trying to copy my work," Rossi wrote, "honors me. But their theory is wrong. We will beat them, as well as all the other competitors with our E-Cats: the E-Cats will have a too low price to allow NASA or anybody else to compete with us. They are Goliath, very big and strong; we are David…"
Amid competing claims of authorship, can either technology prosper?
In fact, it would take a great and fearless leader to deliver this incredibly disruptive technology to the American people. Unfortunately, no galvanizing American opinion leader is present on the world scene. Sales, assembly and installation of the units would require the nation to create tens of millions of new jobs, and a whole new type of vehicle engine, while eliminating a few million jobs in the competitive energy industries. Presidential contender Mitt Romney has said recently that the version of cold fusion developed in Utah would be a great boon if it can be duplicated.
"I do believe in basic science," Romney said. "I believe in participating in space. I believe in analysis of new sources of energy. I believe in laboratories, looking at ways to conduct electricity with -- with cold fusion, if we can come up with it. It was the University of Utah that solved that. We somehow can't figure out how to duplicate it," Romney told the paper's editorial board. There was no indication Romney was aware of NASA's new work, however.
For President Obama, the benefits of alternative energy development have been elusive and problematic, especially since the bankruptcy of Solyndra, a solar energy firm that lost a $500 million federal investment. Yet the President's first State of the Union message demonstrated a desire to boost this sector of the energy industry, which is the largest in the world.
In a speech to a tech conference on January 29, 2010, presidential energy advisor and EPA head Carol Browner noted:
In his first State of the Union address, President Obama said no area of the economy is more ripe for innovation than the energy sector. The President has also said that "comprehensive" energy and climate legislation is needed to create incentives that would make clean energy profitable. He has always listed job creation as one of the benefits of comprehensive energy and climate legislation.
Now the President has an opportunity to get far out in front of the energy discussion, especially since the new technology is owned by the United States, but there is a question: Does he have the wit or will to do that?
After Solyndra, that's unlikely. The failure of Solyndra, like the initial failure of cold fusion in 1989, may paint alternative energy advances with such a broad brush that Americans will never know what they could have had. The promise of LENR could quickly slip away to rival nations - and they would get the huge advantage of unlimited cheap electric power that might have been ours.
Resources: Four sets of PowerPoint slides made available by NASA from the Sept. 22 confidential briefings were released through the Freedom of Information Act to New Energy Times.
Defense Intelligence Agency Unclassified Report on the state of Cold Fusion Research: http://lenr-canr.org/acrobat/BarnhartBtechnology.pdf
More: Sterling Allan's Peswiki Main Page has covered developments in the LENR field assiduously for years. Read those reports at:
AR Correspondent Joe Shea has written extensively on the technology discussed here. The American Reporter is grateful to Steven B. Krivit of New Energy Times for supplying the URL for the new NASA video, which was first published by FreeEnergyTruth.com
Evolution of Medieval Warfare
Recruiting, Organization, Tactics
Pay for the Troops
Ransom for Those Captured in Battle
Warfare was a way of life in Medieval Europe. The nobility held their power by virtue of their status as professional soldiers. Many commoner soldiers were also professionals, usually led by nobles. And all who wished to maintain their safety and security had to be ready for a fight. It was, without too much exaggeration, a population in arms.
The form of warfare in Medieval Europe was that which developed out of the military traditions and practices of the German tribes that overran the Roman empire in the 4th, 5th, and 6th centuries. The Romans had been fighting the German tribes since the 2nd Century BC. The Romans had a professional, standing army and, as a result, they usually won. The Germans were bigger and wilder, but not highly organized and not nearly as professional. This changed over the centuries as technology improved and the Germans learned more about how the Romans did things, while the Romans frittered away their time having civil wars and such. In short, the Germans got better and the Romans didn't. A major innovation in the century before Rome fell was the widespread use of armored cavalry by the Germans. This came about because of the introduction of the stirrup in the first centuries AD, as German tribes migrated to the vast plains of Russia and adapted to mounted combat. There was also the German exposure to the Asiatic nomads coming in from the east (the Huns). The wealthier (and usually more skilled) German warriors began to do most of their fighting on horseback. Wearing a lot of armor and wielding a long sword, and even longer lance, the massed charge of these mounted Germans became more and more effective. As a result, the Romans suffered a number of serious defeats, even though the Romans also adopted the same mounted form of warfare. The Romans tried to adapt, but internal problems kept getting in the way of a thorough reform of the military system, and things just went downhill throughout the 5th century. The funny thing was, most of the German invaders were still infantry. But the availability of those heavy, armored cavalry troops often made a decisive difference.
While the Germans ultimately proved themselves superior militarily, the Romans had the edge in all other respects. In particular, the elaborate and efficient Roman form of government provided, century after century, a steady supply of tax money and recruits with which to form a standing army. The Germans had nothing like this, and as they settled down in the conquered Roman lands, they found their lack of administrative skills a major handicap. Fortunately, another aspect of German tribal traditions came to the rescue. It was customary for the heads of tribes (from the wealthiest families, not surprisingly, who now styled themselves kings) to reward their most powerful and successful warriors with a portion of the goodies after a successful campaign. The invasions of Roman territory had provided enormous opportunities in this department. Following a practice already common back in Germany, the new rulers gave their mounted warriors large tracts of land (thousands of acres in some cases) to run as their own little bailiwick in the king's name, as well as a good chunk of the land for their personal use. The people on this land were under the control of the warrior, who provided administrative and legal services for the population, and taxed "his" subjects any way he liked.
Some of the farmers, the poorest, were serfs (either before the Germans arrived, or due to the depredations of the invading armies). These serfs owned no land, but were allowed to work a few acres for themselves while providing up to half their time to work the new ruler's lands. This was the feudal system. It was nothing radically new, and had existed in much the same form earlier. Even the Romans had used a version of it in some places. The German kings expected their warlords to use whatever profit they could wrest from their land (eventually known as a fief) to support themselves. In return, they had to maintain their military skills and answer the king's call when armed forces were needed. With this system, the new German kings put trusted men in charge of every village and town while maintaining armed forces at the same time.
While the new military system wasn't as efficient as the Roman one, it was sufficient. Although the East Romans still had their professional army, they were not much of a threat after the 7th century, when the Moslem Arabs drove their armies right up to the walls of Constantinople several times. The East Romans (or Byzantines) had proven capable of defeating the feudal German armies in the 6th and 7th centuries, and one could say without much exaggeration that it was the Arabs who saved the Germans by preoccupying the Byzantines. In the century before the Moslems showed up, the East Romans were well on their way to reconquering the western portion of the empire. Later the Turks came along and not only kept the East Romans engaged, but eventually broke Byzantine military power.
The Arabs also invaded German lands, at least in Spain and thence into southern France, and by sea into portions of Italy. But the Arabs, man for man, weren't any better than the Germans. In Spain, the Arabs were at the end of a long campaign across North Africa and the Germans had superior numbers. The East Romans took the full brunt of the Arab armies, and it was only the superior quality of their army that managed to stop the more numerous Arabs.
As impressive as the mounted man-at-arms (whether knighted or not) appeared, he was a warrior, not a soldier. Roman troops had been soldiers. Drilled, disciplined, and thoroughly professional, the Roman army failed only when it got sloppy with recruiting, training, and politics. Medieval troops were not so much sloppy as highly variable and individualistic. Medieval knights, like their Samurai counterparts in Japan, trained and fought as individuals. While thousands of these warriors would frequently join together as an army, and make grand, massed charges at the enemy, they always thought of themselves as fighting as individuals, not as part of a military unit.
Naturally, leading a Medieval army was a tricky proposition. Fortunately, winning battles with such armies was less of a problem because, for a long time, all the armies were the same. Tactics were simplistic in the extreme. Both sides lined up their masses of mounted troops, with foot troops in the front ranks. The infantry usually opened the battle. When one or the other side's leader judged the moment appropriate, a mass charge of mounted men would be launched. This usually decided the affair one way or another.
There were a lot of variations. Some nations still relied on a lot of infantry, if only because they were too poor to support a lot of mounted troops. Examples were the Scots and the Swiss. The poverty-stricken Scots, with a more populous and wealthier England to the south, always faced their mounted foes with infantry, and had often managed to hold their own. The Swiss threw off the rule of the Hapsburgs, and maintained their independence with nothing but highly disciplined infantry. Even the Germans maintained elements of their ancient infantry tradition. Indeed, the Germans continued to win victories with infantry armies up to the 11th century.
But, all things being equal, the mounted man-at-arms was superior. It wasn't just the horse; the knight was also better armed and armored. Moreover, a knight devoted his life to training with his weapons and was usually quite good at it. The downside was that the knights believed their own propaganda. Foot soldiers were disdained and discipline was seen as incompatible with a noble warrior's honor. The basic problem was that every noble (knights and above) thought he was above obeying orders. A duke or a count had some control over his knights (and each knight's small band of armed followers), but each such noble was less impressed by the royal official, or king himself, in charge of the entire army. Every noble thought he, and his troops, deserved the post of honor in the first rank. An army commander would try to line up his various contingents in such a way that each would be used to best effect. Most knights (of whatever rank) simply wanted to get at the enemy and fight it out man to man. This was the mentality of knights through most of the Medieval period.
Moreover, unlike the Romans, the feudal warriors did not train together as a unit. There were exceptions. The Swiss fought on foot and basically reinvented the Greek Phalanx. The major Swiss innovation here was the use of less body armor, even more discipline and organization in the spear formations, and greater speed and flexibility of movement. This last element was necessary because the Swiss often fought in broken, hilly terrain (that is, Switzerland) and had to be flexible and swift to prevent the mounted knights from hitting them in their vulnerable flanks or rear. Indeed, these were the tactics the highly disciplined and mobile Romans used to defeat the original Greek Phalanx spear formations. The Swiss drilled endlessly, and fought with a ferocity that impressed even the armed nobles they faced. The Medieval knights were never able to get organized sufficiently to defeat the Swiss.
The most effective infantry of the Medieval period were the English yeomen. These were English and Welsh farmers who owned their own land (hence the term "yeomen") and were paid by the king to train in peacetime, and answer his call when he needed to raise an army. The typical English army of the period would be 80-90 percent yeomen, the remainder being men-at-arms (knights and serjeants, commoners equipped as knights). The yeomen were basically light infantry, who knew how to ride a horse. For hand-to-hand fighting they usually carried a sword, an axe, or a mallet (quite effective against a dismounted knight in full armor). But their principal weapon was the longbow. Originally a hunting weapon in Wales, its major drawback was that it required years of practice to use effectively.
The king offered money, and other favors, to encourage the peacetime training, and good pay for when the yeomen were called to action. The king also offered fines and other punishments if the yeomen didn't practice their archery in peacetime. But the yeomen were skilled at more than just handling the bow. Their training concentrated on firing in groups (they were organized into units of 20 and 100 men). Peacetime training consisted of individual archers learning how to fire at a specific range. This took a lot of practice. The archers soon learned which angle to point their bows in order to land their arrow on a white sheet (the common target, representing a group of enemy troops) at different ranges, a practice called "clout shooting" (i.e., cloth shooting). In addition to individual practice, they also drilled with their units. In a formation ten ranks deep, only the men in the first few ranks could even see the enemy, or hear the commands of their "centenaur" (leader of a hundred).
In battle, the centuries (each with a hundred archers) would line up in formations of up to ten ranks deep. In front of each century would be an experienced (and very well paid) centenaur. A typical English army would have 50 or more centuries of archers available. In overall command would be a Master of the Archers, an experienced knight who was (unlike most knights) skilled with the longbow. The Master of the Archers would keep his eye on the enemy and judge how many yards distant the foe was. When ordered to fire, the Master of the Archers would estimate the range to the enemy and then bellow out "ready," quickly followed by his range estimate, to the centenaurs, who would turn and bellow it to their archers (especially the most experienced ones, placed in the first rank to provide an accurate guide for the archers behind them). The Master of the Archers would then yell "loose," the centenaurs would echo the command, and thousands of arrows would fly skyward, most of them to land where, and when, the Master of the Archers wanted them.
The Master of the Archers might order only a few of his Centuries to fire, if enemy troops were only advancing on a portion of the front. But in Medieval warfare it was generally all or nothing. An attack usually called for the knights to advance on a broad front.
The yeomen could let loose a dozen arrows a minute, creating a steady stream of deadly missiles. Advancing horsemen were doomed, as their unarmored (or even partially armored) mounts went down from arrow wounds. The riders went down also, often with broken bones and other injuries in the process, not to mention exposing them to the possibility of being trampled by their onrushing comrades.
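The arithmetic behind that "steady stream" is worth spelling out. Here is a rough sketch using the passage's own figures (50 centuries of 100 archers, a dozen arrows a minute each); the exact size of the archer contingent is an assumption, since the text says "50 or more" centuries:

```python
# Rough volume-of-fire estimate for an English archer line.
# Assumes the text's figures: 50 centuries of 100 men, 12 arrows/minute each.
centuries = 50
archers_per_century = 100
arrows_per_minute_each = 12

archers = centuries * archers_per_century
volley_rate = archers * arrows_per_minute_each  # arrows in the air per minute

print(f"{archers:,} archers loose {volley_rate:,} arrows per minute")
```

Even at half that rate, a formation advancing over the last 300 yards would have to pass through tens of thousands of falling arrows before making contact.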
When the French knights advanced on foot, the results weren't much different. The long range (up to 300 yards) plunging fire would eventually cause some wounds. The sight and sound of all those arrows raining down was quite demoralizing. As the enemy knights got closer to the yeomen, they would get hit by direct fire from the front row of archers (the most experienced and accurate ones) and discover that at point blank range, the yard-long arrows did indeed have an armor-piercing tip. Those knights who got to within ten yards of the yeomen would encounter several rows of sharpened stakes, and perhaps even a ditch. Usually the English would post their own dismounted knights in a sort of phalanx to support their archers, which only made matters worse for the attacking French. At this point, there were few enough knights for the yeomen to successfully engage them in single combat. Well, it was more lopsided than that. Supported by the English knights, the archers would put down their bows and come out from behind their defenses with sword or axe or mallet once they saw they had a 2-to-1 or better advantage over the surviving knights. The yeomen would then team up to capture knights alive, and reap the ransoms captured nobles always brought. One yeoman would engage the knight from in front while another hit him from the side or rear with the flat end of an axe or a mallet. The stunned knight, now on the ground, would invariably surrender. The yeomen rarely lost these combats and took few casualties in them, even if they did not wear much armor. The ransoms thus obtained made many a yeoman family wealthy. The news of these riches travelled far and fast, making it easier for the king to keep his yeomen at their peacetime training and eager to answer the royal call when another campaign was afoot.
There were never that many yeomen; some 10,000-20,000 were raised for each campaign. While some of them became full time professionals during the Hundred Years War, most remained basically farmers who fought on the side. Typically, they would answer the king's call in the Spring. If they were lucky, they would go off to war after the Spring planting was out of the way. With the approach of Winter, the king would allow many to return home. In practice, all those that wanted to go home would do so. Campaigning was rarely done in the Winter, and all the king needed then were troops to man the fortifications in his French lands. Garrison soldiers could be obtained locally and cost less than yeomen.
Families worked together in the period, so a family consisting of two generations, 20-30 people, and a half dozen or so married couples (plus children of all ages) could send off two or three yeomen to war while those who stayed home covered all the labor requirements needed to keep the farm going. When their yeomen returned from France, they would have several thousand ducats in pay, not to mention, if they had had a good campaign, thousands more in loot and ransom money. After battles like Crecy, Poitiers, or Agincourt, many yeomen came home with 100,000 ducats or more. This was a fortune in Medieval terms. Land cost several thousand ducats an acre, and such large sums of money made the yeomen even more prosperous farmers.
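As a sanity check on those sums, a minimal sketch of what the windfall meant in land. All numbers are the text's own rough figures in ducats; the exact values standing in for "several thousand" are assumptions:

```python
# Back-of-the-envelope look at the yeoman windfall figures above.
# "Several thousand" is assumed here to mean 3,000 ducats.
campaign_pay = 3_000           # "several thousand ducats in pay"
big_battle_windfall = 100_000  # loot and ransom after a Crecy or Agincourt
land_price_per_acre = 3_000    # "several thousand ducats an acre"

acres_from_pay = campaign_pay / land_price_per_acre
acres_from_windfall = big_battle_windfall / land_price_per_acre

print(f"Pay alone buys about {acres_from_pay:.0f} acre(s) of land")
print(f"A big-battle windfall buys about {acres_from_windfall:.0f} acres")
```

One good battle, in other words, could add dozens of acres to a family farm, which explains why the royal call rarely went unanswered.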
But in practice, through most of the period battles were rare, mainly because it took so long to line up all your troops in order to have one. If the other fellow didn't want to fight, he could just keep marching away. Of course, an exceptionally large, or better led, army could force an opponent to give battle. This was usually done by cutting your opponent off from any easy escape route and, in effect, giving him an offer to do battle that he couldn't refuse. Most armies were undisciplined masses of troops, straggling along on the primitive roads or cross country. Scouting was primitive, if it was done at all. When a scout, or passing traveller, brought word of an enemy army in the vicinity, an army leader would often want to hold a council of war with his leading nobles, lest some of them refuse to obey any hastily conceived orders to move off in a different direction. Sometimes battles were by pre-arrangement, with the Heralds working out the details of where and when the armies would meet. So battles didn't happen all that often. When they did, it was most frequently because an army sought to break a siege.
Sieges were the most common form of large-scale combat in the Middle Ages. Political control in Medieval times depended on who held the numerous castles and walled cities that dotted the countryside. These fortifications held reserve supplies of food and large numbers of troops. From these bases, the nobility controlled the countryside. If you wished to "conquer" an area you had to take the fortified places away from whoever currently held them. Since these places were built to resist being taken, a siege was the usual result. Sieges took time (some went on for months) and money (some cost literally millions of ducats). The larger force outside often had more serious food problems than the besieged. The surrounding countryside was often stripped bare of food at the approach of an enemy army. But the defender could not always depend on the besieger giving up because he was hungry. The usual hope was that a friendly army would come up and chase the besiegers away. This often resulted in a battle as a means to determine whether the siege would continue or not.
Sieges themselves were largely a matter of engineering work, with a little knightly combat thrown in to keep the warriors from getting bored. It was not uncommon for an impromptu tournament, or series of duels, to be arranged between the knights on both sides, just to enliven what was otherwise a very tedious process. The English had an advantage in sieges for most of the war (until the French developed superior cannon) because their yeomen were more effective at siege warfare. In addition to being able to sweep defenders from castle walls with their accurate and long range archery, the yeomen were also more skilled at the more mundane aspects of siege work. Being well paid mercenaries, the yeomen went about the digging and building that comprised most siege work in a more professional manner than their French counterparts.
Armies took siege technicians with them on campaign. These were usually carpenters and miners, plus master siege artisans who had years of experience in the techniques of siege warfare. The typical siege consisted of throwing a cordon of troops around the fortified place and then building rock (or fire) throwing catapults to attack the troops on the walls, tunnels to collapse the walls, scaling ladders and movable towers to allow troops to go over the walls, and battering rams to demolish walls or gateways. But the typical activity was the threat of an attack. The custom was that if the city surrendered without a fight, it would not be pillaged by the enemy troops. Both sides preferred to end the matter through negotiation, and this was basically a war of nerves. The besieger didn't really want to attack, as this would get a lot of his troops killed and it might not work, at least not the first time. Moreover, the besieger was usually after permanent possession of the place and didn't want to be stuck with the damage his angry troops would inflict if, after successfully storming the place, they pillaged it (thus wrecking everything in sight and killing off a fair percentage of the population). The defender didn't want to risk an assault either, but for different reasons. In many cases, time was on the side of the besieged. If careful preparations were made, the defenders might well have had a better supply of food and water than the besiegers. Moreover, there might be a relief army on the way. The defender had to calculate whether he could fight off enough assaults so that the attacker ran out of men or enthusiasm for the task.
If the attacker could make a breach in the wall, say by tunneling that caused the wall to collapse, this might give the defender sufficient reason to surrender without an assault. Catapults throwing fire balls into the city or castle might start fires that would also encourage a surrender. Negotiations were usually underway from the very beginning (or even before, as the advancing army sent forward Heralds to try to convince the commander of the castle or town that it was never too early to surrender). Of course, the commander of the defenders had more than his honor at stake. His boss might punish him quite severely (unto death, perhaps) if the fortified place was lost without every possible action being taken to avoid such a loss.
There was also the question of cost. Your typical fortress or castle (the former had fewer towers and less comfortable living accommodations) had a garrison of 100-300 men. These were usually locals, full or part time soldiers on the regular payroll of the local lord. Say an army of 1,000 mercenaries approached, costing the attacker, on average, 170,000 ducats a week to maintain. It would take several weeks to invest the place, build siege engines (catapults, etc.), and start digging tunnels. By this time the cost would already be up to half a million ducats, with less than a hundred thousand gained from pillaging the surrounding countryside. That pillage would cost the local lord tens of thousands in lost taxes in the future, and some of the damage would be to things the lord owned, such as flour mills or buildings. Nevertheless, it would be costing the besieger a lot more than it would be costing the defender. If the place was taken by negotiation, there would be loot inside the castle. In addition to at least several thousand ducats in cash, there were no doubt many other valuable items, everything from captured weapons and tools to perhaps some gold or silver objects. But the besieger had to decide when to stop throwing good money after bad. We may not think of Medieval warlords as accountants, but they had to pay their bills, too. Unpaid troops tended to drift away, leaving you defenseless in hostile territory. It wasn't all adventure and glory. A lot of Medieval warfare was the headaches delivered via a clerk's report on your current cash position.
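The besieger's ledger described above reduces to simple arithmetic. In the sketch below, the week count is an assumption standing in for "several weeks"; the other figures come from the passage:

```python
# Rough siege ledger from the besieger's point of view (all in ducats).
weekly_cost = 170_000    # upkeep of 1,000 mercenaries
weeks_to_invest = 3      # "several weeks" -- assumed here to be three
pillage_income = 100_000 # "less than a hundred thousand" from the countryside

gross_cost = weekly_cost * weeks_to_invest  # 510,000: "up to half a million"
net_cost = gross_cost - pillage_income

print(f"Gross outlay after {weeks_to_invest} weeks: {gross_cost:,} ducats")
print(f"Net outlay after pillage income: {net_cost:,} ducats")
```

Every additional week adds another 170,000 ducats to the attacker's side of the ledger while the garrison's payroll barely moves, which is why the clerk's report mattered as much as the catapults.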
Medieval warfare was also very dependent on the quality of leadership. The troops didn't vary much from area to area (except in the case of the yeomen or Swiss pikemen), nor did the methods. There were few books on "how to make war," and most military leaders obtained their positions because of their social standing, not their military track records. As a result, when good leaders were present, they would quickly reorganize their troops, redistribute what good subordinate leaders there were more effectively, and run their army on a more efficient basis than their opposition.
On August 11, 2012, at 4:53 pm local time, a magnitude 6.4 earthquake struck 60km (37 mi) north of the city of Tabriz in northwestern Iran. Eleven minutes later, a magnitude 6.3 aftershock hit near the original epicenter. More than 300 people died, 5,000 were injured, and 36,000 were made homeless. A magnitude 6.4 quake normally produces local property damage, some injuries, but very few fatalities. So why was a quake of this moderate magnitude so devastating?
The people who live in the villages of rural Iran build their houses today in the same way they have done for hundreds of years. The main building material is clay brick mixed with straw. A heavy roof rests directly on the mud brick walls, not supported by beams or framing. When the region suffers a quake of even moderate strength, the shaking causes the roof to collapse and bring the walls down, killing, injuring, or trapping everyone inside.
The city of Tabriz, where some modern building practices are followed, had some damage and 45 fatalities. It was the villages in the mountainous areas surrounding Tabriz that were hit the hardest. According to Iranian officials, 130 villages were 70% to 90% destroyed, and 20 villages were completely leveled.
This scenario keeps repeating itself. In 2002, a 6.5 magnitude earthquake killed 261. In 2003, a 6.6 earthquake southeast of Tehran wiped out the entire city of Bam, killing 31,000. In 2005, a 6.4 magnitude quake killed 612. One solution to this problem might be for the Iranian government to take steps to help the rural villages build safer houses, and retrofit the older ones against the shaking that is sure to come in this earthquake-prone region.
The Zagros Fold & Thrust Belt (FTB) is a complex of fault lines that runs for 1,800km (1,200 mi) from southern Iran to the Greater Caucasus Mountains in the north. The Zagros marks the convergence of the Arabian and Eurasian tectonic plates, where the Arabian Plate constantly pushes north into the Eurasian Plate at 20mm (0.8 in) a year, putting great stress on the fault lines separating the two plates. When the pressure builds high enough, a fault line section releases, and the Arabian Plate slips under the continental Eurasian Plate in a sudden, violent motion that produces shaking over a wide area. Iran is one of the world’s most seismically active areas.
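To see why stress builds so relentlessly along the Zagros, consider the slip deficit implied by that convergence rate. This is an illustrative calculation only; the recurrence interval below is a hypothetical value, not a figure from the article:

```python
# Slip-deficit arithmetic for the Zagros convergence rate quoted above.
# The recurrence interval is a hypothetical example value.
convergence_mm_per_year = 20   # Arabian Plate pushing into Eurasia, 20 mm/yr
years_between_quakes = 250     # hypothetical interval between large ruptures

accumulated_m = convergence_mm_per_year * years_between_quakes / 1000
print(f"Slip deficit after {years_between_quakes} years: {accumulated_m} m")
```

At 20 mm a year, even a couple of centuries of quiet locks in meters of unreleased motion, which the fault eventually gives back all at once.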
A common assumption about hearing loss is that there are only three causes: age, genetics, and noise exposure, when in fact there are many other causes people are not aware of.
Recent research reported this month in the American Journal of Kidney Disease found that persons suffering from moderate chronic kidney disease (CKD) are likely to have hearing loss and should receive regular screenings. Specifically, the study found that 54.4 percent of all the patients in the study with moderate CKD had some degree of hearing loss.
Kidney disease is not alone in contributing to hearing loss. Other health conditions that can cause hearing loss include diabetes and cardiovascular disease. Numerous studies have linked both of these conditions to an increased risk for developing hearing loss.
To learn more about how these health conditions contribute to the risk of hearing loss, read "Reasons for Hearing Loss: Chronic Kidney Disease" at HealthyHearing.com.
Talking to Your Third-Grader about Social Studies
- Watch the television news together on occasion. Let the events on the news become a basis for conversation. You might also watch documentaries about historical figures with your child; biography is a good basis for helping children learn about history.
- Look at photographs together. Family pictures showing you and your child at different ages are a good choice. Ask, "What can you remember about these earlier times? What is different now?"
- Look at photographs of children in other parts of the world. See whether your child knows where these children come from, and then ask him or her to tell you about the different countries the children come from.
- Social studies in the third grade includes learning more about maps and various regions of the world. You might ask your child what countries he or she knows about. Can your child find these countries on a globe or a map?
- Third-graders study the globe. Ask your child to pick out the continents -- Asia, Africa, South America, North America, Europe, Australia, Antarctica. Make a game of it, taking turns to find the continents. (You can do the same thing with the oceans.)
- With a map or atlas, see if your child can use map coordinates (these are the guides maps have on the edges, usually numbers on one side and letters on another, rather than latitude and longitude).
- Ask what scientists, carpenters, mechanics, lawyers, plumbers, physicians, and nurses do. Take turns thinking of various occupations, perhaps starting with people you know or characters in books.
- Children celebrate several different holidays in school. President's Day, Martin Luther King, Jr., Day, Veterans Day, Thanksgiving, and in some settings Cinco de Mayo receive the most attention. These celebrations are good opportunities to ask your child what he or she has learned about the presidents, Martin Luther King, Jr., and various national traditions.
- Ask your child to share with you what he or she has learned about different ethnic and cultural groups in and around your community. What has your child learned about African Americans, Hispanics, Vietnamese, and Cambodians?
- Ask your child to describe how a skyscraper is built, how a car is made, how wheat is harvested, how bread is made, how oil is carried from one part of the world to another, and so on. You will learn about your child's growing understanding of the world.
Reprinted from 101 Educational Conversations with Your 3rd Grader by Vito Perrone, published by Chelsea House Publishers.
Copyright 1994 by Chelsea House Publishers, a division of Main Line Book Co. All rights reserved.
More on: 3rd Grade
Water is essential for life. While we could survive for a month without food, we would last but a few days without water. Yet water does not receive its fair share of the marketing and advertising that's directed toward the American public, and far too many people don't drink the amount of water that their body requires. On average, the requirement is about nine 8-ounce glasses a day.
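For readers who think in metric units, the guideline converts as follows (a quick sketch; the US fluid ounce is the assumed unit):

```python
# Convert the "nine 8-ounce glasses" guideline to liters.
glasses = 9
oz_per_glass = 8
ml_per_fl_oz = 29.5735  # US fluid ounce

total_oz = glasses * oz_per_glass           # 72 fl oz
total_liters = total_oz * ml_per_fl_oz / 1000

print(f"{total_oz} fl oz is about {total_liters:.1f} liters per day")
```

That works out to roughly two liters a day, a figure consistent with common hydration guidance.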
Remember that your body’s requirements are just that… essential non-negotiable requirements! This is something your body truly needs to function properly. To allow this idea to register, consider an analogy. Imagine you received a bill for $100, but you decided you only wanted to pay $75. Your creditor would call and ask why you only sent three-fourths of the bill. If you responded, “Well, it’s close,” that would not satisfy your creditor. In a similar way, if you do not take in the full amount of water that your body needs on a daily basis, you will build up a deficit and become dehydrated.
Chronic dehydration can lead to a variety of physical problems. Conversely, consistent water intake is health promoting.
Looking at water’s many functions in the body brings home its significance.
Water’s duties include:
- Transporting nutrients, hormones, and chemical messengers to appropriate sites within your system.
- Diluting toxins and waste products and escorting them out of the body.
- Acting as a solvent, ridding the bloodstream of excess fat.
- Protecting the internal surfaces of your digestive tract and other systems.
- Maintaining the body’s operating temperature.
- Keeping your joints “oiled” with water-based solutions.
- Surrounding the brain and spinal cord with fluid to shield them from impact.
It’s been discovered that our thirst mechanism begins to malfunction when we do not consume enough water, so we can easily be unaware of the need for more water.
The Cleansing Center: helping you get back on the path to vibrant health naturally and sensibly!
Call ‘The Cleansing Center’ today for your free consultation and for more information on detoxing and building up the body’s immune system. (858) 539 9355
As part of Black History Month, we honor Charles V. Carr, a gravel-voiced, skillful and wily 30-year Cleveland councilman who helped make the city a force in black politics in the 1950s and 1960s.
Born in Texas in 1903, Carr came to Cleveland in 1918, briefly attending East Tech High School. He graduated from Fisk University and John Marshall Law School. In 1945, Carr became councilman for Ward 17, an area that roughly encompasses the Central neighborhood.
The following year, he helped pass a civil rights ordinance that revoked the license of any public business convicted of discrimination against blacks. He also led the battle in 1947 to integrate Euclid Beach Park, fought for fair housing ordinances in 1959 and played a key role in passing the city's income tax in 1966.
"I hope to see the day when it will be recognized that what is in bad taste in dealing with one citizen is in bad taste in dealing with another," he told 600 well-wishers in 1960 after becoming council's first black majority leader.
Carr lost his council seat in 1975 but served on the Regional Transit Authority's Board of Trustees until his death in 1987 at age 83.
"Carl B. Stokes was the nation's first big-city black mayor, and George L. Forbes became the most powerful politician ever to reside at City Hall," Plain Dealer columnist Brent Larkin wrote, "but it was Carr, the grandson of a slave, who made it possible."