Characterization of liquid chromatography-tandem mass spectrometry method for the determination of acrylamide in complex environmental samples. Patrick DeArmond and Amanda DiGoregorio. Analytical and Bioanalytical Chemistry, May 2013. Acrylamide is a chemical used across a variety of industries and has reportedly been used in hydraulic fracturing fluids as a friction reducer. The EPA has classified acrylamide as a probable human carcinogen, and short- and long-term oral exposures to acrylamide have resulted in damage to the nervous system. Because of these potential hazards and its reported use in hydraulic fracturing fluids, the EPA has developed a method for quantifying acrylamide in environmental samples, including drinking water, ground water, and flowback and produced water, as part of its study of the potential impact of hydraulic fracturing for oil and gas on drinking water resources.
Webcast Recap Nationalism is often viewed as harmful to democratization and democracy in general. Liberal scholars often insist that nationalism is a major irritant in international conflicts, and even a cause of war. According to Nobuo Fukuda, the Wilson Center’s Japan Scholar and former Jakarta bureau chief at the Asahi Shimbun, this perception is not always accurate. In postwar Asia, nationalism has also played a positive role in democratic transitions, helping to bring about social and economic reforms. Nationalistic sentiment has been strong in young countries, but violent conflict has rarely broken out between nations of the region.  Fukuda first pointed to Indonesia, which has been transformed from an authoritarian state to a democracy in the past decade, and noted that it can be seen as a success story of open nationalism. Most important for Indonesia was the limited form of devolution of political and financial power that began after the fall of the Suharto regime in 1999. For example, while there were calls for greater regional autonomy throughout the country, and mayors and local candidates were given more power, the nation’s cabinet rejected federalism outright, knowing that too much local autonomy would lead to the fragmentation of the state.  Prioritization of national unity has continued to be a major factor in Indonesian identity politics. In the 2005 elections, for example, it was not uncommon to see candidates from different backgrounds and religions running on joint tickets in order to appeal to a sense of united Indonesian, rather than religious, identity. Debate over the electoral system, where liberal nationalists have supported proportional representation as a reflection of the diversity of Indonesia, is also characteristic of the commitment to open nationalism. Fukuda noted that because of this commitment to inclusive national unity, most of the violent domestic conflicts of the early post-Suharto years have been extinguished. 
In India, open nationalism was also espoused by the nation's pre-eminent political and ideological leader, Mohandas Gandhi. However, Fukuda noted that despite Gandhi's teachings and his image today as a symbol of inclusiveness, India now stands as an example of the negative consequences for democracy of divisive, closed politics. Gandhi's commitment to non-violence, secularism, pluralism, and the rule of law—all key elements of open nationalism—has been eroded by exclusivist, ethno-religious nationalism that has exploited India's democratization. Unlike Indonesia, where a unified state was constructed out of difference, Hindu nationalists have taken advantage of decentralization to become highly visible in the political world. For the members of Hindu nationalist groups such as the Bharatiya Janata Party, which rose from relative obscurity to govern India between 1998 and 2004, or the National Volunteer Organization (Rashtriya Swayamsevak Sangh, or RSS), the largest existing private "paramilitary" body in the world, Hinduism is above all a symbol of Indian national identity. However, Hinduism is used by such groups to exclude others, as violence between ethnic groups in the nation and international friction between Pakistani Muslims and Indian Hindus demonstrate. Indeed, open nationalist movements are not always successful. Fukuda also cited the 1989 pro-democracy protests in China as a significant, if failed, form of open nationalism. The students involved regarded themselves as the embodiment of Chinese patriotism and the vanguard of the legal and political reforms needed to end China's stagnation. While student demonstrators were committed to universal ideals, their stated goal was to promote China's strength and dignity through the implementation of democratic political reforms that guaranteed human rights and equality before the law. Theirs was a truly national exercise.
Despite the failure of open nationalism to take hold in China or in India, Fukuda sees it as an attractive concept for Japan, where nationalism in general has been discredited by its association with Japan's history of colonization and war. Japan's dependence on the United States for defense also meant that it could afford to take a low posture in foreign affairs and to focus national efforts on rebuilding and expanding its economic base rather than on debating the appropriate position of Japan in relation to its neighbors. Recession in the 1990s and social and political malaise since then have given rise to feelings of frustration in Japan, and some commentators believe that a persistent sense of loss may be partly explained by the lack of progressive open nationalism. Fukuda thinks that Japan should explore a new national identity in Asia, the region it belongs to geographically but in which it has never been accepted as a full member. Japan might do well to embrace open nationalism as the theoretical backbone for its foreign policy as the earthquake-stricken nation positions itself in 21st-century Asia. Japan needs to set an example by creating a national discourse that is focused on the notions of human rights and democracy that it has successfully adopted, and by opening its borders to foreign workers. Part of this new openness should be to accept and explore the open nationalisms of the region in order to forge a sense of community between nations.
By Bryce Wakefield
Robert M. Hathaway, Director, Asia Program
Page last updated at 16:08 GMT (17:08 UK), Monday, 16 July 2007

Nuclear scare after Japan quake

A strong earthquake in central Japan has damaged a large nuclear power plant, causing a leak of radioactive material, officials at the plant have said. Clouds of smoke poured from the Kashiwazaki nuclear power plant, where a fire was put out after several hours. Reactors at the plant automatically shut down during the magnitude 6.8 quake.

'Vertical jolt'

The seven deaths occurred in the city of Kashiwazaki. More than 800 people were reported injured, most with broken bones, cuts and abrasions from collapsing buildings and falling objects. "First there was a sharp vertical jolt and then it shook sideways for a long time and I couldn't stand up," said Kashiwazaki teacher Harumi Mikami, who was at her school when the earthquake struck at 1013 (0113 GMT). "Tall shelves fell over and things flew around," she told Reuters news agency. More than 7,000 people were evacuated from their homes as aftershocks of up to magnitude 5.8 shook the area. No damage was reported from a second earthquake deep under the sea off Kyoto, but Tokyo residents said they felt buildings shake.

Safety fears

Japanese Prime Minister Shinzo Abe broke off from election campaigning to visit Kashiwazaki. He promised to "make every effort towards rescue and also to restore services such as gas and electricity". The safety of Japan's nuclear installations, which supply much of the country's power, has come under the spotlight in recent years after a string of accidents and mishaps. Japan lies in one of the world's most earthquake-prone regions, and the ability of some reactors to withstand a strong tremor has been questioned. Three years ago an earthquake in the Niigata area killed 65 people. In 1995, a magnitude 7.3 tremor hit the city of Kobe, killing more than 6,400 people.
Discrete Mathematics and Its Applications, 6th Edition. Kenneth H. Rosen. ISBN-13: 978-0-07-322972-0.

Problem: Estimate the number of comparisons used by the algorithm described in Exercise 16. Let {a_n} be a sequence of real numbers. The forward differences of this sequence are defined recursively as follows: the first forward difference is ∆a_n = a_{n+1} − a_n; the (k + 1)st forward difference ∆^{k+1} a_n is obtained from ∆^k a_n by ∆^{k+1} a_n = ∆^k a_{n+1} − ∆^k a_n.

Step 1 of 4: Consider a recurrence relation of the form f(n) = a·f(n/b) + c, where f is an increasing function and n is divisible by b (here, b = 2). Use the fact that the number of comparisons — that is, the size of a function f that satisfies a recurrence relation of this form — is O(n^{log_b a}) when a > 1 and O(log n) when a = 1. Here, f is an increasing function, n = b^k for a positive integer k, b is an integer greater than 1, and c is a positive real number.
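The recursive definition of forward differences translates directly into code. The following Python sketch is not from the textbook — the function and variable names are my own — but it implements exactly the recursion ∆^{k+1} a_n = ∆^k a_{n+1} − ∆^k a_n for a finite sequence:

```python
def forward_difference(a, k):
    """Return the k-th forward difference of the finite sequence a.

    Base case: the 0-th difference is the sequence itself.
    Recursive step, as in the definition above:
    the (k+1)-st difference is the first difference of the k-th.
    Each application shortens the sequence by one term.
    """
    if k == 0:
        return list(a)
    prev = forward_difference(a, k - 1)
    return [prev[n + 1] - prev[n] for n in range(len(prev) - 1)]

# For a_n = n^2 the first difference gives the odd numbers and the
# second difference is constant, as expected for a quadratic:
squares = [n * n for n in range(6)]    # [0, 1, 4, 9, 16, 25]
print(forward_difference(squares, 1))  # [1, 3, 5, 7, 9]
print(forward_difference(squares, 2))  # [2, 2, 2, 2]
```

Counting the subtractions performed here — roughly k passes over a sequence of length len(a) — is the same style of operation count the exercise asks you to estimate for comparisons.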
Definitions for halfway ˈhæfˈweɪ, ˈhɑf-
This page provides all possible meanings and translations of the word halfway.

Princeton's WordNet
1. center(a), halfway, middle(a), midway (adj): equally distant from the extremes
2. halfway (adj): at a point midway between two extremes ("at the halfway mark")
3. halfway (adverb): including only half or a portion ("halfway measures")
4. halfway, midway (adverb): at half the distance; at the middle ("he was halfway down the ladder when he fell")
5. halfway (adverb): half of the way between two points; midway

Webster Dictionary
1. Halfway (adverb)
2. Halfway (adj): equally distant from the extremes; situated at an intermediate point; midway

Halfway
Halfway is a small town in Baker County, Oregon, United States. The town took its name from the location of its post office, on the Alexander Stalker ranch, halfway between Pine and Cornucopia. The population was 288 at the 2010 census. The town made history in December 1999, when it agreed to rename itself after an e-commerce company as a publicity stunt, becoming the first city in the United States to rename itself for a dot-com company. The proclamation was in name only, so the city did not legally change its name.
Topic Overview

What is endocarditis? Endocarditis is caused by bacteria (or, in rare cases, by fungi) that enter the bloodstream and settle on the inside of the heart, usually on the heart valves. Bacteria can invade your bloodstream in many ways, including during some dental, surgical, and medical procedures. If you don't take care of your teeth, having your teeth cleaned or even brushing your teeth can cause bacteria to enter the bloodstream.

What increases the risk for endocarditis? If you have a normal heart, you have a low risk for endocarditis. But if you have a problem with your heart that affects normal blood flow through the heart, it is more likely that bacteria or fungi will attach to heart tissue. Some health care procedures or implanted devices may also raise your risk for endocarditis, because they can let bacteria or fungi enter your bloodstream. Not all heart problems give you a higher risk of endocarditis.

What can you do if you are at risk for endocarditis? Procedures that may require antibiotics include:
• Certain dental work or dental surgery.
• Lung surgery.
• Surgery on infected skin, bone, or muscle tissue.
• Certain medical procedures, such as a biopsy.

How is endocarditis diagnosed? First your doctor will ask about your medical history and do a physical exam. If your doctor thinks that you may have endocarditis, he or she will check for signs of the infection, such as a heart murmur, an enlarged spleen, skin rashes, and bleeding under your nails. Blood cultures will be done to check for bacteria in your bloodstream. And other tests, such as an echocardiogram, may be done to check your heart function and look at your heart valves.
How is it treated? It is important to treat endocarditis as soon as possible to avoid permanent damage to the heart muscle or heart valves. Antibiotics given through a vein (intravenously, or by IV) are the usual treatment for endocarditis. If your heart valves are damaged by the infection or if you have an artificial heart valve, surgery to repair or replace the valve may be needed. You may also need surgery if your endocarditis is caused by a fungus. If it is not treated, endocarditis can be fatal.
Basketball Questions - All Grades

You can create printable tests and worksheets from these Basketball questions! Select one or more questions using the checkboxes above each question. Then click the "add selected questions to a test" button before moving to another page.

Grade 10: Where was the game of basketball invented?
1. Dallas, TX
2. New York, NY
3. Springfield, MA
4. Bismarck, ND
5. Trenton, NJ

Grade 2: Which of the following is NOT part of a "Good Pass" between 2 players (the "Passer" and the "Receiver")?
1. Step into Pass
2. Give Passer a Good Target
3. Step toward the Passer to Receive the Pass
4. All of the Above

Grade 11: What is the penalty for committing a violation?
1. 2 free throws are awarded
2. A coach is assessed a technical foul
3. The player is given a technical foul
4. The other team gains possession of the ball

Grade 11: What was the first object used as a hoop?
1. Buckets
2. Garbage can
3. Metal hoops
4. Old hat
5. Peach Basket
Why Iron is important to the Dancer
By Nicolette Braybrook LISTD (Dip) BSc (Chem) Mast. Nut. & Diet.

Iron is a mineral found in food, which is essential in keeping the body healthy. In addition, iron is necessary for the dancer for maximum energy and peak performance. Iron has several important functions within the body. It is needed to form an important part of red blood cells called haemoglobin, which transports oxygen from the lungs to the rest of the body. It also forms part of a muscle protein called myoglobin, which provides oxygen to the muscles during strenuous physical activity. Iron can also strengthen the immune system of the body, increasing resistance to colds, infections and disease. Therefore, a marginal or inadequate iron intake can impair a dancer's exercise performance by decreasing the amount of oxygen delivered to the muscles and weakening muscle contractions and strength. People at risk of developing iron deficiency anaemia include infants, children under two, teenage girls, pregnant women, pre-menopausal women, vegetarians, the elderly and female endurance athletes. A poor dietary intake and increased losses due to menstruation are the primary reasons for the development of a deficiency. Menstruating females require almost twice as much iron as their male counterparts to replace monthly losses. A lack of iron can leave you feeling tired, rundown and prone to infections. As the condition worsens, more dramatic symptoms may develop, such as severe fatigue, cramps, headaches, shortness of breath, poor stamina and feeling the cold more than usual. If you suspect that you have low iron levels, consult your doctor to arrange a blood test. All dancers should be consuming iron rich foods each day in their diet. Lean red meat is the best source of iron because it contains 'haem' iron, which is well absorbed by the body.
Red meat has twice as much iron as chicken and three times as much iron as fish. Generally, the darker the colour of the meat, the more iron it contains. 'Non haem' iron found in breakfast cereals, wholegrain breads, legumes, dark green leafy vegetables, seeds, nuts and eggs is not as well absorbed as the iron found in meat. With increasing numbers of dancers selecting vegetarian diets, it is important that they seek the advice of a dietician to ensure their diet contains not only adequate amounts of iron, but other nutrients that may be at risk, such as protein, calcium, zinc and Vitamin B12. Vegetarians, along with people who consume minimal amounts of red meat, can in fact obtain sufficient iron from their food with the help of Vitamin C, which can enhance the body's uptake of iron. Vitamin C reacts with 'non haem' iron, making it easier for the body to absorb. For example, drink a glass of orange juice at breakfast along with a wholegrain cereal, or add a tomato or capsicum to a legume/vegetable stir-fry at dinner. In contrast to orange juice, tea and coffee can suppress the uptake of iron from these 'non-haem' sources. Therefore, it is best to drink tea and coffee between meals rather than with them. Supplements should only be taken on your doctor's and/or dietician's advice. High intakes of iron in the form of supplements can cause iron overload in some people. Symptoms may include diarrhoea, constipation, stomach discomfort and nausea. Iron supplements have also been shown to interfere with the absorption of other minerals, such as calcium. In conclusion, to treat or prevent the onset of iron deficiency, it is important that the dancer attempts to:
• Eat more iron rich foods such as lean red meat, skinless chicken, fish, eggs and legumes.
• Include Vitamin C rich foods or drinks at each meal.
• Eat more wholegrain breads and iron-fortified cereals.
• Drink tea and coffee between meals rather than with them.
• If vegetarian or restricting certain food groups from your diet, obtain further dietary advice to suit individual requirements. Nicolette Aisen (nee Braybrook) is a privately consulting dietician focussing on the dance field. She is also the owner and director of a dance school in Melbourne, Australia. You can contact her at: 34 Croxton Drive Toolern Downs Melton VIC 3337 Tel: + 61 3 97469823 e-mail: nicobret@mail.austasia.net
kottke.org posts about quantum mechanics

Study: quantum entanglement is real (Oct 22 2015)
The scientists who conducted a study at the Delft University of Technology in the Netherlands say they have proved that quantum entanglement is a real effect. The Delft researchers were able to entangle two electrons separated by a distance of 1.3 kilometers, slightly less than a mile, and then share information between them. Physicists use the term "entanglement" to refer to pairs of particles that are generated in such a way that they cannot be described independently. The scientists placed two diamonds on opposite sides of the Delft University campus, 1.3 kilometers apart. Each diamond contained a tiny trap for single electrons, which have a magnetic property called a "spin." Pulses of microwave and laser energy are then used to entangle and measure the "spin" of the electrons. The distance -- with detectors set on opposite sides of the campus -- ensured that information could not be exchanged by conventional means within the time it takes to do the measurement. The study, published in Nature, has yet to be verified, but still, exciting!

Everything is made from something (Aug 21 2015)
In A Children's Picture-book Introduction to Quantum Field Theory, Brian Skinner explains quantum field theory -- "the deepest and most intimidating set of ideas in graduate-level theoretical physics" -- as if you and I are five-year-old children. The first step in creating a picture of a field is deciding how to imagine what the field is made of. Keep in mind, of course, that the following picture is mostly just an artistic device. The real fundamental fields of nature aren't really made of physical things (as far as we can tell); physical things are made of them. But, as is common in science, the analogy is surprisingly instructive. So let's imagine, to start with, a ball at the end of a spring.
(via @robinsloan)

Quantum mechanics made relatively simple (Nov 13 2013)
Hans Bethe was a giant in the field of nuclear physics. He rubbed shoulders with Einstein, Bohr, and Pauli, was head of the Theoretical Division of the US atomic bomb project, and was awarded a Nobel Prize. In 1999, at the age of 93, Bethe gave a series of three lectures to the residents of his retirement community near Cornell University, where he had taught since 1935. Video of the lectures is available on the Cornell website. In the first lecture, Bethe covers the development of the "old quantum theory", covering the work of Max Planck and Niels Bohr. In the second and third lectures, he relates how modern quantum mechanics was developed, with a healthy amount of personal recollection along the way: Professor Bethe offers personal anecdotes about many of the famous names commonly associated with quantum physics, including Bohr, Heisenberg, Born, Pauli, de Broglie, Schrödinger, and Dirac. Without a doubt, this is the most high-power presentation ever made at a retirement home. (via @stevenstrogatz)

Is Google's quantum computer even quantum? (Nov 12 2013)
Google and NASA recently bought a D-Wave quantum computer. But according to a piece by Sophie Bushwick published on the Physics Buzz Blog, there isn't scientific consensus on whether the computer is actually using quantum effects to calculate. At any given moment, each of a classical computer's bits can only be in an "on" or an "off" state. They exist inside conventional electronic circuits, which follow the 19th-century rules of classical physics. A qubit, on the other hand, can be created with an electron, or inside a superconducting loop. Obeying the counterintuitive logic of quantum mechanics, a qubit can act as if it's "on" and "off" simultaneously. It can also become tightly linked to the state of its fellow qubits, a situation called entanglement.
These are two of the unusual properties that enable quantum computers to test multiple solutions at the same time. (via fine structure)

Google's new quantum computer (Oct 16 2013)
Google's got themselves a quantum computer (they're sharing it with NASA) and they made a little video about it: I'm sure that Hartmut is a smart guy and all, but he's got a promising career as an Arnold Schwarzenegger impersonator hanging out there if the whole Google thing doesn't work out.

Wipeout track using quantum levitation (Jan 03 2012)

Quantum levitation! (Oct 18 2011)
What the what? This video gives a little more explanation into the effect at work here (superconductivity + quantum trapping of the magnetic field in quantum flux tubes) and an awesome demonstration of a crude rail system. You can almost hear your tiny mind explode when the "train" goes upside-down. Wingardium Leviosa! (via stellar)

DNA and quantum entanglement (Jul 16 2010)

Quantum mechanics just got REAL (Mar 18 2010)
The opening paragraph of the article says it all: Wait, what? Like, WHAT? Ok, let's start over: Andrew Cleland at the University of California, Santa Barbara, and his team cooled a tiny metal paddle until it reached its quantum mechanical 'ground state' -- the lowest-energy state permitted by quantum mechanics. They then used the weird rules of quantum mechanics to simultaneously set the paddle moving while leaving it standing still. The fuck? In my day, we were taught, with the help of non-graphing calculators and paper notebooks, that quantum mechanics was a lot of wand-wavey nonsense about wave/particle duality that you never had to worry about because it belonged to some magical tiny land that no one visits with their actual eyes. This...this is straight-up magic.
[Cue Final Countdown]

Nature's quantum computers (Feb 11 2010) (via mr)

The unobserved tree makes noise (Mar 09 2009)

Faster-than-light communication (Aug 18 2008)
In a Swiss experiment, two entangled photons 18 km away from each other were able to communicate with each other almost instantaneously. On the basis of their measurements, the team concluded that if the photons had communicated, they must have done so at least 100,000 times faster than the speed of light -- something nearly all physicists thought would be impossible. In other words, these photons cannot know about each other through any sort of normal exchange of information. Update: Hrm, the link above scampered behind Nature's paywall. Here's a post on the Scientific American blog instead.

Physicists at the University of Washington are... (Nov 17 2006)
Physicists at the University of Washington are hoping to use entangled photons to send information back in time. "Here's where it gets weird."

Researching quantum honeybees (Jun 14 2005)

Brian Greene on Albert Einstein's miracle year... (Apr 08 2005)
Brian Greene on Albert Einstein's miracle year, his discovery of the photoelectric effect, and his uneasiness with quantum mechanics.
How do you verify whether a given email address is real or fake? The obvious solution is to send a test mail to that address: if your message doesn't bounce, it is safe to assume that the address is real.

Ping an Email Address to Validate it!

Step 1. Enable telnet in Windows, or use the PuTTY tool. If you are on a Mac, open the iTerm app.

Step 2. At the command prompt, type the nslookup command: nslookup -type=mx
This nslookup command will query the name servers for that domain. Since we have specified the type as MX, the command will extract and list the MX records of the email domain. Replace the domain in the command with the domain of the email address that you are trying to verify.

MX preference=30, exchanger =
MX preference=20, exchanger =
MX preference=5, exchanger =
MX preference=10, exchanger =
MX preference=40, exchanger =

Step 3. As you may have noticed in the nslookup output, it is not uncommon to have multiple MX records for a domain. Pick any one of the servers listed in the MX records, perhaps the one with the lowest preference number, and "pretend" to send a test message to that server from your computer. To do that, go to the command prompt window and type the following commands in the listed sequence:

3a: Connect to the mail server on port 25: telnet 25
3b: Say hello to the other server
3c: Identify yourself with some fictitious sender address and name the recipient you want to verify: mail from:<> then rcpt to:<>

The server's reply to the rcpt to command tells you the result:
• The email account that you tried to reach does not exist.
• The email account that you tried to reach is disabled.
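The telnet dialogue above can also be scripted. The sketch below uses Python's standard-library smtplib to replay the same MAIL FROM / RCPT TO steps. The mail-server hostname and addresses are placeholders you must supply (e.g. from the MX lookup in Step 2), and the code-to-verdict mapping is the conventional SMTP interpretation (250/251 accepted, 550 mailbox unavailable), not a guarantee — many servers accept every recipient or greylist probes:

```python
import smtplib

def interpret_rcpt(code):
    """Rough verdict from the SMTP reply code to RCPT TO.
    250/251 mean the server accepted the recipient; 550 means the
    mailbox is unavailable; anything else (greylisting, catch-all
    policies, rate limits) is treated as inconclusive."""
    if code in (250, 251):
        return "accepted"
    if code == 550:
        return "rejected"
    return "inconclusive"

def probe_mailbox(mx_host, address, helo_name="example.org"):
    """Replay steps 3a-3c: connect on port 25, say hello, give a
    fictitious sender, then ask about the address -- without ever
    sending DATA, so no actual mail goes out."""
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.helo(helo_name)
        smtp.mail("probe@" + helo_name)   # fictitious sender, as in step 3c
        code, _reply = smtp.rcpt(address)
        return interpret_rcpt(code)
```

For example, `probe_mailbox("", "")` (with a real MX host and address filled in) returns one of the three verdict strings; treat "inconclusive" as exactly that.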
twit‧ter, verb [intransitive]. Topic: BIRDS. Date: 1300-1400. Origin: from the sound.
1. if a bird twitters, it makes a lot of short high sounds
2. to talk about unimportant and silly things, usually very quickly and nervously in a high voice
5 Food Rules for Fighting Cancer

These new guidelines for fighting cancer are worth paying attention to. Change up your diet accordingly and you could save your own life as a result. When it comes down to it, you really can't beat the health benefits of a plant-based diet. Fruits and veggies, especially leafy greens, help reduce overall cancer risk. A high intake of cruciferous vegetables, such as broccoli, kale, and cabbage, is associated with an 18% reduced risk of colorectal cancer and reduced risk of lung and stomach cancers. Also, including tomato products regularly in your diet has been shown to reduce risk of gastric cancer by 27%. Furthermore, garlic and other allium vegetables, such as onions, significantly reduce risk for gastric cancer. Researchers found that a Western diet (high amounts of meat and fat with minimal amounts of fruits and vegetables) doubles the risk for gastric cancer.
Find a Passaic Park, NJ Precalculus Tutor

Daniel B. ...Trigonometry is the mathematical study of the relations of angles to side lengths. Trigonometry is commonly taught with Algebra II, which is a subject many high school students already find difficult. It also arises on the New York Regents Exams. 24 subjects, including precalculus, chemistry, calculus, algebra 1. Tarrytown, NY.

11 subjects, including precalculus, calculus, geometry, trigonometry. Paramus, NJ.

Cheng H. ...For this type of problem, students often don't know how to eliminate extra, unhelpful words, so most can't solve it correctly and end up guessing at random. The most important thing is for students to find the key words in each long sentence. 16 subjects, including precalculus, reading, geometry, Chinese. Flushing, NY.

Brian R. ...I have been working in Manhattan as an analyst and would like to help students part time to help with the costs of my newborn. I scored in the 700s on my math GRE exam. I have taken undergraduate and graduate level economics coursework. 4 subjects, including precalculus, algebra 1, economics, SAT math. East Rutherford, NJ.

24 subjects, including precalculus, reading, calculus, geometry. Brooklyn, NY.
river [riv-er] /ˈrɪv ər/
a similar stream of something other than water: a river of lava; a river of ice.
any abundant stream or copious flow; outpouring: rivers of tears; rivers of words.
(initial capital letter) Astronomy. the constellation Eridanus.
sell down the river, to betray; desert; mislead: to sell one's friends down the river.
up the river, Slang. 1. to prison: to be sent up the river for a bank robbery. 2. in prison.
Origin of river: 1250-1300; Middle English < Old French rivere, riviere < Vulgar Latin *rīpāria, noun use of feminine of Latin rīpārius riparian. Related forms: riverless, adjective; riverlike, adjective. Can be confused: brook, creek, river, stream. ( Unabridged)

British Dictionary definitions for river
1. a large natural stream of fresh water flowing along a definite course, usually into the sea, being fed by tributary streams
2. (as modifier): river traffic, a river basin
3. (in combination): riverside, riverbed. Related adjectives: fluvial, potamic.
any abundant stream or flow: a river of blood.
(informal) sell down the river, to deceive or betray.
(poker, slang) the river, the fifth and final community card to be dealt in a round of Texas hold 'em.
Derived Forms: riverless, adjective.
Word Origin: C13: from Old French riviere, from Latin rīpārius of a river bank, from rīpa bank. (Collins English Dictionary - Complete & Unabridged 2012 Digital Edition)

Word Origin and History for river
Early 13c., from Anglo-French rivere, Old French riviere "river, riverside, river bank" (12c.), from Vulgar Latin *riparia "riverbank, seashore, river" (cf. Spanish ribera, Italian riviera), noun use of fem. of Latin riparius "of a riverbank" (see riparian). Generalized sense of "a copious flow" of anything is from late 14c. The Old English word was ea "river," cognate with Gothic ahwa, Latin aqua (see aqua-).
Romanic cognate words tend to retain the sense "river bank" as the main one, or else the secondary Latin sense "coast of the sea" (cf. Riviera). U.S. slang phrase up the river "in prison" (1891) is originally in reference to Sing Sing prison, which was literally "up the (Hudson) river" from New York City. Phrase down the river "done for, finished" perhaps echoes the sense in sell down the river (1851), originally of troublesome slaves, sold from the Upper South to the harsher cotton plantations of the Deep South.
(Online Etymology Dictionary, © 2010 Douglas Harper)

sell down the river in Science
A wide, natural stream of fresh water that flows into an ocean or other large body of water and is usually fed by smaller streams, called tributaries, that enter it along its course. A river and its tributaries form a drainage basin, or watershed, that collects the runoff throughout the region and channels it, along with erosional sediments, toward the river. The sediments are typically deposited most heavily along the river's lower course, forming floodplains along its banks and a delta at its mouth.
(The American Heritage® Science Dictionary)

Slang definitions & phrases for sell down the river
(Copyright © 2007 by HarperCollins Publishers)

sell down the river in the Bible
(1.) Heb. 'aphik, properly the channel or ravine that holds water (2 Sam. 22:16), translated "brook," "river," "stream," but not necessarily a perennial stream (Ezek. 6:3; 31:12; 32:6; 34:13).
(2.) Heb. nahal, in winter a "torrent," in summer a "wady" or valley (Gen. 32:23; Deut. 2:24; 3:16; Isa. 30:28; Lam. 2:18; Ezek. 47:9). These winter torrents sometimes come down with great suddenness and with desolating force. A distinguished traveller thus describes his experience in this matter: "I was encamped in Wady Feiran, near the base of Jebel Serbal, when a tremendous thunderstorm burst upon us.
After little more than an hour's rain, the water rose so rapidly in the previously dry wady that I had to run for my life, and with great difficulty succeeded in saving my tent and goods; my boots, which I had not time to pick up, were washed away. In less than two hours a dry desert wady upwards of 300 yards broad was turned into a foaming torrent from 8 to 10 feet deep, roaring and tearing down and bearing everything upon it, tangled masses of tamarisks, hundreds of beautiful palm-trees, scores of sheep and goats, camels and donkeys, and even men, women, and children, for a whole encampment of Arabs was washed away a few miles above me. The storm commenced at five in the evening; at half-past nine the waters were rapidly subsiding, and it was evident that the flood had spent its force." (Comp. Matt. 7:27; Luke 6:49.)
(3.) Nahar, a "river" continuous and full, a perennial stream, as the Jordan, the Euphrates (Gen. 2:10; 15:18; Deut. 1:7; Ps. 66:6; Ezek. 10:15).
(4.) Tel'alah, a conduit, or water-course (1 Kings 18:32; 2 Kings 18:17; 20:20; Job 38:25; Ezek. 31:4).
(5.) Peleg, properly "waters divided", i.e., streams divided, throughout the land (Ps. 1:3); "the rivers [i.e., 'divisions'] of waters" (Job 20:17; 29:6; Prov. 5:16).
(6.) Ye'or, i.e., "great river", probably from an Egyptian word (Aur), commonly applied to the Nile (Gen. 41:1-3), but also to other rivers (Job 28:10; Isa. 33:21).
(7.) Yubhal, "a river" (Jer. 17:8), a full flowing stream.
(8.) 'Ubhal, "a river" (Dan. 8:2).
(Easton's 1897 Bible Dictionary)

Idioms and Phrases with sell down the river
sell down the river: Betray, as in They kept the merger a secret until the last minute, so the employees who were laid off felt they'd been sold down the river. This expression, dating from the mid-1800s, alludes to slaves being sold down the Mississippi River to work as laborers on cotton plantations. Its figurative use dates from the late 1800s.
(The American Heritage® Idioms Dictionary)
Skin Problems, ASCO's curriculum
This section has been reviewed and approved by the PLWC Editorial Board, 05/05

The skin is an organ that protects the body from infection. Because skin is on the outside of the body and visible to others, many people have difficulty coping with skin problems. In addition, skin problems can be painful because the skin contains many nerves. As with other side effects, prevention or early treatment of skin problems is best. With established skin problems, treatment and wound care can often improve pain symptoms and quality of life. With most skin problems, the doctor must decide whether healing the wound is a realistic goal. If it is not, then containing the wound and keeping the patient as comfortable and pain-free as possible becomes the goal. Some common skin problems experienced by people with cancer are listed below.

Chemotherapy extravasation. This is the term used to describe chemotherapy drugs leaking out of a vein. The drugs can damage the skin where the intravenous (IV) tube was placed or the tissue around the area they touch, causing pain or burning. Pain or burning during chemotherapy may not be normal, and you should tell the nurse or doctor right away. If an extravasation occurs during treatment, the infusion should be stopped and the wound cleaned. The nurses and doctors will care for the wound and instruct you on how to care for it at home.

Radiation-induced skin problems. When radiation treatment kills cancer cells, it also kills some healthy cells. When it kills cells in the skin, it can cause the skin to peel. Damage to the skin from radiation treatment can start after one or two weeks of treatment and usually resolves a few weeks after treatment is finished. If skin damage from radiation treatment becomes a problem, the doctor may change the dose or schedule of treatments.

Necrotic wound. This is a wound with dead skin or tissue around it.
A necrotic wound cannot heal while the dead tissue is still present, so removing the dead tissue is the first step in treating it. This process is called debridement and may involve surgery, enzymes or gels, or other methods.

Pressure ulcers (bed sores). Pressure ulcers are sores that form where there is constant pressure on one area of the body. They often form on the heels of the feet or the sacrum (tailbone). Ulcers are less likely to form on parts of the body where there is a thicker layer of fat. For patients who are confined to a bed, an air or water mattress overlay may help prevent ulcers. Low-air-loss beds or air-fluidized beds may also help. For pressure ulcers that are not expected to heal, providing comfort and pain control and preventing the ulcers from getting worse become the main goals.

Malignant wounds. These wounds form when cancer breaks through the skin. A malignant wound can be caused by cancer from another part of the body or by skin cancer. These wounds may have a foul odor, bleed, and be very painful. Malignant wounds carry a high risk of infection. They may also exude, or secrete, a large amount of fluid or blood. The odor from a malignant wound can be overpowering; it can be managed by placing a competing odor or an odor absorber in the room, such as cat litter, charcoal, or coffee.

Pruritus. This is the medical term for itching. Pruritus in people with cancer is most often caused by leukemia, lymphoma, myeloma, or other cancers. Kidney or liver failure, thyroid problems, a drug reaction, dry skin, hives, and other skin infections can also cause itching. Pruritus caused by an irritant, such as a drug, can be treated by stopping the use of that drug. Moisturizers, antihistamines, steroid medications, and cooling or painkilling creams or gels may also help relieve itching.
Hotelling's Theory

DEFINITION of 'Hotelling's Theory'
This theory proposes that owners of non-renewable resources will produce their product only if it yields more than the financial instruments available to them in the markets, specifically bonds and other interest-bearing securities. The theory assumes that markets are efficient and that the owners of the non-renewable resources are motivated by profit. Economists use Hotelling's theory to attempt to predict the price of oil and other non-renewable resources, based on prevailing interest rates.

BREAKING DOWN 'Hotelling's Theory'
The theory states that if oil prices do not rise at least as fast as the prevailing interest rate, there is no reason to restrict supply: owners do better extracting now and investing the proceeds. If, conversely, oil prices are expected to increase faster than interest rates, producers are better off not bringing the oil out of the ground.
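The extract-or-wait logic in the "Breaking Down" paragraph can be sketched numerically. This is an illustrative sketch only, not part of the source definition: the function names, the constant interest rate, and the simple comparison of growth rate against interest rate are my assumptions.

```python
def hotelling_price_path(p0, r, years):
    """Expected net price path under Hotelling's rule: the price of a
    non-renewable resource must grow at the prevailing interest rate r,
    or owners would do better selling now and buying bonds."""
    return [p0 * (1 + r) ** t for t in range(years + 1)]


def produce_now(expected_price_growth, interest_rate):
    """Supply the resource today only if holding it in the ground
    (appreciating at expected_price_growth) pays no better than
    selling now and investing the proceeds at interest_rate."""
    return expected_price_growth <= interest_rate


# With bonds paying 5%: oil expected to appreciate at only 3% gets
# pumped today, while oil expected to appreciate at 8% stays in the
# ground, restricting supply.
path = hotelling_price_path(100.0, r=0.05, years=3)
print(path)                     # grows ~5% per year from 100
print(produce_now(0.03, 0.05))  # True: sell now, buy bonds
print(produce_now(0.08, 0.05))  # False: restrict supply
```

In equilibrium the two choices pay the same, which is why the theory predicts resource prices tracking interest rates.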
Into the abyss: Scientists explore one of Earth's deepest ocean trenches What lives in the deepest part of the ocean--the abyss? A team of researchers funded by the National Science Foundation (NSF) will use the world's only full-ocean-depth, hybrid, remotely-operated vehicle, Nereus, and other advanced technology to find out. They will explore the Kermadec Trench at the bottom of the Pacific Ocean. Hubble pictures deepest-ever view of space Astronomers have pieced together the deepest-ever view of the universe, peering back more than thirteen billion years. Chinese sub dives over 5,000 meters Following a successful test dive in the Pacific, Chinese scientists say they're on course to complete one of the world's deepest ever dives next year. Chikyu drill prepares to pierce Earth's mantle The world's deepest drill is currently being prepped to pierce the Earth's enigmatic mantle.
Musings on ancient Hebrew versus Arabic I am not an expert in Semitic languages – but I recognize depth of knowledge when I see it.  A commentator, who goes by the name of “Abu Rashid”, wrote in another topic: His reference to the third century B.C.E. is, of course, based on the famous translation of the Hebrew Bible into Greek: the Septuagint.  There we find clear indications of the double ‘Ayin/Ghayin (ע).  For example, Gaza, which is  ‘Aza in modern Hebrew and Gomorrah, which is ‘Amorah in modern Hebrew.  Similarly, we find indications of the double Heth/Kheth (ח) there.  Both double versions are found in Arabic to this day.  Another double letter, found in Arabic but not in Hebrew, is the double Sad/Dad (ص‎, ض).  I am not aware of any vestiges of this double letter in Hebrew, even from ancient times. These ancient double letters seem to be in conflict with some of the current double letters of Hebrew; our current soft Ghimel (ג) seems to be too close to the ancient Ghayin for coexistence.  Our current soft Khaf (כ) seems, likewise, to be too close to the ancient Kheth for coexistence. The Book of Yesirah (probably of ancient origin) states (chapter 3): Twenty two letters, the foundation of three nations, seven doubles, and twelve simple (letters)… (The) seven doubles are Beth, Gimal, Daleth, Kaf, Pe, Resh and Taw. The identities of almost all these double letters are well established throughout the Jewish diaspora (with the exception of the Ashkenazi one, which lost the double Daleth long ago, and the double Gimal only about three hundred years ago).  The double Resh (ר) is the only one that no longer has any living tradition of its nature.  There are only a handful of instances where the Resh appears with a dagesh in Scripture.  For those of you familiar with Arabic, the dagesh (in its “strong” form) is the equivalent of the Shadda.  
Among Yemeni Jews, these isolated dageshed Resh’s are actually pronounced differently: ahothi ra’yathi yonathi thammathi shaRRoshi nimla tal… (Song of Songs 5:2) … with a strong emphasis on the trill.  However, I am fairly certain that it is not those isolated dageshed Resh’s that the Book of Yesirah is referring to.  Instead, my hunch is that the soft Resh was pronounced much like the American “R”, while the hard Resh was flapped or trilled.  I noticed this double pronunciation among Iraqi Jews – though it does not seem to be an officially recognized distinction among them.  My late friend Ben Siyyon Cohen was of the opinion, for a while, that one Resh was supposed to be said near the teeth (as in Arabic and other Semitic languages) while the other with the glottis, like most Israelis do now.  I convinced him of his error – fortunately before he wrote his books.  The glottal Resh is an abomination. Be that as it may, I find it interesting (and somewhat disturbing) that the archaic double letters survived up until the Hellenistic period – and yet there seems to be no written record of any transition.  Surely there must have been some sort of lingering tradition of those lost double letters at the time of Hillel and Shammai.  The simplest explanation would be that the Mishnah mainly concerned itself with matters of Jewish law and did not delve into history or folklore except when it pertained to the law.  By the time it became fashionable to record folklore (as in the Talmud), those traditions had already been forgotten. I may be way off base here, but there may be an indirect vestige of the ancient form of Hebrew – among Yemeni Jews.  According to their tradition, the letter Beth is called “Beh“, the letter Daleth is called “Dal“, the letter Heth is called “Het“, the letter Teth is called “Tet“, and the letter Samekh is called “Semak“.  There are other differences but these are the ones that interest us – for they all lack the final soft Thaw (or Khaf) at the end.  
Is it possible that this is a relic from earlier days when the older double letter system was still in use and there was no distinction between Taw and Thaw or between Kaf and Khaf? What Abu Rashid wrote about Arabic and Ugaritic is probably also true of Phoenician; for all practical purposes, the Phoenicians spoke a dialect of Hebrew.  I wonder how much of their language has been recorded in historical records.  As an aside, I think it is interesting that for all the effort the ancient Jews put into stamping out the cult of Ba’al, he still exists.  He exists in the modern Lebanese city of Ba’albek and he exists in Hannibal, which is actually Hen Ba’al = the grace of Ba’al. But by far, the greatest resource we have in understanding ancient Hebrew is Arabic.  Classical Arabic, and its modern counterpart, formal Arabic, is a living fossil.  For all its brutality, the birth and spread of Islam did have a silver lining in the preservation of Arabic.  Had Islam never been born, it is possible that the entire Semitic world would have succumbed to Greek or some other foreign tongue.  All that would have remained would be Amharic and maybe some holdouts in Yemen.  For all the destruction it wrought upon non-Semitic peoples, Islam was like Mount Vesuvius for the Semitic world – destroying and preserving at the same time.  It did replace Aramaic and perhaps some other North Semitic languages but this happened very slowly.  Christians cling to Aramaic to this very day. My impression has always been that Arabic is far more conservative than Hebrew in its spoken form (as Abu Rashid, who is not a native Arabic speaker, says elsewhere).  But the Hebrew alphabet seems to be more ancient than the Arabic one.  The modern Hebrew alphabet is not even Hebrew.  It is traditionally called “Kethav Ashuri“, which means “Assyrian script” – a clear indication that it was adopted while in exile.  Nevertheless, it is of ancient Semitic origin.  
While Semitic languages probably had their origins in the South, writing (at least our form of writing) had its origins in the North. There are many words that are, essentially, the same in Hebrew and Arabic.  There are many other words that are quite similar – but it is not always obvious that this is so.  There are Biblical passages that do not seem to make sense without a knowledge of Arabic.  For example: Yosef called the firstborn Menasheh, for God has made me forget all my toil and all my father’s house. The word for “made me forget” is nashani.  No Hebrew speaker would recognize this word if used in another context (unless he happened to remember this verse).  Hebrew uses an entirely different word for “forget” than this one.  But Arabic still uses the Biblical word to this day, in a slightly different pronunciation. Every name has a meaning; people do not simply assemble random phonemes together and call it a name.  English speakers tend to lose sight of this because most of our names come from other languages.  When we look at the names of ancient Jewish kings, for example, many of them seem to have no meaning.  What does King ‘Omri’s name mean?  A Hebrew speaker would shrug his shoulders and say, “it’s just a name.”  But an Arabic speaker would recognize it as coming from the root ‘amr, which means “life”.  It is the equivalent of the modern Hebrew name Hayyim (and Arabic Omar). There are some very interesting and colorful names in Scripture.  Finding their meanings would be an interesting project indeed.  Arabic would be an essential tool for such an endeavor.  What about the name Abraham? Abraham’s name first appears as Abram (Hebrew: אַבְרָם, Modern: Avram, Tiberian: ʾAḇrām), meaning either “exalted father” or “my father is exalted” (compare Abiram). 
Later in Genesis God renamed him Abraham; the text glosses this as av hamon (goyim), “father of many (nations)”,[4] but it does not have any literal meaning in Hebrew.[5] Many modern interpretations based on textual and linguistic explanations have been offered, including an analysis of a first element abr- “chief”, but this yields a meaningless second element. Johann Friedrich Karl Keil suggests there was once a word raham (רָהָם) in Hebrew, meaning “multitude”, on analogy with the word ruhâm which has this meaning in Arabic, but no evidence that this word existed has been found;[6] and David Rohl suggests the name comes from Akkadian “the father loves.”[7] It seems to me that there might have been an ancient plural form that involved adding an extra letter to the middle of the word.  This type of plural is foreign to Hebrew speakers – but quite common in Arabic. The use of Arabic, as a tool to understand Hebrew, does have its limits.  The two languages split apart several thousand years ago and then flourished within completely different environments – for the most part;  in Medieval Andalusia, they reunited once again and the embers from that glow still burn. 32 Responses to Musings on ancient Hebrew versus Arabic 1. Very nice. I studied the Mongols quite a bit, and since they were an illiterate people, their stories were captured in Chinese, translated into Persian, then into German, then into English. At which point the narrative has been more laundered than was Greek mythological violence by tasteful and imaginative Victorians. Spent a little time on ancient Greek. A little more on early cuneiform (Ashurnasirpal’s tablets). At which point I lost all sense of wonder at the words of the ancients. I can’t remember who said “What we know about the world prior to 700 BC is precisely nothing.” By which he meant that the content of those languages is largely opaque to us. 
A friend of mine, working on his dissertation, translated an ancient Greek letter from one man to another. And the meaning is almost impossible to discern. Although, while we are conditioned to listening to the well-written prose of Hollywood script writers, listening in on most common conversations that take place among friends is an exercise in deduction. Perhaps we have not come so far. I only wish the early inhabitants of northwestern Europe had not been so averse to writing down their myths. That ancient cult and their world view is lost to us except as fragments captured in Latin. Hebrew, by contrast (though its early practitioners applied the same reasoning, self-selecting wise men by requiring memorization of sacred knowledge), eventually had its mythology written down. Beautiful things. Really. Time in a bottle. 2. Patrick says: It’s interesting how in the Bible the names of people correspond to their roles and their personal attributes, and if something about them changes, so does their name. Oftentimes when people are initiated into various spiritual mysteries they are given a secret name associated with that spirituality. Converts to Islam are given a Muslim name and converts to Judaism are given a Jewish name. It’s also interesting how language is a defining feature of a nation. Oftentimes subcultures develop their own slang, so those subcultures can be said to be incipient nations. Often they feel pressure to abandon their slang and to speak the real language… but who determines what the real language is? People can determine social class in Britain by people’s accents, or at least at one time they could… I don’t know if that’s still the case. If so, that indicates Britain is a multi-tribal society, and so the idea that that nation is a single tribe would be faulty. 3. Abu Rashid says: Very enlightening reading JAY. 
The issue of the modern double letters being attached to letters which they don’t belong to etymologically, in comparison with other Semitic languages, is a perplexing one. The only explanation I can postulate is that in ancient times Hebrew was made up of a standard language and a number of regional dialects (much like Arabic and even Aramaic & Hebrew to some degree are today) and that the standard language merged these phonemes, whilst some dialects retained them. As the influence of the standard language began to drown out the regional retentions of these letters, they ended up being handed back to the standard language as a memory of an archaic pronunciation, perhaps during some kind of attempt to reform and preserve the language. This could well have occurred as Greek, Aramaic and other languages were beginning to encroach upon the dominance of Hebrew during the period in question, as mentioned in your post. Regarding the Arabic letter Dod (ض), apart from the Arabian peninsula there are no Semitic languages which exhibit any vestiges of this letter, apart from the fact they seem to have merged this letter into Tsade/Sod (ص) a very very long time ago. And it is for this reason that Arabic is known as Lughat ad-Dod (The language of the letter Dod) because this letter alone is unique to Arabic (and the other extinct languages of the peninsula). Ugaritic, the closest language phonetically-speaking to Arabic retains almost every single one of the emphatic Semitic letters, except for the Dod. Regarding the Arabic & Hebrew scripts, you are correct that the Hebrew script is older than the modern Arabic one, as the modern Arabic script is actually a descendant of the Hebrew one, however unlikely that seems when looking at them. More accurately both are descendants of two different variants of the Aramaic alphabet. 
Hebrew is obviously a descendant of the square script, whilst Arabic is a descendant of the cursive variety that developed through Estrangelo -> Nabataean -> Arabic. The ancient Arabic script though is a different story. This script fell into disuse in the Arabian peninsula during the Islamic period as modern Arabic (as mentioned above, a Northern Semitic descended script) supplanted it. The ancient Arabian script is a direct descendant of Proto-Sinaitic, the same script that Paleo-Hebrew/Phoenician descended from, and its similarity to those scripts is quite obvious when comparing quite a few of the letter shapes (noon, qof, shin and taw, to name a few, are almost identical). In fact I am of the opinion that the ancient Arabic script, along with Paleo-Hebrew, actually formed a South-West family of scripts, which were slowly replaced by the North-East scripts. So whilst the modern scripts do have their origins in the North[-East], they were pre-dated by scripts that originated in the South[-West]. The modern Ethiopian script is the only script in use today that is based on this South-West branch of Semitic scripts. Todah Rabah for those examples of Biblical names & their etymologies. Although I have come across the one about Abraham, I had never heard the one about Menasheh. It is suggested by Arabic scholars that the Semitic word for human (ins in Arabic, ish in Hebrew) actually derives from this root, as mankind is a being that is inherently forgetful. Abu Rashid. • jewamongyou says: It’s interesting that Ethiopians are the only ones who continue to use the South-West script. Amharic/Ge’ez has always reminded me of the ancient Sabaean inscriptions. Some day I’d like to study Ge’ez; who knows what treasures lie buried within it. The word for “human” in Hebrew would be “anash” – retaining the nun. The word “ish” means “man”. To the best of my knowledge anyway. Interesting theory that “ins” is derived from the word for “forget”. 
If I compose a list of interesting scriptural names, would you help me try to figure out their meanings? 4. Abu Rashid says: The Ge’ez script is indeed descended from the Sabaean script, but it has lost a few letters and added others. It also developed into an abugida, a variation on the abjad (which is pretty much a consonant-only, Semitic-style script) but with diacritics built into the letters, a very interesting development for Semitic scripts. You’re right about ish/anash, I mixed that up. Anash fits even better then, since it shows all the radicals of the root. As for figuring out the origins of the names, I can’t guarantee I’ll be able to, but I’ll give it a shot; list away. Abu Rashid. 5. Shalomo Eliyahu Schorr says: I have had a similar hunch to yours regarding the resh raphe in Tiberian Hebrew (that is, an American r). You mentioned you have noticed this pronunciation amongst Iraqi Jews? According to Rav Saadiah Gaon (in his commentary on Sepher Yetsira), the Babylonian Jews in his days (end of the 1st millennium CE) still differentiated between the two pronunciations of resh. I am interested in where you noticed this pronunciation: is it from personal experience, or is there another source? Thank you 6. Shalomo Eliyahu Schorr says: I hope you do not mind me bugging you like this. Which type of personal experience did you mean (working in the field, personal acquaintances, theoretical)? You mention that it isn’t an officially recognised distinction amongst Iraqi Jews… does that mean that it’s an anomaly in selected individuals, or is it widespread, but just not recorded in previous literature? Thank you! • jewamongyou says: Oh, sorry about my short answer earlier. On the contrary, you’re not bugging me at all! It’s good to have you here, my friend. When I lived in Jerusalem, back in the 80s, I had a friend who (though young as I was) had learned Iraqi Hebrew and traditional songs from the best of his community. 
I heard it from him, though I doubt he was aware of it, and from the hazan at Hakham Ya’aqob Mussafi’s synagogue in Geulah: Hakham Tufiq. 7. Allen Rasafar says: I am so delighted to read this. It is a great pleasure to participate and to learn the deeper meaning of the origins of these languages. During my few trips to the Holy Land, Israel, my interest was prompted by the shapes of the Hebrew letters, let alone the phonemes and pronunciation of the words. It was and is hard for me to make sense of the Hebrew alphabet; however, your recommendation to learn Arabic is quite interesting. In my short journey, I have learned that the pronunciation of some Arabic letters is similar to Chinese, as in the letters Sad, soft Zad, soft Ta, and more. It seems that the roots of both Hebrew and Arabic may go back to the Chinese (Uighur) language as the father of Aramaic, but it takes some expertise to break down the Chinese words. What this linguistic relationship means in historical, political, and religious terms I am not expert enough to elaborate on. Raised in the USA, my mother tongue is Azeri, and I am familiar with Armenian, Russian, Korean, Turkish, Chinese, some Japanese, some Persian, and Latin; it is always so joyful to hear, read, and learn that, as one big family of human beings, we are connected in so many ways, while parted and reunited by our own choice of languages. I cannot resist commenting on the effect of religion on languages, though my greatest concern and interest is in how a large percentage of names and places were wiped out by Christian/Vatican domination, as happened with other major historical events. I would like to know the names of places and people before they were overrun by Christians. As to Northern Europeans, whether Cyrillic, Swedish, German, or English, it is evident that they migrated from Sarmatia, as Sarmatians. I would like to learn how this word was composed, what its root is, and what their language was. 
Best Regards, • jewamongyou says: If you could show that the Semitic languages are related to Chinese, it would be quite an accomplishment. Glad you enjoyed the post. • Yirmeyahu says: I was reading this and didn’t care until you made the claim that Hebrew is related to Chinese. That doesn’t make any sense at all. Modern Hebrew has 22 letters. Biblical Hebrew may have had around 30. Chinese does not use letters at all. It uses characters to represent different syllables, words, or even tenses of words. The average speaker of Mandarin Chinese may know around 2,000 to 4,000 individual characters. A graduate in Chinese letters may know double that, or up to 10,000 characters. Beyond the obvious difference in writing systems, there are differences of grammar, culture, and context. Perhaps at the time of the Tower of Babel they may have been dimly related, but by the time of Abraham, or even in modern times, they are completely unrelated. What you state has no basis in fact. • Allen Rasafar says: I will continue to learn from this discussion. You are right about the order-of-magnitude difference in letters, expressions, and words in Chinese versus Arabic or Hebrew; however, as you mentioned, by the time of Abraham, the Aramaic, Hebrew, and Arabic languages had diversified and may have simplified and changed to a large degree. But again, the original meanings of some classical words and names in the Bible and in Arabic texts make more sense in Chinese and Korean translation than in Arabic or Hebrew. Best wishes, • Yirmeyahu says: That does not make any sense. These texts were never written in the Chinese language, by Chinese people, from a Chinese perspective. You are taking a text written by people who were dynamically different from the Chinese and refabricating it to fit a people it never belonged to. If they do not make sense to you in Hebrew, then you do not know Hebrew very well. There is no Jerusalem in China, and not one name of a single Hebrew person means anything in Chinese. 
For example, south in Hebrew is darom; in Chinese it’s nan, as in Nanjing, which means “southern capital.” The language, people, and culture are completely unrelated. Stating that they make more sense in an unrelated language and culture is foolishness. You obviously are not educated in Hebrew or Arabic. Hebrew, Aramaic, and Arabic are considered the oldest living languages in the world. This is a generalization based on the writing systems of these languages. Archaic Chinese dates to about 1100 BCE. Hebrew dates to about, oh I don’t know, 3000 BCE. Aramaic itself dates to about 1100 BCE; it’s about as old as Archaic Chinese. Arabic is the youngest of these languages, and it dates to about 500 CE. Not a single linguist or scholar in his right mind would ever make the claim that the Semitic languages descended from Chinese. Why on earth are you claiming such a thing? Is it because you prefer the Chinese language? That would be the only valid conclusion I could imagine. • Allen Rasafar says: Thank you for your compliments. Though we do not agree on the origins of the Hebrew and Arabic languages, that does not make me foolish, but rather someone presenting another perspective to the discussion. Obviously your bias on this historical issue is going too far. Hebrew language and culture are not the oldest in the world; you can find this easily by looking up the Arkaim of Russia and the pyramids of China and Japan. I will continue to learn Hebrew, but I shall no longer participate in your rather unfriendly discussion. Just to add some depth to this discussion: I know more about Hebrew and Arabic than you may know about classical Chinese, Korean, or Uyghur languages. Thank you, best wishes 8. Yirmeyahu says: I was reading your article and I enjoyed it thoroughly.
I’m not sure you know this, but Ben Yehudah used the Sephardic pronunciation of Hebrew because he thought it was the most beautiful, but he also used Arabic to ‘glue’ the language together, probably because much of the language outside of its liturgical setting was lost. What I found humorous, though, is when you stated that the glottal Resh was an abomination. I could not agree more. Whenever I go to a synagogue run by Ashkenazi Jews, it amazes me how differently they pronounce their Hebrew. I have studied various dialects of both Hebrew and Arabic, and most of their pronunciations are identical; of course, this is a generalization. Ashkenazi Hebrew seems to be the farthest from the original Hebrew spoken by the great patriarchs; however, the Chabad, and even many modern Israelis, seem to proudly wear it like a badge of honor. Let me ask you this: do you think this is due to modern-day Jews trying to appear less related to our Arab cousins, or do you think it comes from a genuine belief that they speak the same language as our forefathers? I’d love your opinion on the matter. • jewamongyou says: Thanks! As for your question, I’m pretty sure the average Israeli speaker gives no thought, whatsoever, to how he speaks. He doesn’t care – but he knows the Yemenite pronunciation is more authentic. He’s too lazy to do anything about it even if he cared. As for those who do give it some thought, yes, I believe they do want to distance themselves from the Arabs. As for religious Ashkenazim, they can’t stand the thought of their rabbis having been wrong, so they emulate them in every respect, including their Hebrew accent. • Yirmeyahu says: Lol. That’s what I thought as well. If Ashkenazi rabbis are incorrect, reforms should be made. Israel is still a young country, and if we are not careful we will lose our language and culture again. I have been trying to learn the Yemenite pronunciation for a while now; however, many of its sounds are foreign to me. The most difficult being Ayin.
I doubt I will ever be able to pronounce it correctly. It is too difficult for me today. I always thought that the Hebrew people were close in culture to the Arabs. Perhaps not brothers, but definitely cousins. My ancestors were Sephardic Jews. My grandmother told me we once dressed identically to the Arabs, but thanks to the Ashkenazi and other Jews of European descent, we’ve lost most of our heritage. I wonder why the Jews of modern Israel are so dead set against their obviously Middle Eastern origin. 9. jewamongyou says: Re: Yirmeyahu, Yes, it’s a pity about losing the traditional dress. I tried to adopt such garb myself, on an experimental basis, while living in Jerusalem. People looked at me like I was a weirdo – and I was! But there were a few young Yemenites who wore traditional dress, and raised their families the same. Theirs is an uphill struggle. It’s all but illegal to do so in Israel. • Yirmeyahu says: It’s mind-boggling to hear that. Most Jews I’ve met were either of Sephardic or Temani descent, and those Jews always dressed like their Arab cousins. I do not think King David ever wore a long dress coat or a fedora. Those are distinctly European styles, and you would think modern Israelis would want to be as traditional as possible, especially since it’s the first time in 2,000 years that we can finally do so. I live in Florida; however, I do try to emulate my ancestors as closely as possible. I do not want my family traditions dying off only because they are not popular. My child is 3 years old and is already learning the Hebrew and Arabic scripts. I figured to start with those, since she will be flooded with English and Western culture in school. What do you think of my decision? • jewamongyou says: I think it’s great that you’re raising your child in your own traditions. But if she attends public school, it might be very difficult for her to maintain them or respect them.
It’s also, of course, very important to inculcate in her the importance of only dating boys of her own people. If you wait until she’s a teen, it’ll be too late. • Yirmeyahu says: That is my concern as well. My wife was born to a secular family, but she likes my religion and culture very much. We are raising our kids Jewish. However, I am afraid that they will date kids outside of our people and will wind up not maintaining our traditions. People of the Christian persuasion seem to want everyone to give up their traditions in favor of Western culture. I’m hoping my kids grow to love who they are and where they come from. We are even thinking of enrolling her in a private Jewish school. Perhaps then it won’t be as large a concern. 10. Aharon says: Wouldn’t an alveolar trill for Resh Deghusha and an alveolar tap for Resh refuyah be much more probable than the awkward alveolar approximant (which in some instances is almost retroflex) of English and some dialects of Dutch? The tap/trill contrast is found in Spanish, for example. I don’t know any language that contrasts an alveolar approximant with a trill or tap. • jewamongyou says: Actually, that has occurred to me. The only reason I didn’t bring it up is that, as far as I know, there is no living tradition of such a distinction. If I had to re-invent Hebrew, that’s what I’d go for. • Yirmeyahu says: I would assume so. The only Jews that I know who use the English R are Ashkenazi, and Ashkenazi Hebrew has obviously been highly influenced by the surrounding European peoples. Sephardic Hebrew was mostly influenced by Arabic. However, Arabic descends from the same language tree as Hebrew anyway. It’s like Spanish to Italian. Arabic uses the trilled, or Spanish, R. In Teutonic (which is the language that German, Dutch, English, Norwegian, etc. – you know, the Northern European languages – descend from; they are not Latin-based, as many people think), the trilled R is actually maintained.
Linguists have no idea where on the language tree it was dropped. 11. Samer Jamal says: Salam. First of all, I would like to thank you for this wonderful article. I am a native Arabic speaker and I am proud to speak it. I would disagree with you on some points in this article. I would like to give you the meaning of Abraham, which was originally pronounced in Arabic “Abu Rahim,” meaning “the merciful father.” I would also like to add that Akkadian was Arabic written with the Akkadian alphabet; other alphabets in which Arabic was written were Nabataean and Canaanite, as well as Aramaic. • jewamongyou says: According to the Torah, “Abraham” meant “father of many nations,” though I suppose we can entertain any alternate theories we like. As for Akkadian being Arabic, this is an odd assertion to make. Arabic wasn’t introduced into that area until the Arab conquest, hundreds of years later. • Yirmeyahu says: Sorry, but I disagree with you, and I think jewamongyou would agree with me as well. Abraham, according to the Torah, does in fact mean ‘father of the nations.’ Abraham is not an Arabic name. It’s Hebrew. While Hebrew and Arabic are closely related languages, they still have their differences and are distinct from each other. You must also understand that Arabic did not develop until the Common Era. That would be during AD times. The earliest surviving writings are dated to the 8th century CE. Scholars do not believe it’s any older than the 6th century CE. There is also the fact that most Arabic dialects are unintelligible to each other. Egyptian is so well known due to its position in the media. The earliest surviving Hebrew text may have been found at Khirbet Qeiyafa. It’s a piece of pottery written using the oldest Hebrew alphabet known to man. It is from the 10th century BCE. The Dead Sea Scrolls themselves are at least 2,000 years old. As you can see, Hebrew is plainly older than Arabic. It makes no sense that the Hebrew people would have borrowed from a language that did not exist yet. 12.
Pingback: Filipinos rallying behind Israel « Jewamongyou's Blog
Manners and Morals Arizona Gov. Jan Brewer speaks to reporters in 2010. Photo via Getty Images. When our parents teach us at a very young age to say the magic words — please and thank you — they give us our first lessons in morality. Manners are the first step to morality. Etiquette is the first gesture of ethics. Manners and morals derive from the mores of a society. Etiquette derives from the ethos and ethics of a society. When Arizona Governor Jan Brewer wagged her finger in President Obama’s face upon his arrival in her state, she demonstrated not only a disregard for the Office of the President, but she also simply displayed bad manners. In the United States, we do not have a monarch who embodies the state in his or her person. In the United States, that person is the president of the United States. He and the vice president are the only two officials who are elected nationwide. Thus, the president is not only the head of the executive branch of government but also the representative of the entire country. Governor Brewer’s demeanor toward the president was inappropriate. However, the deeper question is why this woman would think it is appropriate to put her finger in anyone’s face, president or not. NEWS: Quick Links It's Finally Over -- and It Was Wrong Coming Home From Killing The recent British film In Our Name is a returning-soldier drama featuring a married woman, Suzy, who leaves her husband and little girl to fight in Iraq. Because she's involved in the killing of a little girl during her tour -- this part is based on a true story, but it happened to a man -- she returns home only to steadily fall apart under the stress of soul-destroying anxieties. Could the Riots in England Have Been Averted? The rioting and rampages that spread across English cities last week have caused severe property destruction and raised public alarm.
Writing in London's Guardian, community organizer Stafford Scott describes how he was among the group that on August 6 sought information from the police in Tottenham, a poorer section of London. They wanted an official statement on whether Mark Duggan had been killed by police bullets, as had been reported in the news. All we really wanted was an explanation of what was going on. We needed to hear directly from the police. We waited for hours outside the station for a senior officer to speak with the family, in a demonstration led by young women. A woman-only delegation went into the station, as we wanted to ensure that this did not become confrontational. It was when the young women, many with children, decided to call it a day that the atmosphere changed, and guys in the crowd started to voice and then act out their frustrations. This event is what most media accounts have identified as the spark that set England on fire and caught the world by surprise. Yet, says Scott, "If the rioting was a surprise, people weren't looking." Hidden Battles: A Story of Five Former Soldiers Hidden Battles is a 65-minute documentary that follows a female Sandinista rebel, an Israeli officer, a Palestinian freedom fighter, and two American soldiers as they come to terms with their combat experiences. The film offers unique insight and hope into the internal conflicts that human beings around the world continue to face long after they have left the battlefield. The documentary listens to the stories of these former soldiers as they reconcile what it means to have killed another human. A Vietnam veteran recalls that when he first killed, he was gripped by the feeling that he "did something -- literally against God." Watch this film and see how these veterans have fought to overcome. Each soldier deals with killing in his or her own unique way. Hidden Battles shows five ways in which this act is integrated into five different lives.
Ultimately these stories testify to the resilience of the human spirit and hopefulness for the future. Just Jesus and an Unjust July 4: Why I Don't Celebrate Independence Day My friends and I can be stupid. Add explosives to the equation and the idiocy quotient increases exponentially. Such was the case every 4th of July during high school. A group of about 20 of my friends and I would get together to barbecue and play with illegal fireworks. At any unsuspecting moment while taking a bite out of a burger, an M-80 could be lit under your seat, a sparkler thrown at your chest like a dart, or a mortar shot like a bazooka, catching bushes on fire. These chaotically stupid memories simultaneously serve as some of the most fun I can recall experiencing. So for me, Independence Day equals fun. However, there's a deeper reality to this holiday. Only about three years ago did I realize that in celebrating Independence Day, I'm also glorifying the roots on which this nation was founded: an unjust war. The "rockets' red glare" and "the bombs bursting in air" remind us not of the day God liberated the colonies, but of the moment in history when our forefathers stole the rhetoric of God from authentic Christianity to justify killing fellow Christians. There are two reasons I'm convinced that celebrating Independence Day celebrates an unjust war.
Static Code Analysis Last revision (mm/dd/yy): 03/19/2013 Static Code Analysis (also known as Source Code Analysis) is usually performed as part of a Code Review (also known as white-box testing) and is carried out at the Implementation phase of a Security Development Lifecycle (SDL). Static Code Analysis commonly refers to the running of Static Code Analysis tools that attempt to highlight possible vulnerabilities within 'static' (non-running) source code by using techniques such as Taint Analysis and Data Flow Analysis. Ideally, such tools would automatically find security flaws with a high degree of confidence that what is found is indeed a flaw. However, this is beyond the state of the art for many types of application security flaws. Thus, such tools frequently serve as aids for an analyst, helping them zero in on security-relevant portions of code so they can find flaws more efficiently, rather than as tools that simply find flaws automatically. Some tools are starting to move into the Integrated Development Environment (IDE). For the types of problems that can be detected during the software development phase itself, this is a powerful phase within the development lifecycle to employ such tools, as it provides immediate feedback to the developer on issues they might be introducing into the code during development itself. This immediate feedback is very useful compared to finding vulnerabilities much later in the development cycle. The UK Defence Standard 00-55 requires that Static Code Analysis be used on all 'safety related software in defence equipment'.
[0] There are various techniques for analyzing static source code for potential vulnerabilities, and they may be combined into one solution. These techniques are often derived from compiler technologies. Data Flow Analysis Data flow analysis is used to collect run-time (dynamic) information about data in software while it is in a static state (Wögerer, 2005). There are three common terms used in data flow analysis: basic block (the code), control flow analysis (the flow of data), and control flow path (the path the data takes). Basic block: A sequence of consecutive instructions where control enters at the beginning of the block, control leaves at the end of the block, and the block cannot halt or branch out except at its end (Wögerer, 2005). Example PHP basic block:

$a = 0;
$b = 1;

if ($a == $b)
{ # start of block
    echo "a and b are the same";
} # end of block
else
{ # start of block
    echo "a and b are different";
} # end of block

Control Flow Graph (CFG) An abstract graph representation of software by use of nodes that represent basic blocks. A node in the graph represents a block; directed edges are used to represent jumps (paths) from one block to another. If a node only has an exit edge, it is known as an 'entry' block; if a node only has an entry edge, it is known as an 'exit' block (Wögerer, 2005). Example control flow graph: 'node 1' represents the entry block and 'node 6' represents the exit block. Taint Analysis Taint analysis attempts to identify variables that have been 'tainted' with user-controllable input and traces them to possible vulnerable functions, also known as 'sinks'. If the tainted variable gets passed to a sink without first being sanitized, it is flagged as a vulnerability. Some programming languages, such as Perl and Ruby, have taint checking built into them, enabled in certain situations such as accepting data via CGI.
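The taint-tracking idea described above can be reduced to a very small sketch. The following is an illustrative toy in Python, not any real tool's implementation: the statement format and the "source"/"sanitize"/"sink" operation names are invented for the example, standing in for user input, sanitization routines, and vulnerable functions respectively.

```python
# Toy taint analysis over a straight-line list of statements.
# Each statement is a pair (operation, variable):
#   "source"   - the variable receives user-controlled input
#   "sanitize" - the variable is cleaned before further use
#   "sink"     - the variable reaches a dangerous function (e.g. a SQL query)

def analyze(statements):
    tainted = set()   # variables currently carrying unsanitized user input
    findings = []
    for lineno, (op, var) in enumerate(statements, start=1):
        if op == "source":
            tainted.add(var)
        elif op == "sanitize":
            tainted.discard(var)
        elif op == "sink" and var in tainted:
            findings.append((lineno, var))  # tainted data reached a sink
    return findings

# $id reaches the sink unsanitized and is flagged; $name is sanitized first.
program = [
    ("source", "id"),      # $id = $_GET['id'];
    ("source", "name"),    # $name = $_GET['name'];
    ("sanitize", "name"),  # $name = escape($name);
    ("sink", "id"),        # query("... WHERE id = $id");    <-- flagged
    ("sink", "name"),      # query("... WHERE name = $name"); safe
]
print(analyze(program))    # -> [(4, 'id')]
```

Real tools do this over the control flow graph rather than a straight line of statements, and must model aliasing, function calls, and partial sanitization, which is where the false positives and false negatives discussed below come from.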
Lexical Analysis Lexical analysis converts source code syntax into 'tokens' of information in an attempt to abstract the source code and make it easier to manipulate (Sotirov, 2005). Pre-tokenised PHP source code:

<?php $name = "Ryan"; ?>

Post-tokenised PHP source code (the token stream produced by PHP's token_get_all(), with whitespace tokens omitted):

T_OPEN_TAG
T_VARIABLE
=
T_CONSTANT_ENCAPSED_STRING
;
T_CLOSE_TAG

Strengths and Weaknesses Strengths: • Scales well (can be run on lots of software, and can be run repeatedly, as in nightly builds). • For things that such tools can automatically find with high confidence, such as buffer overflows, SQL injection flaws, etc., they are great. Weaknesses: • Many types of security vulnerabilities are very difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. Tools of this type are getting better, however. • High numbers of false positives. • Frequently can't find configuration issues, since they are not represented in the code. • Difficult to 'prove' that an identified security issue is an actual vulnerability. • Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc. False Positives A static code analysis tool will often produce false positive results, where the tool reports a possible vulnerability that in fact is not one. This often occurs because the tool cannot be sure of the integrity and security of data as it flows through the application from input to output. False positive results might be reported when analysing an application that interacts with closed-source components or external systems, because without the source code it is impossible to trace the flow of data in the external system and hence ensure the integrity and security of the data.
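The same tokenisation step can be demonstrated with Python's standard-library tokenize module. This is a runnable sketch using Python rather than PHP; the statement tokenised here is the Python equivalent of the PHP example above.

```python
# Tokenise one assignment statement and print each token's type and text,
# mirroring what a static analysis tool's lexer does as its first pass.
import io
import tokenize

source = 'name = "Ryan"'
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# The first three tokens printed are:
#   NAME 'name'
#   OP '='
#   STRING '"Ryan"'
# followed by the tokenizer's end-of-input tokens.
```

Once the source is reduced to such a token stream, a tool can match patterns (e.g. a variable name flowing into a known sink function) without worrying about whitespace, comments, or formatting.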
False Negatives The use of static code analysis tools can also result in false negative results, where vulnerabilities exist but the tool does not report them. This might occur if a new vulnerability is discovered in an external component, or if the analysis tool has no knowledge of the runtime environment and whether it is configured securely. Important Selection Criteria • Requirement: Must support your language; not usually a key factor once it does. • Types of vulnerabilities it can detect (the OWASP Top Ten? more?). • Does it require a fully buildable set of source? • Can it run against binaries instead of source? • Can it be integrated into the developer's IDE? • License cost for the tool. (Some are sold per user, per org, per app, or per line of code analyzed. Consulting licenses are frequently different from end-user licenses.) Tools • RIPS PHP Static Code Analysis Tool • OWASP LAPSE+ Static Code Analysis Tool • Open Source/Free • Other Tool Lists References [0] Ministry of Defence (MoD). (1997) Safety Related Software in Defence Equipment [Online]. Available at: http://www.software-supportability.org/Docs/00-55_Part_2.pdf (Accessed: 5 January 2012). [1] Northumbria University. (2012) Implementing Basic Static Code Analysis into Integrated Development Environments (IDEs) to Reduce Software Vulnerabilities [Online]. Available at: http://www.ethicalhack3r.co.uk/wp-content/uploads/2012/09/Implementing-Basic-Static-Code-Analysis-into-Integrated-Development-Environments-IDEs-to-Reduce-Software-Vulnerabilities.pdf (Accessed: 19 March 2013). Further Reading
Artfully working through trauma HPRC Fitness Arena: Mind Tactics, Total Force Fitness Filed under: Art, Mood, Therapy, Trauma Art therapy can be a helpful tool in recovery from trauma, and one more tool in the arsenal against PTSD and similar disorders. It uses various forms of artwork and creativity to explore feelings, confront emotional conflicts, improve self-awareness, manage behaviors and addictions, reduce anxiety, and increase self-esteem. Under the supervision of an experienced therapist, art therapy can improve general functioning, health, and well-being and can help in recovery from trauma. Responses to traumatic experiences can include flashbacks and nightmares as your mind unconsciously tries to make sense of what happened. Art can be effective in helping your mind process, express, and even master traumatic experiences, because visual imagery can express what words can't. Engaging in creative arts has been used specifically to help service members work through trauma. This kind of therapy involves working through your difficulties with a licensed therapist, but the same creative outlets can be great outside of therapy too. Find a craft or art that you find calming, enjoyable, and expressive. Engaging in the arts can be fun and therapeutic. Fight the effects of bullying with exercise Filed under: Children, Exercise, Teens The mental health benefits of exercise for children and teens are just as important as the physical ones. Children and teens face a lot of challenges these days, but exercise can help, even in such seemingly unrelated situations as bullying, a form of peer aggression. Bullying recently has come to the forefront as a public health concern. While the best solution is to prevent it, there are ways to cope with and manage the effects of being bullied (such as depression, sadness, and decreased self-worth).
Exercise can serve as a buffer against the effects of being bullied. Bullied teens who regularly exercise at least 60 minutes a day, 4 days a week, are less likely to experience sadness or hopelessness. That's important when you also consider that these feelings sometimes lead to suicidal thoughts or attempts among teens. Encouraging your child to participate in some kind of physical activity can help him or her conquer social obstacles while building good habits for a healthy adulthood. By also making physical activity a family matter, you can lead by example. Learn more about how to prevent bullying, and consult a healthcare professional and a school counselor if you're concerned that your child might be a victim of bullying. Tart cherry juice for muscle soreness? Drinking tart cherry juice might offer one more way to get relief from tough workouts. Tart cherry juice might help soothe muscle pain after exercise, especially intense or long workouts. A few studies researched how drinking tart cherry juice affects muscle soreness and pain following different types of exercise. Participants drank tart cherry juice 5–7 days before exercise (such as running a marathon). Those who drank the tart cherry juice instead of the placebo experienced a decrease in the intensity and duration of muscle pain, but these measurements weren't consistent from study to study, and not all measures of muscle pain improved. However, tart cherry juice does offer antioxidant and anti-inflammatory benefits. Keep in mind that research participants drank 8–12 oz of tart cherry juice twice daily. Drinking that amount could add 260–390 calories per day to your diet, mostly from sugar. Too many calories and not enough exercise to balance them out can lead to weight gain. If you enjoy drinking tart cherry juice, then consider adding it to your nutrition plan. In addition to stretching and foam rolling after your workouts, it could help you experience less muscle soreness.
Kratom concerns Kratom use is on the rise, but is it safe? Planning a home birth? Be prepared! Deciding whether or not to give birth at home is a big decision, but if you're thinking about it, follow these tips and develop a solid plan. If you're considering giving birth at home, make an informed choice, including a plan that lays out expenses, your nearest hospital, your delivery team, and more. The American Academy of Pediatrics (AAP) and the American College of Obstetricians and Gynecologists (ACOG) both say that hospitals and birthing centers are the safest places for birth in the U.S. However, they also recognize the right to make a medically informed decision about where and how to give birth. If you're considering home birth, here are some specific suggestions to help you make safe decisions. Read more here. Improving mood with food Mindful eating can help you make better decisions and transform your whole eating experience! "You are what you eat" means that food affects your physical AND emotional health! A tip that also helps your mood is to stay away from "comfort foods." Choose foods that give you more steady energy, such as an orange or raisins (not ice cream or fries). This might be old advice, but here's a new twist: Eat that snack mindfully! By practicing mindfulness before you eat, when you're feeling a craving, and while you eat, you can overcome binge eating, eating to soothe emotional concerns, and impulses triggered by yummy sights, sounds, or smells. It helps you understand your motivation. Are you eating because you're hungry and it's time to eat? Or is it a "quick fix" for your stress or worries? Once you're eating, instead of analyzing why you're eating or focusing on other tasks such as texting, be mindful of the eating experience, embracing the experiences of smell, taste, temperature, and texture. You may find yourself slowing down and enjoying your food more!
Before diving into your next snack or meal, think about what you're eating and be mindful of why. Here's a simple example of how you can weave mindful eating into your daily life: You might notice that it's 3 p.m., you've had nothing to eat since that healthy lunch, and you need a pick-me-up, so you reach for an orange. Now, mindfully enjoy each part of the experience as you peel the orange, noticing the textures inside and outside, the stickiness, the spray, and the smell. Notice how you salivate with the anticipation of citrus acids, and the moment when the piece of orange hits your tongue, followed by squirts of flavor and changing texture. Enjoy! Massage therapy and muscle recovery Filed under: Massage, Recovery Sports massages may be beneficial for your recovery after a workout, as well as relaxing! Getting a sports massage after a hard workout could help relieve muscle soreness and improve recovery. Sports massages typically focus on the areas of the body that are specific to a sport or activity. These kinds of massages decrease inflammation and promote blood circulation, allowing for the delivery of essential nutrients such as oxygen to damaged muscles and resulting in faster recovery. Symptoms associated with delayed onset muscle soreness (DOMS), such as pain, tenderness, muscle weakness, and discomfort, slow the recovery process. If you're able, treat yourself to a 10- to 15-minute sports massage after an intense workout such as resistance training or endurance events. If a sports massage isn't possible, self-massage such as foam rolling can also reduce the effects of DOMS and increase blood flow to your muscles. Marathon recovery Green coffee beans: What's all the hype? Green coffee beans are popular for weight loss, but don't be fooled by all the hype surrounding this dietary supplement ingredient.
If you’re looking for ways to optimize your performance or perhaps drop some weight quickly, you may be tempted by the marketing hype and claims around green coffee beans, a dietary supplement ingredient often found in weight-loss products. Green coffee beans are the raw, unroasted seeds or “beans” of the Coffea plant. They’re a source of caffeine, but they have become popular as a dietary supplement ingredient because they also contain a chemical called chlorogenic acid that supposedly offers some health benefits. Some research suggests that chlorogenic acid might help with weight loss and prevent heart disease, diabetes, and high blood pressure. However, there’s limited evidence to support the use of dietary supplement products with green coffee beans for weight loss or other health conditions, so consumers should beware of health claims associated with this ingredient. In fact, the Federal Trade Commission (FTC) sued a company for using unsupported weight-loss claims and fake news websites to market its green coffee extract dietary supplement. Read more in the FTC’s press release. Energy drinks: different labels, same risks Energy drinks are now being marketed as conventional foods, but there are still risks involved. Learn how to stay safe if you drink these beverages. Most energy drinks are now labeled with Nutrition Facts instead of Supplement Facts, but that doesn’t automatically make them safe. The most popular energy drinks contain about 80–120 mg of caffeine per serving (8 oz.)—about the same amount of caffeine as in an 8-ounce coffee. Caffeine isn’t necessarily a bad thing. When used appropriately, caffeine can boost mental and physical performance. But each energy drink can or bottle often contains more than one serving, making it easier to consume larger amounts of caffeine, especially if you drink more than one per day. Too much caffeine (>400 mg) can cause nervousness, shakiness, rapid heart rate, and trouble sleeping.
In addition to caffeine, energy drinks commonly contain amino acids, vitamins, and plant-based ingredients such as guarana (which also contains caffeine) and ginseng. Although these ingredients are generally safe, there still isn't enough reliable information about their long-term safety or how combinations of these ingredients might interact in the body. If you drink energy drinks, here are some things to keep in mind:

• Be aware of how much caffeine (from all sources) is in each can or bottle, and limit the number you drink each day.
• Avoid caffeinated foods, beverages, and medications while using energy drinks. You may be consuming more caffeine than you realize.
• Don't mistake energy drinks for sports drinks. Unlike energy drinks, sports drinks are designed to fuel and hydrate you during long workouts.
• Don't mix energy drinks with alcohol. Energy drinks can mask the feeling of intoxication but still leave you impaired.
• Find other ways to energize yourself. It's best to get the sleep your body needs, but you can try other ways to stay alert, such as exercising or listening to upbeat music.
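The serving arithmetic above is easy to underestimate, so here is a rough sketch of the math. The 80–120 mg per 8-oz serving and 400 mg/day figures come from the text; the container sizes and per-serving amounts below are illustrative assumptions, not taken from any real label:

```python
# Sketch of the caffeine arithmetic from the text above. Figures for
# mg per serving and the daily threshold come from the article; the
# example containers are hypothetical.

DAILY_LIMIT_MG = 400  # amount above which side effects become likely

def total_caffeine_mg(drinks):
    """Sum caffeine across containers given (mg per serving, servings per container) pairs."""
    return sum(mg_per_serving * servings for mg_per_serving, servings in drinks)

# Two 16-oz cans, each holding two 8-oz servings at 100 mg per serving:
total = total_caffeine_mg([(100, 2), (100, 2)])
print(total)                    # 400 -- already at the daily threshold
print(total >= DAILY_LIMIT_MG)  # True
```

The point of the sketch: just two typical cans can reach the 400 mg threshold before counting coffee, tea, or medications.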
BBC News
Last Updated: Wednesday, 18 July 2007, 11:03 GMT 12:03 UK

Water find 'may end Darfur war'

[Image: Girl in Darfur carrying water in a sandstorm. Getting enough water is a major struggle in Darfur.]

A huge underground lake has been found in Sudan's Darfur region, scientists say, which they believe could help end the conflict in the arid region. Some 1,000 wells will be drilled in the region, with the agreement of Sudan's government, the Boston University researchers say. Analysts say competition for resources between Darfur's Arab nomads and black African farmers is behind the conflict. More than 200,000 Darfuris have died and two million have fled their homes since 2003.

"Much of the unrest in Darfur and the misery is due to water shortages," said geologist Farouk El-Baz, director of the Boston University Center for Remote Sensing, according to the AP news agency. "Access to fresh water is essential for refugee survival, will help the peace process, and provides the necessary resources for the much needed economic development in Darfur," he said.

The team used radar data to find the ancient lake, which covered 30,750 km², the size of Lake Erie in North America, the world's 10th-largest lake. A similar discovery was made in Sudan's neighbour Egypt, where wells have been used to irrigate 150,000 acres of farmland, the researchers say.

[Map showing the underground lake.]

The discovery is "very significant", Hafiz Muhamad from the lobby group Justice Africa told the BBC's Focus on Africa programme. "The root cause of the conflict is resources - drought and desertification in North Darfur." He says this led the Arab nomads to move into South Darfur, where they came into conflict with black African farmers. He also said that it has long been known there was water in the area, but the government had not paid for it to be exploited.
French researcher Alain Gachet has also been using satellite images to look for new water resources in Darfur.

Last month, the UN Environment Programme (Unep) said there was little prospect of peace in Darfur unless the issues of environmental destruction were addressed. It said deserts had advanced by an average of 100 km in the last 40 years, while almost 12% of forest cover had been lost in 15 years.

Secretary-General Ban Ki-moon said climate change was partly to blame for the conflict in Darfur in an editorial for the US newspaper The Washington Post in June.
A Salute to Patriotic Stamps

Collections of U.S. postage stamps reveal a lot about our country's history. Stamps are a prolific document of America's history and traditions. From the first official U.S. government postage stamp of 1847 to those purchased and used today, stamps reflect the heritage, leaders, triumphs, passions, and philosophies of the country.

When using vintage stamps for crafts and decorating, check the value of each stamp before using it, and set aside any rare, valuable stamps. Post offices and some craft supply stores sell a variety of stamps, or you can collect used stamps by removing them from envelopes:

1. To remove a stamp, tear or cut off the upper right-hand corner of the envelope and soak it, stamp side down, in warm water.
2. Once the stamp falls away from the paper, let it soak for a few minutes more to remove any remaining glue.
3. Pick up the stamp with tongs and dry it between paper towels.
4. Place it under a heavy book for several hours.
5. If the stamp will not peel away from the paper backing, trim around the stamp using decorative-edge scissors.
The growing use of online teaching in the nation's public schools has placed a related burden on district administrators to ensure that they hire high-quality, highly qualified instructors.

Brain-Based Learning: The New Paradigm of Teaching, 2nd ed., $35.95

When Alamo Heights (Texas) Independent School District opened in 1909 as a rural, two-room wooden-frame school, who would have thought that 96 years later its students would become teachers to their own parents?

Teacher collaboration and professional learning communities are frequently mentioned in articles and reports on school improvement. Schools and teachers benefit in a variety of ways when teachers work together, and a small but growing body of evidence suggests a positive relationship between teacher collaboration and student achievement.
Social Responsibility

DEFINITION of 'Social Responsibility'

The idea that companies should embrace their social responsibilities and not be solely focused on maximizing profits. Social responsibility entails developing businesses with a positive relationship to the society in which they operate. According to the International Organization for Standardization (ISO), this relationship to the society and environment in which they operate is "a critical factor in their ability to continue to operate effectively. It is also increasingly being used as a measure of their overall performance."

BREAKING DOWN 'Social Responsibility'

Many companies, particularly "green" companies, have made social responsibility an integral part of their business models. What's more, some investors use a company's social responsibility, or lack thereof, as an investment criterion. For example, someone who has a moral (or other) objection to smoking may not want to invest in a tobacco company.

That said, not everybody believes that business should have a social conscience. The economist Milton Friedman observed that the "social responsibilities of business are notable for their analytical looseness and lack of rigor." Friedman believed that only people could have social responsibilities; businesses, by their very nature, cannot.
TIP! Become a more well-rounded person by developing your leadership qualities. Leadership can be defined in many ways, but most people like to define it as "influence."

Often, the enemy of happiness is stress. Stress harms us both mentally and physically. To start thinking clearly and reaching for calm, purposeful goals, rid your mind of stress. Make time in your schedule each day to take a few minutes alone and clear your mind. Taking a little bit of time for yourself will help you stay calm and keep your goals in mind.

TIP! Determine the things that you value so that you can better come up with an excellent personal development strategy for your needs. If you go against your values, you are shooting yourself in the foot.

TIP! If you care for your body, you will get the most out of your personal development. Put yourself on the road to success by making sure your basic needs are met, including adequate amounts of sleep, nutritious food, and a regular fitness regimen.

Shying away from a major decision could cheat you of an opportunity to become a better person. Never back down from an opportunity. If you lack knowledge about a specific area, you should still be willing to make the most informed decision you can with the information at hand. Do not rely entirely on instinct. Even mistakes are valuable, as they are instructive learning experiences. If you make the wrong choice today, you are almost certain to make a better choice the next time around.

TIP! Exercising should be part of everyone's life, not just those who are looking to lose weight. There are many different physical and emotional reasons to exercise.

Leadership is the cornerstone of personal development. Leadership has many definitions, but many people think of it as "influence." Understand the events in your life that make up your leadership journey. What events have you been most impacted by in your life?
What aspects of your life did those events bring about? What is your best attribute that makes you a team player? It is through these questions that you can best determine your role in a team environment.

TIP! Instead of boasting about trophies, awards, and medals, try asking others about their achievements. You will learn more about those around you by doing this.

While many people can identify possible areas for improvement in their lives, few know exactly how to make those changes. This article has some great advice to get you launched on the journey, but you still have a lot of hard work to do. Go back over these tips and remember the basics when you find yourself lacking motivation.

You've come to the right place! You have just arrived at some excellent tips and tricks for juicing. Juicing is a fantastic strategy for incorporating essential nutrients, such as vitamins and minerals, into your everyday diet. This article provides several fantastic juicing tips and techniques that will help you maximize the potential of juicing. Be informed and do it right; don't depend on luck.

TIP! When juicing for the health benefits, look to using ingredients from greens such as broccoli, chard, parsley, kale, and spinach for the greatest effectiveness. Use these vegetables often when juicing, at over a 50% concentration.

If you're juicing for reasons related to your health, try using dark green vegetables as the main ingredient for your juice. Your goal should be for broccoli, spinach, or other dark greens to make up approximately 50-75% of the juice in order to maximize the health benefits. The rest of your juice should be made up of fruits you like.

Enjoy each drop of your juice. Make time to truly enjoy it so that you can taste every single flavor. Roll the juice through your mouth, allowing your entire tongue to experience the individual tastes and begin the process of digestion.

Let color guide you.
The full color spectrum of fresh fruits and vegetables, from reds to greens to oranges, is an indication of the variety of nutrients that are available. This enables a culinary experience that is high in nutritional value and bursting with flavor.

TIP! You will use your juicer more if it stays out in sight. This will ensure that you remember to use it often and get the most benefit from it.

TIP! When purchasing a juicer, choose one that is easy to dismantle and clean. If you need twenty minutes or more to assemble your juicer, make your juice, and then clean up, you will quickly tire of the process.

TIP! When juicing for good health, try adding a handful of cranberries to your regular selection to help with bladder or urinary tract problems. Use them soon after finding you have these issues.

Besides producing a drink that increases your nutritional intake and overall health, juicing leaves you with a tasty beverage as well. By following these very helpful tips, you will be juicing like a professional in no time, thus getting the most out of your time and money.

Many people have depression but are unaware of it. They might think that it's just a part of life because things are rough, and they aren't aware that they're really just depressed. After reading this, you will be able to tell whether you or someone else has the symptoms of depression.

TIP! Take a bath if you have depression symptoms that are not passing quickly. A nice soak with a beloved book or a favorite album on the stereo will elevate your spirits.

An antidepressant may help you overcome depression. You can begin feeling much better by taking the right antidepressant. There are many available, which means you may have to try several before finding one that is effective.

Look for support where you can get it. People who have battled through depression on their own may be able to help you the most by sharing their knowledge and experiences.

Always remember that you control your own thoughts.
Adjust your vocabulary to remove negative words such as "depressed." The word does nothing except cause you to refocus on negative thoughts and circumstances. Try using a phrase such as "low mood" to describe those feelings for a better outlook.

Find an activity you enjoy, such as a concert, time with friends who make you laugh, or a funny movie. Socialization can often lessen the feelings of depression.

TIP! Do not use the words "depression" or "depressed" in your vocabulary. The words "depressed" and "depression" carry much baggage in connection with feelings of hopelessness.

Many things can cause depression; therefore, you need to attempt to discover the root cause of your depression. If you have an idea of what is causing the problem, you can either go about eliminating the cause or deal with it in a different way.

TIP! If you have clinical depression, it won't go away right away. This is going to be a hard-fought battle that takes time to win.

You need to be supportive of anyone battling depression within your family or social circle. Depression sufferers need to feel safe and comfortable during their bouts. A number of resources are available, such as books and online sources, that can provide you with the guidance you need to help your loved one.

TIP! Sadness is a normal emotion that comes along with difficult situations, but clinical depression is usually created by a chemical imbalance. A low mood may be only sadness rather than depression, so before you jump to any conclusions, it helps to seek professional counsel to correctly diagnose your symptoms.

If you are worried that someone you care about is in a depressive episode, seek professional help immediately. It can be hard to face depression alone, so seek the advice of a professional to guide you.

Unfortunately, millions of people have to deal with arthritis every year, whether it be rheumatoid arthritis or osteoarthritis.
Some tips will help you understand how to accept the disease and limit its negative effects.

TIP! If you have arthritis, make sure to get a good night's sleep. By not getting enough shut-eye, your body won't get the chance to fight the painful symptoms of arthritis.

Though it may be a challenge, you really do need to exercise as much as you can if you deal with arthritis. If arthritic joints are not moved regularly, they will become stiffer and more painful with time, exacerbating your condition and making it more difficult to treat. Range of motion can be increased by doing flexibility exercises.

TIP! Relaxation techniques, such as meditation and yoga, can provide relief from chronic arthritis. The techniques of yoga have been shown to reduce the painful symptoms of arthritis by relaxing the mind and body.

Acupuncture can reduce the pain you get from arthritis. Studies have shown that acupuncture offers real pain relief for arthritis sufferers. This method should not be considered a one-time deal, as consistency is needed to show true benefits.

TIP! If you are unable to cut your own toenails, consider having a pedicure. This allows you to avoid having to use your fingers.

You need to exercise, but find out what is good for you first. Exercise will improve both your flexibility and your strength, reducing the impact that arthritis has on your body. Low-impact exercises can prevent your joints from becoming inflamed, but you have to take care not to overexert yourself. When exercise begins to be painful, it is time to take a break.

Pain Killers

TIP! You should get tests done to expose any deficiencies. If you are chronically deficient in certain vitamins, especially vitamin B-12 and iron, your arthritis flare-ups will be more frequent and more painful.

Try to stay away from painkillers when coping with the pain of arthritis. Many painkillers can quickly lead to addiction. Take any prescription medication exactly as directed by your physician.

TIP!
Let your family and friends understand exactly what you're going through with arthritis. If the people close to you know exactly what you are experiencing, they will understand your situation more easily and be able to help.

If you suffer from psoriatic arthritis, do not try to do everything. Your energy is compromised. Ignoring your symptoms and pretending that everything is the same as it used to be will only cause you to experience more pain. Just put all of your energy into the things most important to you. Remember, nobody expects you to do everything all the time.

TIP! Castor oil can be rubbed on your joints to make the stiffness go away. The oil has therapeutic properties, and the act of massage also helps.

As the article you've just read has mentioned, all different types of people suffer from arthritis. Once you know more about arthritis and its symptoms, you will find that it is easier to manage. Apply these tips to transform your lifestyle in a way that corresponds to your condition.

It can be difficult to accomplish your goal of building muscle. You must dedicate yourself to a diligent diet and maintain an intense level of working out. Sometimes, in the absence of immediate results, it can be easy to become discouraged. The article below has many tips that will help you improve your workouts and increase your muscle.

Make sure you have enough vegetables in your diet. A lot of diets that promote weight training put a lot of emphasis on consuming proteins and carbohydrates; however, vegetables are usually ignored. Vegetables give you important nutrients that aren't in foods that usually have a lot of protein or carbs. Veggies are also good sources of fiber, which allows your body to use the protein you consume more efficiently.

TIP! Warming up is vital to your success in increasing muscle mass. As you build muscle and get stronger, you can actually become vulnerable to injury.

TIP! Keep the "big three" in mind and incorporate them into your exercise routine.
These mass-building exercises include deadlifts, bench presses, and squats. Not all exercises are created equal, so be sure to do the exercises that address your specific goals. You should know that different exercises allow you to focus on different muscle groups, toning or building. Make sure you use the correct exercise techniques to build muscle in specific muscle groups.

When attempting to put on muscle, you'll have to ensure you are consuming enough calories. You need to eat the amount necessary to pack on one more pound each week. Find healthy ways to get anywhere from 250 to 500 more calories daily. If you don't see any weight change, consider altering your eating habits.

If you are going to use creatine supplements to assist with your muscle gain, you should use caution, especially when taking them for an extended period of time. These supplements can be harmful if you have any sort of kidney issues. They have also been implicated in causing heart arrhythmias, muscle compartment syndrome, and muscle cramps. Young people should not take these supplements. Be sure you keep your creatine intake at or below suggested safety levels.

TIP! Compound exercises are crucial when building muscle. Compound exercises work more than one muscle group at once.

Building up muscle can take a lot of time and effort. You need to work out on a regular basis, and the intensity of the workouts can be unforgiving. You also have to pay close attention to your nutrition. Don't let poor choices undermine your efforts. Follow the advice provided in this article to ensure that your muscle-building efforts will be successful.
Sentence Examples

• I get the hint this Ronnie and Howie don't dance to the same fiddle player.
• Molly, why don't you go in my office and fiddle with my computer?
• She could play him like a fiddle - or was Alex merely that amiable?
• 2.-Marble Idols, Amorgos; 6-II Fiddle And Mallet Types, 12-14, Developed Types.
• Its panelled front was in the likeness of a ship's bluff bows, and the Holy Bible rested on a projecting piece of scroll work, fashioned after a ship's fiddle-headed beak.
Backing vocalist

[Image: One of The Wives, the backing vocalists for English singer Ebony Bones.]

Backing vocalists are singers who provide vocal harmony with the lead vocalist or other backing vocalists. In some cases, a backing singer may sing alone as a lead-in to the main vocalist's entry or to sing a counter-melody. Backing vocalists are used in a broad range of popular music, traditional music, and world music styles.

Solo artists may employ professional backing vocalists in studio recording sessions as well as during concerts. In many rock and metal bands (e.g., the power trio), the musicians doing backup vocals also play instruments, such as guitar, electric bass, drums, or keyboards. In Latin or Afro-Cuban groups, backup singers may play percussion instruments or shakers while singing. In some pop and hip-hop groups and in musical theater, the backup singers may be required to perform elaborately choreographed dance routines while they sing through headset microphones.

The style of singing used by backup singers varies according to the type of song and the genre of music the band plays. In pop and country songs, backup vocalists may perform vocal harmony parts to support the lead vocalist. In hardcore punk or rockabilly, other band members who play instruments may sing or shout backup vocals during the chorus (refrain) section of the songs.

Other terms include backing singers, backing vocals or, especially in the U.S. and Canada, backup singers or sometimes background singers or harmony vocalists.

While some bands use performers whose sole on-stage role is performing backing vocals, it is common for backup singers to have other roles. Two notable examples of bands whose members sang backup are The Beach Boys and The Beatles.
The Beach Boys were well known for their close vocal harmonies, occasionally with all five members singing at once, as on "In My Room" and "Surfer Girl". All five members would sing lead, although most often Brian Wilson or Mike Love sang lead, with guitarists Carl Wilson and Al Jardine and drummer Dennis Wilson singing background harmonies.

The Beatles were also known for their close style of vocal harmonies: all members sang both lead and backup vocals at some point, especially John Lennon and Paul McCartney, who frequently supported each other with harmonies, often with fellow Beatle George Harrison joining in. Ringo Starr, while not as prominent in the role of backup singer as his three bandmates, can be heard singing backing vocals on such tracks as "The Continuing Story of Bungalow Bill" and "Carry That Weight". Examples of three-part harmonies by Lennon, McCartney, and Harrison include "Nowhere Man", "Because", "Day Tripper", and "This Boy".

The members of Crosby, Stills, Nash & Young and the Bee Gees each wrote songs and sang backup or lead vocals and played various instruments on their albums and on various collaborations with each other. Former Red Hot Chili Peppers guitarist John Frusciante sang all backing vocals (few songs were recorded without backing vocals), often singing some parts without accompaniment from lead vocalist Anthony Kiedis; Frusciante usually sang one song by himself during concerts. Another example is "No Frontiers" by The Corrs, which is sung by Sharon and Caroline. Other backing vocalists include Sebastien Lefebvre (also rhythm guitarist) and David Desrosiers (also bass guitarist) of the pop/punk band Simple Plan, John Petrucci, Per Wilberg, and guitarists Zacky Vengeance and Synyster Gates of heavy metal band Avenged Sevenfold.
Lead singers who record backup vocals

In the recording studio, some lead singers record their own backing vocals by overdubbing, because the sound of their own harmonies will blend well with their main vocal. One famous example is Freddie Mercury singing the first part of "Bohemian Rhapsody" himself by overdubbing. Patrick Stump of Fall Out Boy; Wednesday 13, in his own band and as the lead and backing vocalist of Murderdolls; Ian Gillan of Deep Purple; and Brad Delp of Boston all recorded lead and backing vocals for their albums. With the exception of a few songs on each album, Dan Fogelberg, Eddie Rabbitt, David Bowie, and Richard Marx sing all of the background vocals for their songs. Robert Smith of The Cure not only sings his own backing vocals in the studio but also doesn't perform with backing vocalists when playing live.

Different approaches

Many metalcore and some post-hardcore bands, such as As I Lay Dying, Alexisonfire, Haste the Day, and Silverstein, feature a main vocalist who performs using harsh vocals, while the backing vocalist sings harmonies (clean vocals) during choruses to create a contrast. Some bands, such as Hawthorne Heights and Finch, have the backup singers do harsh vocals to highlight specific lyrics.

Pop and R&B vocalists such as Diana Ross, Mariah Carey, Michael Jackson, Janet Jackson, Beyoncé Knowles, Brandy, Faith Evans, D'Angelo, Mary J. Blige, and Amerie have become known for not only recording their own backing vocals but also for arranging their own multi-tracked vocals and even contriving highly complex harmonies and arrangements. When they perform live, they may have backing vocalists who imitate their voices.

Career paths

Working as a backup singer can give a vocalist the onstage experience and vocal training they need to develop into a lead vocalist.
A number of lead vocalists, such as Ace Frehley, Richard Marx, Mariah Carey, Cher, Gwen Stefani, Pink, Whitney Houston, Phil Collins, Sheryl Crow, Trisha Yearwood, Dave Grohl, Jerry Only, Jerry Cantrell, Jason Newsted, and Elton John, learned their craft as backup singers or by singing backup vocals as part of a choir.
Domokos

For the village of Dămăcuşeni in Romania, called Domokos in Hungarian, see Târgu Lăpuş.

Coordinates: 39°08′N 22°18′E
Country: Greece
Administrative region: Central Greece
Regional unit: Phthiotis
Municipality area: 707.5 km² (273.2 sq mi)
Municipality population (2011)[1]: 11,495 (density 16/km², or 42/sq mi)
Municipal unit population (2011): 4,633
Town population (2011): 1,531
Time zone: EET (UTC+2); summer (DST) EEST (UTC+3)
Vehicle registration: ΜΙ

Domokos (Greek: Δομοκός), the ancient Thaumacus or Thaumacie,[2] is a town and a municipality in Phthiotis, Greece. The town of Domokos is the seat of the municipality of Domokos[3] and of the former Domokos Province. The town is built on a mountain slope overlooking the plain of Thessaly, 36 km from the city of Lamia.

The ancient diocese of Domokos became Greek Orthodox in 1882 but was suppressed in 1899. A parallel series of Latin bishops was begun in 1204 and continued as a titular diocese down to 1943. The area of Domokos became part of Greece in 1881, when the Ottoman Empire ceded Thessaly and a few adjacent areas to Greece. Until 1899, it was part of the Larissa Prefecture.

Battle of Domokos

In 1897, during the Greco-Turkish War, about 2,000 Italian volunteers under the command of Giuseppe Garibaldi's son, Ricciotti Garibaldi, helped the Greeks in the battle of Domokos. Among them was also a member of the Italian Parliament, Antonio Fratti, who died in the fighting. The Turkish army was victorious over the Greek army.

The municipality of Domokos was formed at the 2011 local government reform by the merger of 3 former municipalities, which became municipal units.[3] The province of Domokos (Greek: Επαρχία Δομοκού) was one of the provinces of Phthiotis.
It had the same territory as the present municipality.[4] It was abolished in 2006.

References
2. Lewis and Short
3. Kallikratis law, Greece Ministry of Interior (Greek)
[Photo: Kevin Lamarque / Reuters. U.S. President Barack Obama sits with Speaker of the House John Boehner during a memorial service for former Speaker Tom Foley in the Capitol in Washington, October 29, 2013.]

The Missing Middle in American Politics: How Moderate Republicans Became Extinct

Review

After Lyndon Johnson's victory over Barry Goldwater in the 1964 U.S. presidential election, the once-mighty Republican Party was reduced to a regional rump. The Democrats won overwhelming majorities in the House and the Senate, which they used to pass Johnson's Great Society legislation. Republicans, meanwhile, were at one another's throats, having endured the most divisive campaign in modern political history. Goldwater had managed to win the Republican presidential nomination over the impassioned opposition of moderate and progressive Republicans, who at the time may well have constituted a majority of the party's members. Moderates blamed Goldwater's right-wing views for the defection of millions of Republican voters.

To rebuild the party, a number of moderate Republican governors banded together to form the Republican Governors Association, designed to serve as a counterweight to the Republican National Committee, which had been captured by Goldwater conservatives. Shortly after the election, the association issued a statement, sponsored by Michigan Governor George Romney and other leading moderates, calling for a more inclusive GOP and criticizing Goldwater's campaign. Stung by the failure of many moderates to actively support or even formally endorse his candidacy, Goldwater retorted that he needed no lessons in maintaining unity, having urged party members in 1960 to look past philosophical differences and pull together to support Richard Nixon's presidential candidacy. Goldwater wrote a letter to Romney dripping with contempt: "Now let's get to 1964 and ask ourselves who it was in the Party who said, in effect, if I can't have it my way I'm not going to play?
One of those men happens to be you.” Romney wrote a lengthy reply to Goldwater, warning against European-style polarization. “Dogmatic ideological parties tend to splinter the political and social fabric of a nation,” Romney wrote. Worse, he added, political parties with fixed ideological programs “lead to governmental crises and deadlocks, and stymie the compromises so often necessary to preserve freedom and achieve progress.”  Romney’s words seem particularly prescient today, as polarized politics have caused the U.S. government to seize up. But what would the elder Romney, who died in 1995, have made of his own son’s embrace of a more orthodox conservatism -- the very kind of politics the elder Romney feared would damage the country?  Mitt Romney began his political career very much in the moderate mold. In 1994, running for the U.S. Senate seat in Massachusetts held by Ted Kennedy, the popular liberal Democratic incumbent, Romney forcefully maintained that he had been an independent during the Reagan years. On abortion, he was firmly pro-choice. While Republican candidates across the country were rallying around Representative Newt Gingrich’s “Contract With America,” Romney distanced himself from it. “If you want to get something done in Washington,” he said in a debate during the campaign, “you don’t end up picking teams with Republicans on one side and Democrats on the other.”  Romney’s defeat that year did not quite cure him of his moderate impulses. During the battle for the 1996 Republican presidential nomination, Romney, as a private citizen, purchased newspaper advertisements in New Hampshire criticizing the publisher and candidate Steve Forbes’ call for a flat tax, deriding it as “a tax cut for fat cats.” And as a 2002 gubernatorial candidate in Massachusetts, Romney defeated a weak Democratic opponent in large part by touting his moderate bona fides.  
Yet as a candidate for the Republican presidential nomination in 2008 and now 2012, Romney has shifted decisively to the right, embracing the party’s anti-tax consensus, reversing his decades-long support for abortion rights, and taking a much harder line on entitlement spending. He has been careful to avoid being outflanked on his right by his various GOP rivals, attacking Gingrich and Texas Governor Rick Perry for being insufficiently tough on immigration. And he has generally cheered on House Republicans in their fierce opposition to President Barack Obama’s domestic agenda. Departing from the more decorous tone of his previous campaigns, Romney has described the president as “a crony capitalist,” a “job killer” whose policies will “poison the very spirit of America and keep us from being one nation under God.” Like so many erstwhile moderates, Romney has survived in today’s more confrontational, ideological GOP by finally picking a team.  The dominant ideology and style of today’s Republican Party would have been utterly alien to Romney’s father. In Rule and Ruin, the historian Geoffrey Kabaservice’s vivid account of the pitched ideological battles that shaped the postwar Republican Party, George Romney is cast as the last hope of a moderate Republicanism that has all but vanished. Born into poverty in a Mormon colony in northern Mexico, Romney rose to become the chief executive of the American Motors Corporation. There, he succeeded in taking on the Big Three car companies, scoffing at their “gas-guzzling dinosaurs” and offering sleek, fuel-efficient compacts that anticipated the later triumphs of the Japanese automobile industry. Like many self-made business executives of the time, Romney felt a deep sense of moral obligation, which flowed in part from his devout religious faith. As poor African Americans from the Deep South settled in and around Detroit, Romney made it his mission to better their condition. 
Shortly after his election as governor in 1962, Romney pressed for a massive increase in spending on public education and on generous social welfare benefits for the poor and unemployed. During Romney’s first term alone, Michigan’s state government nearly doubled its spending, from $684 million in 1964 to $1.3 billion in 1968. To finance the increase, Romney fought for and won a new state income tax, which would become a thorn in the side of future Michigan Republicans.  What separated Romney from liberal Democrats who were similarly eager to expand government was his conviction that he was doing God’s work on earth. Today, it is entirely common for Republican presidential candidates to describe the Declaration of Independence and the Constitution as divinely inspired documents, as Romney did. But in the mid-1960s, as Kabaservice observes, such religiosity was unusual, at least for a moderate Republican. Kabaservice briefly speculates that Romney’s brand of moralistic progressivism might have resonated with many Christian voters who instead embraced a harder-edged form of conservatism infused with evangelical fervor. But Romney’s political program was badly undermined by the 1967 Detroit riots, which discredited the notion, fairly or not, that large-scale social spending was the most effective route to social uplift, at least among conservatives. Disagreements on race and the Vietnam War fueled the split in the late 1960s between the radical New Left and the liberal Democratic establishment. But the upheaval of the late 1960s also divided the Republicans. Conservatives of that era saw themselves as defending the United States’ founding ideals against communism abroad and radicalism at home. Moderates, in contrast, sought to modernize the GOP: to keep up with the baby boomers’ shifting sensibilities on social issues and to share in their embrace of a more diverse and dynamic society. 
Some even praised what they saw, perhaps naively, as the freedom-loving spirit of the antiwar movement.  Yet as Kabaservice relates, the moderates never coalesced into a movement with a coherent program and ideology, despite Dwight Eisenhower’s earlier attempts to build a modern party that embraced the New Deal and a vision of responsible American global leadership. This failure left moderate Republicans in an awkward position. Those who shared the Democratic faith in activist government, tempered by a desire for decentralization and fiscal rigor, found themselves gravitating to the left. Those who shared conservative skepticism of big government, tempered by a recognition that Social Security and Medicare were here to stay, found themselves gravitating to the right. There was no glue to hold the two sides together. Ultimately, Kabaservice argues, it was this lack of coherence that doomed the centrists within the Republican Party. The absence of a rigid ideology freed them to embrace creative solutions to emerging social problems, which proved useful when they were in power. But precisely because they were so allergic to ideology, the moderates were disinclined to rally the troops or to wage scorched-earth campaigns against their political enemies. Even when they had the advantage of numbers, as they did after Goldwater’s 1964 defeat, they routinely failed to coordinate their efforts, never managing to build the kind of grass-roots fundraising network that fueled the rise of the political right. Instead of offering a set of clear political commitments, moderate Republicans instead asked voters to trust their judgment, to have faith that intelligent, thoughtful, evenhanded leaders would govern well. After Vietnam and Watergate, however, Americans hungered for politicians with clear convictions, leaders who would never betray them. This was true on the left but even more so on the right. 
And the surest way to guard against betrayal was, and still is, to force politicians to commit themselves to a well-defined set of propositions. In the 1960s, that meant no recognition of communist China; today, it means no new taxes.  There is no question that such commitments reduce a politician’s room for maneuver and make legislative compromise difficult, if not impossible. But political commitments also increase democratic accountability, which is prized by many voters, especially educated ones. Although today’s political landscape might frustrate those who are eager for pragmatism and bipartisanship, there is no question that the Democratic and Republican Parties represent distinctive priorities and visions.  Kabaservice is searingly critical of the conservative movement that eventually triumphed within the GOP. His chief complaint is the distance between what conservatives have said and how they have governed. In a particularly vivid passage lamenting the failures of George W. Bush’s presidency, he writes that “a Republican Party without moderates was like a heavily muscled body without a head.” After Bush’s 2004 reelection, Republicans held majorities in the House and the Senate for the fifth straight election, but, Kabaservice observes, “conservatives proved unable to achieve their goals, largely because they lacked the ideas the moderates had once provided and the skill at reaching compromise with the opposition at which moderates had excelled.” The irony of the decline of the moderates is that it made the achievement of conservative goals all but impossible.  Indeed, as conservative rhetoric has grown increasingly hostile to government since the mid-1960s, the size of government has continued to expand, even when conservatives have been in power. 
Bush himself, having promised to restrain the growth of the government, presided over an increase in federal spending as a share of GDP from 18.2 percent in 2000 to 20.7 percent in 2008, reversing the trend under his Democratic predecessor. And between 1950 and 2009, state and local spending increased as a share of GDP from 7.7 percent to 15.5 percent. Even in states where conservatives have dominated, such as Nevada and Texas, spending has increased at an alarming rate as conservatives have aped their liberal foils, responding to a growing appetite for public services by increasing spending rather than by improving the productivity and efficiency of existing institutions. And at the federal level, conservatives have generally acquiesced to increased spending while refusing to levy taxes high enough to pay for it. In effect, this has meant delivering big government while only charging for small government -- a politically attractive proposition that has proved fiscally ruinous.

“We are all Keynesians now,” Nixon is sometimes reported to have said in 1971. (In fact, his remark was less sweeping: “I’m now a Keynesian in economics.”) But Nixon’s treasury secretary, John Connally, may have left a more lasting mark on the Republican Party than any economist. After decades of GOP support for subsidizing favored industries from defense to oil and gas to Sunbelt housing construction, a cynic might argue that Republicans are all Connallys now.

The rise of the Tea Party movement briefly seemed like an intriguing exception to this general drift. The movement has often been interpreted as a brand of populist conservatism virtually indistinguishable from the supply-side conservatism of the Reagan era. But supply-side economics was an optimistic creed that rejected the idea of the market as a zero-sum game and celebrated a vision of a flourishing society in which everyone should, could, and would be richer, freer, and happier if taxes were low and GDP growth robust. 
The Tea Party movement offers a far less sunny worldview. Far from inheriting the optimism of the Reagan-era supply-siders, the Tea Party shares more with the Old Right, the earlier form of conservatism that Reaganite supply-siders derided as “root-canal economics” for its emphasis on spending cuts -- and, in some cases, tax increases -- as instruments of hard-nosed fiscal discipline. Like the Old Right, the Tea Party conceives of the United States as divided between those who work hard and play by the rules and those who game the system, whether by engaging in petty welfare fraud or by seeking government favors through lobbying and campaign contributions. This sentiment has not led to a compelling critique of the country’s broken financial and political systems, however. The fierce opposition of the libertarian Republican congressman Ron Paul to the Federal Reserve has earned him considerable standing among some grass-roots conservative activists. But for the most part, more realistic proposals to constrain the power of big banks and reduce the implicit and explicit subsidies that flow to them have fallen on deaf ears. Indeed, the Tea Party movement, like the conservative movement of the 1960s and 1970s, seems deeply hostile to technocratic proposals of any kind, even those that could foster a more decentralized and market-oriented society.  In The Tea Party and the Remaking of Republican Conservatism, the political scientist Theda Skocpol and her co-author, Vanessa Williamson, draw on a wide range of sources to describe the movement’s origins and worldview. Although anchored by extended conversations with individual Tea Party activists, the book adds little to the thousands of newspaper and magazine articles that have been written about the Tea Party in the past few years, retracing an already familiar portrait. Skocpol and Williamson observe that Tea Party activists tend to be relatively affluent and middle-aged or older. 
The vast majority vote Republican, although some identify as conservative-leaning independents. They tend to be wary of claims of technocratic expertise and prefer citizen engagement over deference to elites. Reverence for the U.S. Constitution is an essential aspect of the Tea Party’s ideology, and members of the movement often invoke the founding documents. Skocpol and Williamson also anatomize the three main components of the Tea Party movement: grass-roots organizations; well-funded national advocacy groups, such as FreedomWorks; and a media nexus of Fox News and conservative talk radio.  Skocpol and Williamson attempt to maintain a disinterested tone. But they often cannot conceal their hostility to the Tea Party, the GOP, and conservatism more generally, as when they warn that Republicans “will continue to talk about ‘America going broke’ and the ‘need to slash spending’ and ‘cut taxes,’ without getting overly specific until just before they seize the chance -- if one presents itself -- to push through major restructurings of Medicare and Social Security.” The reader is left to conclude that Skocpol and Williamson believe that there is something sinister about trying to reduce the national deficit and that efforts to restructure Medicare and Social Security are wholly unrelated to the federal government’s fiscal woes.  Still, Skocpol and Williamson rightly diagnose a major weakness of contemporary Republican reform efforts. Because conservatives have so strenuously made the case against government and the welfare state, they have undermined their credibility as champions of reform. Scholars and voters alike are now skeptical when conservative Republican reformers, such as Representative Paul Ryan of Wisconsin, promise that they intend to put the U.S. social safety net on a sounder footing, not to destroy it.  There is no doubt that a reliance on antigovernment rhetoric has created a troubling vacuum at the heart of the conservative project. 
The Tea Party movement and its rejectionism now define public perceptions of the post-Bush Republican Party. And it is true that for years, congressional Republicans have been extremely reluctant to take on issues such as tax reform and health care -- the kind of issues that consumed moderate Republicans in an earlier era -- because conservatives see them as a political and intellectual dead end. Now, however, some Republicans, led primarily by Ryan, have advanced a number of significant proposals, including a sweeping Medicare reform and a base-broadening overhaul of the tax code. Ryan has shown an openness to the ideas of the avowedly moderate Bipartisan Policy Center and even to raising tax revenues, a move that has long been anathema to conservatives. Late last year, Ryan signaled a willingness to compromise by joining with Senator Ron Wyden of Oregon, a Democrat, to advance a Medicare reform proposal -- one that specifically addresses Democratic objections to an earlier plan Ryan had proposed.  Around the same time, congressional Republicans experienced a sharp political reversal in a showdown with Obama over extending a temporary payroll tax cut. Republican brinkmanship, which had earlier threatened chaos during a battle over increasing the debt limit, was met with near-universal opprobrium from the voting public. After the Republicans gave in to Democratic and popular demands that the payroll tax cut be extended, Obama experienced an immediate surge in his approval ratings.  Conservative Republicans and their Tea Party supporters were chastened by this defeat, and the Tea Party’s grip on the GOP shows some signs of loosening. But moderate Republicanism will not return as a bona fide movement anytime soon, despite the efforts of right-of-center public intellectuals such as David Frum and David Brooks. 
The social group that contributed so heavily to the moderate movement of yesteryear -- upper-middle-class social liberals who live in big cities and their suburbs -- has shifted overwhelmingly to the Democratic Party, and it seems unlikely that those voters will ever return to the GOP. Yet the moderates’ flexibility and pragmatism are experiencing a tentative renaissance, as younger conservatives, led by figures such as Ryan, face up to their movement’s shortcomings. Moderate Republicans may no longer exist, but their legacy persists, and conservative Republicans will need to recapture the moderates’ creativity and problem-solving impulses if they ever hope to take power, hold on to it, and govern effectively.

In This Review:
Rule and Ruin, by Geoffrey Kabaservice. Oxford University Press, USA, 2012. 504 pp. $29.95.
The Tea Party and the Remaking of Republican Conservatism, by Theda Skocpol and Vanessa Williamson. Oxford University Press, USA, 2012. 264 pp. $24.95.
Inventing the Computer (1st edition). ISBN 9780778728382. Published by Crabtree Publishing Company.

About Inventing the Computer: Ages 8 to 14 years. It is hard to imagine that something so important to our daily lives has been around for less than 70 years. People use computers today to calculate, store information, send messages, control other machines, and even to play games. Inventing the Computer tells the incredible story of computers and their inventors, from the early days when computers performed simple functions and filled entire rooms to today's compact, super-fast supercomputers. Topics include: computers as machines that add and analyse; Colossus I and the race to break enemy codes in World War II; computer codes, transistors and chips, software and peripherals; the personal computer; computers of the future.
This article was posted on 02/01/2013

Managing power through the multicore revolution

Multicore processors enable mobile and non-mobile multimedia devices to deliver dramatically increased performance and functionality

As consumers continue to demand new functions and increased performance from devices such as smartphones, tablets and PCs, multicore processors have superseded traditional single-core devices. The latest families of multimedia application processors from leading vendors implement advanced architectures such as the ARM Cortex-A9 or Cortex-A15 core, and cover a broad performance spectrum with single, dual or quad-core variants. The launch of ARM’s asymmetrical big.LITTLE system builds on the multicore philosophy and also moves it forward, optimising power efficiency across all processing loads by combining a high-performance core such as the Cortex-A15 with an extremely energy efficient and architecturally compatible core such as the Cortex-A7. The emerging families of multicore application processors also integrate peripherals such as the DRAM controller and a media/graphics co-processor like the ARM Neon, which deliver further performance gains.

Power management evolution

When dual-core application processors entered the market, around 2011, the power architecture typically used with single-core devices was simply extended to power both processor cores from common supply rails. As the multicore roadmap has progressed, with quad-core devices already in the market, reports of octa-core devices in development and the prospect of even more complex processors in the future, extra flexibility is needed to control power to the cores individually to help optimise energy efficiency. This calls for extra complexity in the power management architecture, separating each core into its own power domain supplied by an individual regulator, as shown in Fig. 1. This approach allows the use of smaller regulators, sized to supply a lower worst-case current demand. Fig.
1: Separating cores into individual power domains enhances flexible and energy efficient power management

Another important factor driving changes in the power architecture of multicore systems is the adoption of deep nanometre fabrication processes at 40-nm, 32-nm, and, more recently, 28-nm nodes. These cannot support the 5-V battery voltage (VBAT) connected to the input of each regulator, as smaller CMOS feature sizes demand lower operating voltages, effectively reducing the maximum voltage that can be applied. For this reason, it is now necessary to migrate the power management functions out of the application processor into a separate device. This contrasts with the approach taken in first-generation mobile devices, in which power management was typically integrated with the application processor in a single-chip implementation. With the trend towards a more complex multi-regulator architecture, implemented off-chip in a separate device, a new generation of more advanced and sophisticated power management ICs (PMICs) is emerging. The features and capabilities of these PMICs are evolving to help improve energy efficiency across the numerous usage modes of today’s consumer mobile and multimedia products. Typically, multiple switching regulators are implemented including buck regulators to supply low voltages such as the processor core and I/O voltages (which can be as low as 1 and 2 V, respectively, for processors fabricated at 28 nm), memory ICs and other peripheral devices. A boost converter may also be implemented for supplying LED strings such as the screen backlight. In addition, integrated low dropout (LDO) regulators can be used for powering important subsystems such as sensors, indicator LEDs or motors. 
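The pressure to lower core voltages and to power cores individually follows directly from CMOS dynamic power, which scales roughly as P = α·C·V²·f. The sketch below illustrates the arithmetic only; the activity factor, capacitance and operating points are assumed numbers for illustration, not figures from any datasheet or from this article.

```python
# Illustrative sketch: CMOS switching power scales as alpha * C * V^2 * f,
# so a lower voltage/frequency point -- or gating an idle core's domain
# entirely -- cuts energy sharply. All numbers below are assumptions.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Approximate CMOS switching power in watts: alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Hypothetical operating points for a 28-nm-class core.
high = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.1, f_hz=1.5e9)
low = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.9, f_hz=1.0e9)

print(f"high point: {high:.3f} W, low point: {low:.3f} W")
print(f"stepping down saves about {1 - low / high:.0%}")
```

Because voltage enters squared, even a modest step from 1.1 V to 0.9 V halves the switching power before the frequency reduction is counted, which is why per-core domains with individually adjustable regulators pay off.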
A variety of battery charging functions may be implemented, from a small power supply of a few milliamps for charging a backup coin cell or super-capacitor to a digitally controlled multi-mode lithium battery charger capable of connecting to a variety of sources such as a wall charger, USB 5-V supply or a car charger. Additional features such as an analog-to-digital converter for monitoring external voltages and temperature may also be available. Moreover, power supervision and control intelligence implemented on-chip allows the PMIC to handle important functions such as power-up/power-down sequencing, reset generation and interrupt handling. This can help designers improve overall system reliability and energy efficiency.

PMIC in focus

As an example of the emerging generation of PMICs optimised for multicore applications, the DA9063 from Dialog Semiconductor has six buck regulators which operate at a fixed 3-MHz switching frequency. This allows the use of 1-µH inductors that are only 1-mm high, thereby supporting ultra-low-profile dimensions for mobile devices while allowing the regulators to supply high peak-current demands. Dynamic voltage control (DVC) allows adaptive adjustment of the supply voltage according to the processor load. Three of the buck regulators are capable of supplying up to 2.5 A, while the remainder deliver up to 1.5 A. Connecting regulators in parallel creates 5 A or 3 A rails to satisfy the core-current demands of today’s highest performing processors. Hence designers can scale or adapt the configuration to suit a variety of system requirements. There are also 11 programmable LDO regulators, with output current ratings ranging from 100 to 300 mA. Support for remote capacitor placement and operation from a low input voltage of 1.5/1.8 V allow cascading with a suitable buck supply to improve overall system efficiency. 
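The rail-building arithmetic above can be sketched as follows. The regulator count and current ratings are taken from the text; the planning helper itself is a hypothetical illustration, not part of any vendor API.

```python
# Illustrative sketch of building supply rails by paralleling bucks.
# Ratings follow the article (three 2.5-A and three 1.5-A regulators);
# the helper function is hypothetical, not a vendor API.

BUCK_RATINGS_A = [2.5, 2.5, 2.5, 1.5, 1.5, 1.5]

def rail_capacity(buck_indices):
    """Total current capacity of a rail formed by paralleling bucks."""
    return sum(BUCK_RATINGS_A[i] for i in buck_indices)

core_rail = rail_capacity([0, 1])  # two 2.5-A bucks for a 5-A core rail
aux_rail = rail_capacity([3, 4])   # two 1.5-A bucks for a 3-A rail
```

Paralleling is what lets the same six-regulator part serve a quad-core processor drawing 5 A on one rail in one design, and several independent lower-current domains in another.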
A number of the LDOs can also be configured as current-limited bypass switches to support other peripherals such as memory cards or externally connected accessories. Moreover, some are optimised for low-noise applications, and one can be configured as a 6-bit PWM-controlled vibration motor driver for implementing haptic user controls. The block diagram of Fig. 2 illustrates the six buck regulators, 11 LDOs, backup battery charger, and power-management and supervisory functions integrated in the DA9063.

Fig. 2: Power supply and management functions integrated in the DA9063 PMIC from Dialog Semiconductor

Improving system efficiency

Consolidating voltage regulators and intelligent power management functions in a separate PMIC such as the DA9063 provides the opportunity to implement a number of power-saving features that operate autonomously without intervention from the application processor. The power manager block has a startup sequencing engine that allows programmable start-up of internal and external regulators and rail switches. The PMIC has multiple operating modes including five low-power modes that draw as little as 20 μA, which give designers extra flexibility to minimise system power in all usage scenarios. Among these is a 1.5-μA real-time clock (RTC) mode with alarm and wakeup, allowing the system to operate in deep sleep with very low power consumption. Also, by using the PMIC’s rail-switch controllers to drive external FET switches, designers can reduce the leakage current from cores that are powered down. In addition, on-key button press detection allows configurable key-lock and application shut-down functions according to button press time. GPIO pins enable designers to implement many other power-saving functions including keypad supervision, application wake-up and timing-controlled enable of external regulators, power switches or other ICs. 
Inside the PMIC, dynamic voltage scaling in one or more switched-mode power domains helps to optimise processor energy per task, leading to higher efficiency. In addition, buck quiescent current and LDO dropout voltage are generally low compared to similar discrete solutions. This not only enhances efficiency but also lowers internal power dissipation. Integrating the lithium battery charger in a PMIC can deliver even more significant power savings. The extra efficiency of a switching charger with intelligence to track the battery charge can reduce internal power dissipation by more than 80% in a 1.3-A/5-V implementation.

Impact on future generations

PMICs such as this enable the latest consumer multimedia products to achieve the performance increases needed to deliver the experiences today’s buyers are demanding, while using battery power efficiently to achieve acceptable recharge intervals. In addition, a PMIC is essential for simplifying power distribution to the growing number of subsystems such as dual high-megapixel cameras, multiple radios supporting Bluetooth, Wi-Fi, NFC and 3G or 4G LTE cellular wireless links, and various LED strings for lighting and status indicators. Migrating power management from the baseband/application processor into a separate PMIC also provides greater freedom for designers to satisfy market demands for features such as larger screens with capacitive multi-touch control, and improved audio capabilities such as better speakerphone performance and high-definition audio playback. Some PMICs, such as the DA9059, integrate the audio subsystem comprising DSP, codecs, class-D speaker amplifier and class-G headphone amplifier in a single chip. This can yield bill-of-material savings of around 43%. In the future, devices such as 4G smartphones are expected to drive this architectural trend even further forward by implementing two complex PMICs serving the baseband and the application processor individually. 
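A rough back-of-envelope check makes the switching-charger figure quoted above plausible. The 5-V input and 1.3-A charge current come from the article; the battery voltage (3.8 V) and switcher efficiency (95%) are assumptions for illustration only.

```python
# Back-of-envelope comparison of linear vs. switching charger dissipation.
# 5-V input and 1.3-A charge current are from the article; the battery
# voltage (3.8 V) and switcher efficiency (95%) are assumed values.

V_IN, I_CHG = 5.0, 1.3
V_BAT, ETA = 3.8, 0.95

# A linear charger drops the whole input-to-battery delta across itself.
p_linear = (V_IN - V_BAT) * I_CHG   # watts dissipated in the charger

# A switching charger dissipates only its conversion loss.
p_out = V_BAT * I_CHG               # power delivered to the battery
p_switch = p_out * (1 / ETA - 1)    # watts dissipated in the charger

reduction = 1 - p_switch / p_linear
print(f"linear: {p_linear:.2f} W, switching: {p_switch:.2f} W "
      f"({reduction:.0%} less dissipation)")
```

Under these assumed numbers the linear topology dissipates well over a watt inside the chip while the switcher loses only a fraction of that, which is in the same ballpark as the greater-than-80% reduction the article cites.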
Effects of diet on survival, reproduction, and sensitivity of Ceriodaphnia dubia. Christine A. LaRocca, Donald E. Francisco, and Francis A. DiGiano. Water Environment Research, vol. 66, no. 7, November 1994, pp. 905-911. doi:10.2175/WER.66.7.7. http://www.ingentaconnect.com/content/wef/wer/1994/00000066/00000007/art00007

Abstract: The effect of diet on the health and robustness of Ceriodaphnia dubia was investigated. C. dubia were raised on three diets for 19 generations to evaluate survival and reproduction. The three diets used to culture C. dubia were the EPA-recommended one containing the green alga, Selenastrum capricornutum, plus a mixture of yeast, cereal leaves (Sigma Chemical), and trout chow (YCT); one containing the green alga, Chlamydomonas reinhardtii, plus YCT; and one with a combination of the two species of algae plus YCT. C. dubia also were subjected to various copper concentrations to evaluate the relative sensitivity to toxicants of animals raised on different diets. The levels of survival and reproduction of C. dubia raised on all three diets satisfied both the EPA and North Carolina Division of Environmental Management Mini-chronic Pass/Fail Ceriodaphnia Effluent Toxicity Test minimum standards for control animals used in a chronic toxicity test. Survival was not significantly different in any diet tested. Reproduction was higher in the S. capricornutum/C. reinhardtii/YCT diet than in the other diets. C. dubia raised on the S. capricornutum/C. reinhardtii/YCT diet were also less sensitive to copper than animals raised on the single algae diets. The results suggest that a diet that includes multiple species of algae is nutritionally superior to one that includes a single species of algae. The nature of the diet may determine whether a particular toxicant or effluent is toxic at a particular concentration.

Keywords: toxicity, Ceriodaphnia dubia, effluent, algae, bioassay
denominal

Syllabification: de·nom·i·nal
Pronunciation: /dēˈnämənl/

Definition of denominal in English: (Of a word) derived from a noun.

Example sentences:
• Phase is basically a noun, though of course there is a denominal verb meaning to carry something out by phases, etc.

As a noun: A verb or other word that is derived from a noun.

Example sentences:
• For a charted illustration of denominals and their opposites, please see the Appendix.
• Thirty-seven children with language deficits were identified and assessed on both standardized language measures and experimental tasks involving the generation of regular and irregular past tenses together with denominals.
• This seemed misguided to me; I think it would make more sense to say that the denominals are part of a VP-shell configuration, and the subject comes from the outer VP (which would mean, according to their theory, that it is in ‘agent’ position, and would be seen as ‘causing’ the action).

Origin: 1930s: from de- + nominal.
Discuss Prospero's journey in Shakespeare's 'The Tempest' in relation to his magic and his humanity and the changes he undergoes. Published: March 5, 2010 Shakespeare's so-called late plays, including works such as The Tempest and The Winter's Tale, present the audience with a world of incomparable wealth of interest in the unseen world of magic and adventure, all the while conveying Shakespeare's unique capabilities with the English language and his risk-taking attitude towards theatre. Despite this sudden change in attitude towards a riskier way of presenting his plays, Shakespeare still maintains the overall product found in many of his plays: that of the journey of a character, often ending in self-realisation and eventually death. None of these journeys is of a magical or even fantastical nature; they are simply of human nature, and, in the end, it is the human aspect of theatre, and of life, that Shakespeare attempts to convey. The journey of Prospero presents the story of a rogue, untrustworthy man who once chose self-benefit over serving his country and consequently paid the price, but he is, unusually, given a second chance. Although initially presented to the audience as a tragedy, Shakespeare writes The Tempest with a much more realistic take on events, combining both tragedy and comedy in a representation of what can be considered to be real life. Beginning the tale in the midst of a frantic scene upon a ship in a storm, "a tempestuous noise of thunder and lightning", certainly creates the tone of a tragedy. Prospero's daughter's statement, "If by your art, my dearest father, you have put the wild waters in this roar", initiates the magical side of the story, but also signposts the beginning of the turnaround in Prospero's so far tragic journey.
Although revenge is still on his mind due to his usurpation by his brother some time ago in Milan, for the sake of his daughter Prospero ensures that "There's no harm done", revealing very early on the more human, possibly caring, side of the man. However, his intentions are made clear through the description of his past: "Twelve years, since thy father was the Duke of Milan,..."
00:11 The oceans cover some 70 percent of our planet. And I think Arthur C. Clarke probably had it right when he said that perhaps we ought to call our planet Planet Ocean. And the oceans are hugely productive, as you can see by the satellite image of photosynthesis, the production of new life. In fact, the oceans produce half of the new life every day on Earth as well as about half the oxygen that we breathe. In addition to that, it harbors a lot of the biodiversity on Earth, and much of it we don't know about. But I'll tell you some of that today. That also doesn't even get into the whole protein extraction that we do from the ocean. That's about 10 percent of our global needs and 100 percent of some island nations. 00:49 If you were to descend into the 95 percent of the biosphere that's livable, it would quickly become pitch black, interrupted only by pinpoints of light from bioluminescent organisms. And if you turn the lights on, you might periodically see spectacular organisms swim by, because those are the denizens of the deep, the things that live in the deep ocean. And eventually, the deep sea floor would come into view. This type of habitat covers more of the Earth's surface than all other habitats combined. And yet, we know more about the surface of the Moon and about Mars than we do about this habitat, despite the fact that we have yet to extract a gram of food, a breath of oxygen or a drop of water from those bodies. 01:26 And so 10 years ago, an international program began called the Census of Marine Life, which set out to try and improve our understanding of life in the global oceans. It involved 17 different projects around the world. As you can see, these are the footprints of the different projects. And I hope you'll appreciate the level of global coverage that it managed to achieve.
It all began when two scientists, Fred Grassle and Jesse Ausubel, met in Woods Hole, Massachusetts where both were guests at the famed oceanographic institute. And Fred was lamenting the state of marine biodiversity and the fact that it was in trouble and nothing was being done about it. Well, from that discussion grew this program that involved 2,700 scientists from more than 80 countries around the world who engaged in 540 ocean expeditions at a combined cost of 650 million dollars to study the distribution, diversity and abundance of life in the global ocean. 02:15 And so what did we find? We found spectacular new species, the most beautiful and visually stunning things everywhere we looked -- from the shoreline to the abyss, from microbes all the way up to fish and everything in between. And the limiting step here wasn't the unknown diversity of life, but rather the taxonomic specialists who can identify and catalog these species that became the limiting step. They, in fact, are an endangered species themselves. There are actually four to five new species described every day for the oceans. And as I say, it could be a much larger number. 02:46 Now, I come from Newfoundland in Canada -- It's an island off the east coast of that continent -- where we experienced one of the worst fishing disasters in human history. And so this photograph shows a small boy next to a codfish. It's around 1900. Now, when I was a boy of about his age, I would go out fishing with my grandfather and we would catch fish about half that size. And I thought that was the norm, because I had never seen fish like this. If you were to go out there today, 20 years after this fishery collapsed, if you could catch a fish, which would be a bit of a challenge, it would be half that size still. So what we're experiencing is something called shifting baselines. Our expectations of what the oceans can produce is something that we don't really appreciate because we haven't seen it in our lifetimes.
03:28 Now most of us, and I would say me included, think that human exploitation of the oceans really only became very serious in the last 50 to, perhaps, 100 years or so. The census actually tried to look back in time, using every source of information they could get their hands on. And so anything from restaurant menus to monastery records to ships' logs to see what the oceans looked like. Because science data really goes back to, at best, World War II, for the most part. And so what they found, in fact, is that exploitation really began heavily with the Romans. And so at that time, of course, there was no refrigeration. So fishermen could only catch what they could either eat or sell that day. But the Romans developed salting. And with salting, it became possible to store fish and to transport it long distances. And so began industrial fishing. 04:13 And so these are the sorts of extrapolations that we have of what sort of loss we've had relative to pre-human impacts on the ocean. They range from 65 to 98 percent for these major groups of organisms, as shown in the dark blue bars. Now for those species that we managed to leave alone, that we protect -- for example, marine mammals in recent years and sea birds -- there is some recovery. So it's not all hopeless. But for the most part, we've gone from salting to exhausting. 04:39 Now this other line of evidence is a really interesting one. It's from trophy fish caught off the coast of Florida. And so this is a photograph from the 1950s. I want you to notice the scale on the slide, because when you see the same picture from the 1980s, we see the fish are much smaller and we're also seeing a change in terms of the composition of those fish. By 2007, the catch was actually laughable in terms of the size for a trophy fish. But this is no laughing matter. The oceans have lost a lot of their productivity and we're responsible for it. 05:08 So what's left? Actually quite a lot.
There's a lot of exciting things, and I'm going to tell you a little bit about them. And I want to start with a bit on technology, because, of course, this is a TED Conference and you want to hear something on technology. So one of the tools that we use to sample the deep ocean are remotely operated vehicles. So these are tethered vehicles we lower down to the sea floor where they're our eyes and our hands for working on the sea bottom. So a couple of years ago, I was supposed to go on an oceanographic cruise and I couldn't go because of a scheduling conflict. But through a satellite link I was able to sit at my study at home with my dog curled up at my feet, a cup of tea in my hand, and I could tell the pilot, "I want a sample right there." And that's exactly what the pilot did for me. That's the sort of technology that's available today that really wasn't available even a decade ago. So it allows us to sample these amazing habitats that are very far from the surface and very far from light. 05:56 And so one of the tools that we can use to sample the oceans is acoustics, or sound waves. And the advantage of sound waves is that they actually pass well through water, unlike light. And so we can send out sound waves, they bounce off objects like fish and are reflected back. And so in this example, a census scientist took out two ships. One would send out sound waves that would bounce back. They would be received by a second ship, and that would give us very precise estimates, in this case, of 250 billion herring in a period of about a minute. And that's an area about the size of Manhattan Island. And to be able to do that is a tremendous fisheries tool, because knowing how many fish are there is really critical. 06:32 We can also use satellite tags to track animals as they move through the oceans. 
And so for animals that come to the surface to breathe, such as this elephant seal, it's an opportunity to send data back to shore and tell us where exactly it is in the ocean. And so from that we can produce these tracks. For example, the dark blue shows you where the elephant seal moved in the north Pacific. Now I realize for those of you who are colorblind, this slide is not very helpful, but stick with me nonetheless. 06:56 For animals that don't surface, we have something called pop-up tags, which collect data about light and what time the sun rises and sets. And then at some period of time it pops up to the surface and, again, relays that data back to shore. Because GPS doesn't work under water. That's why we need these tools. And so from this we're able to identify these blue highways, these hot spots in the ocean, that should be real priority areas for ocean conservation. 07:20 Now one of the other things that you may think about is that, when you go to the supermarket and you buy things, they're scanned. And so there's a barcode on that product that tells the computer exactly what the product is. Geneticists have developed a similar tool called genetic barcoding. And what barcoding does is use a specific gene called CO1 that's consistent within a species, but varies among species. And so what that means is we can unambiguously identify which species are which even if they look similar to each other, but may be biologically quite different. 07:48 Now one of the nicest examples I like to cite on this is the story of two young women, high school students in New York City, who worked with the census. They went out and collected fish from markets and from restaurants in New York City and they barcoded it. Well what they found was mislabeled fish. So for example, they found something which was sold as tuna, which is very valuable, was in fact tilapia, which is a much less valuable fish. They also found an endangered species sold as a common one. 
So barcoding allows us to know what we're working with and also what we're eating. 08:18 The Ocean Biogeographic Information System is the database for all the census data. It's open access; you can all go in and download data as you wish. And it contains all the data from the census plus other data sets that people were willing to contribute. And so what you can do with that is to plot the distribution of species and where they occur in the oceans. What I've plotted up here is the data that we have on hand. This is where our sampling effort has concentrated. Now what you can see is we've sampled the area in the North Atlantic, in the North Sea in particular, and also the east coast of North America fairly well. That's the warm colors which show a well-sampled region. The cold colors, the blue and the black, show areas where we have almost no data. So even after a 10-year census, there are large areas that still remain unexplored. 09:00 Now there are a group of scientists living in Texas, working in the Gulf of Mexico who decided really as a labor of love to pull together all the knowledge they could about biodiversity in the Gulf of Mexico. And so they put this together, a list of all the species, where they're known to occur, and it really seemed like a very esoteric, scientific type of exercise. But then, of course, there was the Deepwater Horizon oil spill. So all of a sudden, this labor of love for no obvious economic reason has become a critical piece of information in terms of how that system is going to recover, how long it will take and how the lawsuits and the multi-billion-dollar discussions that are going to happen in the coming years are likely to be resolved. 09:38 So what did we find? Well, I could stand here for hours, but, of course, I'm not allowed to do that. But I will tell you some of my favorite discoveries from the census. So one of the things we discovered is where are the hot spots of diversity? Where do we find the most species of ocean life?
And what we find if we plot up the well-known species is this sort of a distribution. And what we see is that for coastal taxa, for those organisms that live near the shoreline, they're most diverse in the tropics. This is something we've actually known for a while, so it's not a real breakthrough. 10:06 What is really exciting though is that the oceanic taxa, or the ones that live far from the coast, are actually more diverse at intermediate latitudes. This is the sort of data, again, that managers could use if they want to prioritize areas of the ocean that we need to conserve. You can do this on a global scale, but you can also do it on a regional scale. And that's why biodiversity data can be so valuable. 10:24 Now while a lot of the species we discovered in the census are things that are small and hard to see, that certainly wasn't always the case. For example, while it's hard to believe that a three kilogram lobster could elude scientists, it did until a few years ago when South African fishermen requested an export permit and scientists realized that this was something new to science. Similarly this Golden V kelp collected in Alaska just below the low water mark is probably a new species. Even though it's three meters long, it actually, again, eluded science. Now this guy, this bigfin squid, is seven meters in length. But to be fair, it lives in the deep waters of the Mid-Atlantic Ridge, so it was a lot harder to find. But there's still potential for discovery of big and exciting things. This particular shrimp, we've dubbed it the Jurassic shrimp, it's thought to have gone extinct 50 years ago -- at least it was, until the census discovered it was living and doing just fine off the coast of Australia. And it shows that the ocean, because of its vastness, can hide secrets for a very long time. So, Steven Spielberg, eat your heart out. 11:22 If we look at distributions, in fact distributions change dramatically.
And so one of the records that we had was this sooty shearwater, which undergoes these spectacular migrations all the way from New Zealand all the way up to Alaska and back again in search of endless summer as they complete their life cycles. We also talked about the White Shark Cafe. This is a location in the Pacific where white sharks converge. We don't know why they converge there, we simply don't know. That's a question for the future. 11:48 One of the things that we're taught in high school is that all animals require oxygen in order to survive. Now this little critter, it's only about half a millimeter in size, not terribly charismatic. But it was only discovered in the early 1980s. But the really interesting thing about it is that, a few years ago, census scientists discovered that this guy can thrive in oxygen-poor sediments in the deep Mediterranean Sea. So now they know that, in fact, animals can live without oxygen, at least some of them, and that they can adapt to even the harshest of conditions. 12:16 If you were to suck all the water out of the ocean, this is what you'd be left behind with, and that's the biomass of life on the sea floor. Now what we see is huge biomass towards the poles and not much biomass in between. We found life in the extremes. And so there were new species that were found that live inside ice and help to support an ice-based food web. 12:37 And we also found this spectacular yeti crab that lives near boiling hot hydrothermal vents at Easter Island. And this particular species really captured the public's attention. We also found the deepest vents known yet -- 5,000 meters -- the hottest vents at 407 degrees Celsius -- vents in the South Pacific and also in the Arctic where none had been found before. So even new environments are still within the domain of the discoverable. 13:00 Now in terms of the unknowns, there are many. And I'm just going to summarize just a few of them very quickly for you.
First of all, we might ask, how many fishes in the sea? We actually know the fishes better than we do any other group in the ocean other than marine mammals. And so we can actually extrapolate based on rates of discovery how many more species we're likely to discover. And from that, we actually calculate that we know about 16,500 marine species and there are probably another 1,000 to 4,000 left to go. So we've done pretty well. We've got about 75 percent of the fish, maybe as much as 90 percent. But the fishes, as I say, are the best known. 13:35 So our level of knowledge is much less for other groups of organisms. Now this figure is actually based on a brand new paper that's going to come out in the journal PLoS Biology. And what it does is predict how many more species there are on land and in the ocean. And what they found is that they think that we know of about nine percent of the species in the ocean. That means 91 percent, even after the census, still remain to be discovered. And so that turns out to be about two million species once all is said and done. So we still have quite a lot of work to do in terms of unknowns. 14:04 Now this bacterium is part of mats that are found off the coast of Chile. And these mats actually cover an area the size of Greece. And so this particular bacterium is actually visible to the naked eye. But you can imagine the biomass that represents. But the really intriguing thing about the microbes is just how diverse they are. A single drop of seawater could contain 160 different types of microbes. And the oceans themselves are thought potentially to contain as many as a billion different types. So that's really exciting. What are they all doing out there? We actually don't know. 14:35 The most exciting thing, I would say, about this census is the role of global science.
And so as we see in this image of light during the night, there are lots of areas of the Earth where human development is much greater and other areas where it's much less, but between them we see large dark areas of relatively unexplored ocean. The other point I'd like to make about this is that the oceans are interconnected. Marine organisms do not care about international boundaries; they move where they will. And so global collaboration becomes all the more important. 15:05 We've lost a lot of paradise. For example, these tuna that were once so abundant in the North Sea are now effectively gone. There were trawls taken in the deep sea in the Mediterranean, which collected more garbage than they did animals. And that's the deep sea, that's the environment that we consider to be among the most pristine left on Earth. And there are a lot of other pressures. Ocean acidification is a really big issue that people are concerned with, as well as ocean warming, and the effects they're going to have on coral reefs. On the scale of decades, in our lifetimes, we're going to see a lot of damage to coral reefs. 15:35 And I could spend the rest of my time, which is getting very limited, going through this litany of concerns about the ocean, but I want to end on a more positive note. And so the grand challenge then is to try and make sure that we preserve what's left, because there is still spectacular beauty. And the oceans are so productive, there's so much going on in there that's of relevance to humans that we really need to, even from a selfish perspective, try to do better than we have in the past. So we need to recognize those hot spots and do our best to protect them. 16:02 When we look at pictures like this, they take our breath away, in addition to helping to give us breath by the oxygen that the oceans provide.
Census scientists worked in the rain, they worked in the cold, they worked under water and they worked above water trying to illuminate the wondrous discovery, the still vast unknown, the spectacular adaptations that we see in ocean life. So whether you're a yak herder living in the mountains of Chile, whether you're a stockbroker in New York City or whether you're a TEDster living in Edinburgh, the oceans matter. And as the oceans go so shall we. 16:32 Thanks for listening. 16:34 (Applause)
Uploaded on Oct 7, 2011 LFTR does not produce transuranic waste. It burns up essentially all of the fuel because we don't remove fuel from the reactor until it's a fission product. And because LFTR (Liquid Fluoride Thorium Reactor) uses LIQUID fuel, it is far easier to partition the "waste" to extract valuable by-products, such as medical isotopes for cancer treatment.
So the Monty Hall Problem itself is widely known and understood. Nonetheless, a friend of mine and I were wondering whether the same strategy could effectively be applied by a participant of Who Wants to Be a Millionaire? when using the 50/50 Joker. Let's imagine the following scenario: The participant P has no clue about the correct answer $ x \in \{A,B,C,D\} $ and wants to use the 50/50 Joker (eliminating two wrong answers). But instead of immediately going for it he first "preselects" one of the answers in his mind. There is no need to tell Quizmaster Q about his "imaginary preselection". Now P tells Q that he wants to use his joker and Q lets the computer eliminate two wrong answers. (1) In case the answer P had preselected is eliminated he has no choice but to choose between the remaining two answers, effectively leaving him with a 50% chance of success - no magic happening here. (2) But what about the other case, when the answer P had preselected survives the elimination? According to the Monty Hall Problem, it seems as if changing the selection (i.e. choosing the other remaining option P had not preselected) gives him a 0.75 chance of success. Nevertheless, I find it hard to believe that this actually holds true, since the so-called 50/50 (!) Joker would then not be p(success) = 0.5 after all. Additionally, it seems unlikely that making an "imaginary preselection" no one else is told about actually increases your odds. I know this problem is not exactly the same as Monty Hall, since the quizmaster does not always eliminate answers only from the ones the participant had not "preselected", meaning that the preselection itself could be eliminated too, as happens in (1). Still, the second case seems to be just a variation of it. So are we right that making a preselection and then going for the other remaining option is a valid strategy that increases the participant's odds of winning?
If not, please help us understand our misconception. Funny you should ask, I was wondering just the same myself last time I watched the show. I think the essential difference is that (1) in Monty Hall the contestant never has knowledge whether his initial guess is right or wrong - it's just a random pick not informed by subject knowledge (2) the joker process in WWTBAM is (in theory) unaware of his initial guess. – Tom Collinge Feb 20 '14 at 13:58 Thank you, Tom Collinge! – feaDawn Feb 20 '14 at 14:02 2 Answers Accepted answer (score 7): Monty Hall does not give you information about your preselection. Therefore the probability that your first choice is right, given that it is still available after the intervention, is not changed. The 50/50 Joker does give information about it (especially when it gets eliminated). Note that many of the misunderstandings of the Monty Hall problem arise from the missing assumption that the host always willfully opens a goat-door other than the preselected door. If you modify the Monty Hall problem so that the host opens any not-preselected door at random, the general misconception that both remaining doors are "equal" becomes correct. Thank you very much! You really helped me! :-) – feaDawn Feb 20 '14 at 14:01 If the contestant has no clue about the correct answer, then the "preselection" can change nothing. On the other hand, if the contestant has information about the probabilities of the four answers, then depending upon which answers are shown the player may gain considerable information. For example, suppose the player estimates the probabilities at 10%, 20%, 30%, and 40%. If after two answers are eliminated, those that remain are the ones the contestant thought were most likely (the worst case), the more likely of the remaining answers will be correct 4/7 of the time.
If the remaining answers are those the contestant thought were the most likely and the least likely (the best case), the more likely of the remaining answers will be correct 4/5 of the time. Other combinations of eliminated answers will yield intermediate probabilities. BTW, from watching the show, I suspect the best strategy, if one can do it convincingly, would be to pretend that one was dithering between two answers which one believed to be most likely. It appears that the "50/50" doesn't select randomly, but tries to include the wrong answer the contestant favors most highly. Thus, a contestant who convincingly claimed to believe that a low-probability answer was correct might be able to coax the 50/50 into offering up one of the most favorable scenarios.
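Both answers can be sanity-checked numerically. The sketch below is hypothetical code, not from the thread; it assumes the joker eliminates two wrong answers uniformly at random, independent of the preselection. It simulates the stick-versus-switch question and then renormalizes the second answer's example priors (10%, 20%, 30%, 40%):

```python
import random
from fractions import Fraction

def trial(rng):
    """One game: returns (stick_wins, switch_wins), or None if the
    imaginary preselection was eliminated by the 50/50 joker."""
    answers = "ABCD"
    correct = rng.choice(answers)
    pick = rng.choice(answers)                       # silent preselection
    wrong = [a for a in answers if a != correct]
    kept = set(answers) - set(rng.sample(wrong, 2))  # joker keeps correct + 1 wrong
    if pick not in kept:
        return None                                  # case (1) in the question
    other = (kept - {pick}).pop()
    return pick == correct, other == correct

rng = random.Random(0)
runs = [r for r in (trial(rng) for _ in range(200_000)) if r is not None]
p_stick = sum(s for s, _ in runs) / len(runs)
p_switch = sum(w for _, w in runs) / len(runs)
# Both come out near 0.5: a uniform elimination carries no information
# about the unannounced preselection, so switching gains nothing.

def posterior(surviving_weights):
    """Renormalize prior probabilities over the two surviving answers."""
    total = sum(surviving_weights)
    return [Fraction(w, total) for w in surviving_weights]

worst = posterior([3, 4])  # the 30% and 40% answers survive
best = posterior([1, 4])   # the 10% and 40% answers survive
```

Running this, `p_stick` and `p_switch` both land near 0.5, matching the accepted answer, while `max(worst)` and `max(best)` come out to 4/7 and 4/5, the figures quoted in the second answer.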
Da Quiz | Four Week Quiz A
Name: _________________________ Period: ___________________
Multiple Choice Questions
1. What does Mary do after she says that she doesn't want to know where her father is? (a) She stomps off. (b) She cries. (c) She goes silent. (d) She changes the subject.
2. What does Mary tell Young Charlie after Da leaves? (a) He's a great guy. (b) He's senile. (c) He's nosy. (d) He's an idiot.
3. Who is Charlie's younger self in the play? (a) Young Charlie. (b) Charlie Jr. (c) Chip. (d) Chuckie.
4. What does Charlie say that Da had been for all of Charlie's life? (a) A blessing. (b) A nuisance. (c) A millstone. (d) A real inspiration.
5. Where does Drumm suggest that Da and Mother send Young Charlie? (a) America. (b) England. (c) Boarding school. (d) College.
Short Answer Questions
1. Which of the following does Mary not do when Charlie rebuffs her?
2. What does Mary become in Young Charlie's eyes?
3. Which of the following does Drumm not say about Da?
4. Young Charlie tells Charlie that Mother and Da always _____________ around Drumm.
5. Da reappears and talks about the funeral and _________________.
Hedging - The Basis
The Basis
The basis tends to be more stable than either the cash or futures price. It also tends to differ from one exchange to another. Among conditions factoring into the basis are:
• Current supply and demand locally,
• Market expectations of changes in supply and demand,
• Availability of substitutes,
• Interest,
• Handling costs (that is, middlemen's profit margins),
• Storage availability and its associated costs, and
• Transportation bottlenecks or disruptions.
The basis varies seasonally, but these swings follow fairly predictable year-over-year patterns.
Determining the basis
The basis is determined by subtracting the futures price from the cash price, but that raises the question, "Which futures price?" The general rule is to not use a contract's price during its delivery month. In a perfect market, as we described, the basis figured that way would always be at or approaching zero. A calculation that compares prices for September cash corn with September futures corn in September won't really measure anything except distortions from logistical hiccups. The transaction would be for naught. Instead, the basis would be calculated in September by subtracting the December futures price (there is no market for October or November corn) from the September cash price. The basis tends to be negative for most agricultural commodities, or "under" the futures price, due to the cost of carry. Conversely, a positive basis is said to be "over" the futures price. If the basis is more negative or less positive than usual, it is considered "weaker" than the historical norm. If it is less negative or more positive, it is said to be "stronger." Say that September cash corn is trading at 228 cents/bushel and December futures corn is trading at 248 cents/bushel. In market parlance: "The basis is 20 under December." The basis indicates current local demand. A strong basis means the market wants the grain sooner rather than later.
A weak basis suggests that the market doesn't want grain now, although demand may grow over a matter of months. Strong basis trends toward a less negative and more positive number. Cash prices increase relative to futures prices. This works to the seller's benefit. He or she would sell at the cash price, benefitting from the aforementioned increased local demand. Weak basis trends toward a less positive and more negative number. Cash prices decrease relative to futures prices. This works to the buyer's benefit. He or she would enter into a futures contract, benefitting from anticipated future demand. The basis is a localized phenomenon. It could be under and weakening in a Midwestern farm community, but over and strengthening in Chicago or New York. All this helps the producers of the commodities (those likely to be short the basis) to decide how to sell. As the basis strengthens, the producer's incentive is to sell for cash. As the basis weakens, the producer's incentive is to enter into a futures contract.
Unraveling the mystery of male birds' missing members

How the chicken lost its penis: It sounds like a weird cousin of one of Rudyard Kipling's "Just So Stories for Little Children" from 1902, which featured "How the Leopard Got His Spots" and "How the Camel Got His Hump." But weird or not, this month we explore why almost all birds lack that flagpole of masculinity: the penis.

Back when I was a postgraduate student, a biologist in the department where I worked had a quiet word with my adviser: "Your student," he said, "is obsessed with animal genitalia. There's something wrong with him." My professor laughed it off. "That's part of his PhD," he said. "I'd be worried if he wasn't thinking about sex all the time."

I'll happily accept that there may well be something wrong with me. After all, I did spend several years looking at the sperm and the genital morphology of many different kinds of animals. But in my defense I'd say this: If that's the case then there's also something wrong with most evolutionary biologists. The evolution of sex, of male and female genders — and of the organs that accompany the act of procreation — poses fascinating questions for biologists. So they think about it a lot. That's their excuse, anyway.

I was reminded of that episode last week when I came across a scientific paper reporting on a study of how birds lost their penises. And yes indeed, in 97 percent of bird species, the males don't have a member. It's a vexing question for biologists, let alone our feathered friends. Nonetheless, these males are just as randy and strutting as those of better-endowed species — though the poor wretches have to make do with a cloaca, which is basically just an opening where you'd expect to find a penis.

Now Martin Cohn and Ana Herrera of the Howard Hughes Medical Institute in Chevy Chase, Maryland, and colleagues, have found that chickens possess normally developing penises as early embryos.
But then, as the birds develop, a genetic program swings into action and effectively chops off their little budding appendages. Think of it as a grim reaper just for males — though it's achieved through the programmed, deliberate dying-off of cells. "Our discovery," says Cohn, "shows that reduction of the penis during bird evolution occurred by activation of a normal mechanism of programmed cell-death in a new location, the tip of the emerging penis."

You may well ask: Why do we care? Well, it's important to understand how this automatic cell-death mechanism operates because, if it fails to work when needed in the human body, it can lead to excessive or unregulated cell growth — and this can spell cancer. And there's another reason: We care because we are "obsessed with animal genitalia" and quite simply want to understand why birds don't have penises.

Herrera, the grad-student lead author of the study, who probably has a fixation like the one I had, says it's not clear why chickens and other birds would have lost their penises. It might be down to aerodynamics: A flapping male phallus would not help in graceful flight. But Herrera has another interesting idea. It may be, she says, that the loss of a penis gives hens greater control over their reproductive lives. Natural selection by the females of many species may, in other words, have driven the loss of the male's pride and joy.

How can that be? To find out, let's turn to ducks. As alert readers will have noticed — and will likely be troubling over right now — I said that most male birds don't have a penis. Some do — and boy, do they. Male ducks have giant, corkscrew-shaped penises. They use them to force females into having sex — you might have seen mallards chasing females of their species and jumping on them — which is what happens when the drake has an organ he can forcibly insert into the female.
There's not much a female duck can do to avoid a drake's penetrative intentions, but over evolutionary time they have come up with something very clever to inhibit them: The female's reproductive tract is also corkscrew-shaped, but it twists the other way. The penis twists counter-clockwise; the vagina goes clockwise. So the females have evolved some measure of control over exactly where the male can put his penis, even if she can't stop him inserting it in the first place. That's because there are pockets and dead-end alleys inside her that prevent the penis from extending to its full size. (If you're a female duck you really don't want that, as some drakes' "manhood" can be as long as their body.)

So female ducks have evolved quite elaborate countermeasures in response to the questing, unwanted penis. Yet how much easier if there wasn't a penis at all! That is the case in the vast majority of bird species.

Herrera and Cohn found that chickens have a gene known as Bmp4 that switches on at a certain stage in embryonic growth and culls the bird's penis. In ducks and other enpenised avians, this gene stays switched off — as the results of their work, published in the journal Current Biology, show (DOI: dx.doi.org/10.1016/j.cub.2013.04.062).

It appears that Bmp4 operates in mammals, too. What would happen — oh, horror — if it were switched on in the development of a human male? One can imagine a kind of reverse of Margaret Atwood's dystopian 1985 Arthur C. Clarke Award-winning novel "The Handmaid's Tale" — a version in which some crazed ruling class of women would have activated Bmp4 in men, causing them to lose their penises. The plot could center on a group of rebel women genetic engineers who restore men to their natural glory. I'd enjoy such a book — just as long as it remained in the realms of fiction.
Sunflowers for Lead. Spider Plants for Arsenic.

July 8, 2012

Environmental poisoning, whether through building practices or industrial waste, is one of the universal issues of urban life. There is no historic or urban neighborhood in the United States unaffected by the ground and water poisoning left behind by more than two centuries of industrialization, or by the unintentional poisoning from lead-based paints and common waste-management practices. The choices on how best to deal with the problems have been problematic, to say the least. For twenty years, business- and citizen-based efforts and research have increasingly shown the effectiveness of a process called 'bioremediation' (or, when green plants do the work, 'phytoremediation'): the use of plants to suck the poisons and metals out of the ground for safe disposal. Today, the Environmental Protection Agency recommends this process for many cleanup and remediation strategies. Fields of sunflowers are being used to clean up the hundred-year patterns of heavy-metal poisoning surrounding old manufacturing plants, and even the fields of Chernobyl, the worst official nuclear power disaster in history.

Join us as we explore this exciting new option. After looking at the facts, bioremediation might just become an important element in the redevelopment and safety of the city's historic and urban neighborhoods.

This is a little story about toxic dirt that might be in your yard in Springfield if you are part of the controversial new Ash Contamination Site zone. But put on your thinking caps, boys and girls. There are EPA (Environmental Protection Agency) Superfund sites in Jacksonville full of nasty stuff like arsenic, lead, polychlorinated biphenyls (PCBs), and dioxins.
I don't want to get super technical, because I'm an avid gardener, not a chemist, but we're basically talking about crud that can make your DNA look like alphabet soup from Planet X, so that you die of cancer after giving birth to a litter of stupid, ugly mutant babies. Yes, I'm exaggerating, but not by much. According to soil testing, this poisonous death dirt has been found all through patches of Durkeeville, Springfield and the Eastside. But the truth is that any neighborhood built before the mid-'70s (when lead-based paints were banned in the US) is just as likely to be contaminated and toxic, usually with lead.

In the process of testing yards for contaminated ash, the ever-helpful EPA decided to go ahead and conduct unannounced tests for lead contamination. The results were shocking, but not really surprising, if you think about it. In Springfield, which appears to have several sites where homeowners looking for fill dirt unwittingly imported a poisonous loam, the testing is finding statistically few yards with clear ash contamination, but almost 80% of the locations were still contaminated by lead. Any restorationist can tell you where the lead concentrations were found: in the ground within three feet of the house, where generations of paint have been scraped and reapplied and flaked off to the ground with the fateful metal. This identical finding could be replicated at any wood house of the urban core and, depending on the age of the home, even under the painted eaves of brick and stone houses.

Industrial Contamination

The whole issue is of special concern within the urbanist movement. Older urban areas were often developed before the advent of zoning, or of any kind of regulation that kept toxic industries from operating in the heart of residential areas.
The current plan is to clean up the mess by means of soil replacement: the city would spend $94 million to dig up two feet of dirt, replace it with non-contaminated dirt and sod it over. That sounds extremely messy and disruptive, and rather unsatisfactory, since the bad dirt still has to go somewhere else.

However, another means of cleaning up soil is bioremediation, which can be defined as any process that uses microorganisms, fungi, green plants, or their enzymes to return a natural environment altered by contaminants to its original condition. Speaking technically, the use of living green plants is called 'phytoremediation', although it is a form of bioremediation. Other processes that people might be hearing about in the wake of the Gulf Oil Disaster use bacteria species to break down oil and other chemical poisons. This article is about using plants as a way to clean up toxins from our ground soil, including a bunch of heavy metals like lead and mercury. The main advantage of phytoremediation is its low cost: up to 1,000 times cheaper than excavation and reburial. You can learn about phytoremediation (NOT a fancy new high-tech thing) on the EPA website, so we have no idea why they want to dig up the entire Northern Urban Core instead of trying a less expensive and less invasive solution first. Maybe somebody's cousin Bubba is poised to make $94 million wrecking the neighborhood.

Miss Janice Price, aka DeadGirlsDontDance

What Is Phytoremediation and How Does It Work?

Phytoremediation of metal-contaminated sites
• Phytoextraction (phytoaccumulation): uptake, translocation, and accumulation in the shoot; the shoot is then harvested and the metal recovered.
• Direct transformation by exudates: organic contaminants in the soil are absorbed by the plant roots and broken down into their component parts by "exudates" in the plant root system.
Phytoremediation of organic-polluted sites
• Phytodegradation (phytotransformation): uptake, translocation and metabolism.
• Microbially mediated: plant-assisted microbial biodegradation.
• Uptake, translocation and volatilization.

Hydraulic Control of Pollutants

Hydraulic control is the term given to the use of plants to control the migration of subsurface water through the rapid uptake of large volumes of water by the plants. The plants effectively act as natural hydraulic pumps which, when a dense root network has been established near the water table, can transpire up to 300 gallons of water per day. This fact has been utilised to decrease the migration of contaminants from surface water into the groundwater (below the water table) and drinking water supplies. There are two such uses for plants: riparian corridors, where riparian strips are planted along the banks of rivers and streams, and vegetative cover.

Where Has Phytoremediation Been Used?

Location | Application | Pollutant | Medium | Plant(s)
Anderson, ST | Phytostabilisation | Heavy metals | Soil | Hybrid poplar, grasses
Ashtabula, OH | Rhizofiltration | Radionuclides | Groundwater | Sunflowers
Upton, NY | Phytoextraction | Radionuclides | Soil | Indian mustard, cabbage
Milan, TN | Phytodegradation | Explosives waste | Groundwater | Duckweed, parrotfeather
Amana, IA | Riparian corridor, phytodegradation | Nitrates | Groundwater | Hybrid poplar

Pros & Cons of Phytoremediation

As with most new technologies, phytoremediation has many pros and cons. When compared to more traditional methods of environmental remediation, it becomes clearer what the detailed advantages and disadvantages actually are.
Advantages of phytoremediation compared to classical remediation:
• Disposal sites are not needed.

Disadvantages of phytoremediation compared to classical remediation:
• Large-scale operations require access to agricultural equipment and knowledge.
• Success is dependent on the tolerance of the plant to the pollutant.
• Contaminants may collect in woody tissues that are later used as fuel.

Text by Janice Price and Stephen Dare

* EPA Contaminated Site Clean-up Information -
* EPA Superfund Jacksonville Ash Site Information -
* A Citizen's Guide to Phytoremediation -
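As a rough check on two figures quoted in this article (the $94 million excavation plan versus the "up to 1,000 times cheaper" claim, and the 300 gallons of water per day a tree with a dense root network can transpire), here is a back-of-envelope sketch in Python. The 30,000-gallons-per-day target flow is hypothetical, purely for illustration:

```python
# Back-of-envelope arithmetic for two figures quoted in the article.
# Both inputs come from the text; the derived numbers are illustrative only.

# 1. Cost: the article cites a $94 million soil-replacement plan and a
#    claim that phytoremediation can be up to 1,000 times cheaper.
excavation_cost = 94_000_000  # dollars
best_case_ratio = 1000
phyto_floor = excavation_cost / best_case_ratio
print(f"Best-case phytoremediation cost: ${phyto_floor:,.0f}")  # $94,000

# 2. Hydraulic control: a tree with a dense root network near the water
#    table can transpire up to 300 gallons of water per day. To intercept
#    a hypothetical 30,000 gallons/day of subsurface flow:
gallons_per_tree = 300
target_flow = 30_000
trees_needed = target_flow / gallons_per_tree
print(f"Trees needed: {trees_needed:.0f}")  # 100
```

Even if the real savings fall well short of the best case, the gap between tens of thousands and tens of millions of dollars explains the author's frustration with the excavation plan.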
The Weavers

The Weavers were an American folk music quartet based in the Greenwich Village area of New York City. They sang traditional folk songs from around the world, as well as blues, gospel music, children's songs, labor songs, and American ballads, and sold millions of records at the height of their popularity. Their hard-driving string-band style inspired the commercial "folk boom" that followed them in the 1950s and 1960s, including such performers as The Kingston Trio; Peter, Paul, and Mary; The Rooftop Singers; and Bob Dylan.

The Weavers were formed in November 1948 by Ronnie Gilbert, Lee Hays, Fred Hellerman, and Pete Seeger. In 1940 and 1941, Hays and Seeger had co-founded a previous group, the Almanac Singers, which had promoted peace and isolationism during the Second World War, working with the American Peace Committee. It featured many songs opposing entry into the war by the U.S. In June 1941, the same month Germany invaded the Soviet Union, the APC changed its name to the American People's Committee and altered its focus to supporting U.S. entry into the war. The Almanacs supported the change and produced many pro-war songs urging the U.S. to fight on the side of the Allies. The group disbanded after the U.S. entered the war.

The new group took its name from a play by Gerhart Hauptmann, Die Weber (The Weavers, 1892), a powerful work depicting the uprising of the Silesian weavers in 1844 and containing the line, "I'll stand it no more, come what may".

After a period of being unable to find much paid work, they landed a steady and successful engagement at the Village Vanguard jazz club. This led to their discovery by arranger-bandleader Gordon Jenkins and their signing with Decca Records. The group had a big hit in 1950 with Lead Belly's "Goodnight, Irene", backed with the 1941 song "Tzena, Tzena, Tzena", which in turn became a best seller. The recording stayed at number one on the charts for a lengthy 13 weeks.
In keeping with the audience expectations of the time, these and other early Weavers releases had violins and orchestration added behind the group's own string-band instruments.

Because of the deepening Red Scare of the early 1950s, their manager, Pete Cameron, advised them not to sing their most explicitly political songs and to avoid performing at "progressive" venues and events. Because of this, some folk song fans criticized them for watering down their beliefs and commercializing their singing style. But the Weavers felt it was worth it to get their songs before the public, and to avoid the explicit type of commitment that had led to the demise of the Almanacs. The new approach proved a success, leading to many bookings and increased demand for the group's recordings.

The Weavers' successful concerts and hit recordings helped introduce to new audiences such folk revival standards as "On Top of Old Smoky" (with guest vocalist Terry Gilkyson), "Follow the Drinking Gourd", "Kisses Sweeter than Wine", "The Wreck of the John B" (aka "Sloop John B"), "Rock Island Line", "The Midnight Special", "Pay Me My Money Down", and "Darling Corey". The Weavers encouraged sing-alongs in their concerts, and Seeger would sometimes shout out the lyrics in advance of each line in lining-out style.

In a 1968 interview, in response to claims that record companies found the Weavers difficult to classify, Seeger told the Pop Chronicles music documentary to "leave that up to the anthropologists, the folklorists. ... For you and me, the important thing is a song, a good song, a true song. ... Call it anything you want."

Film footage of the Weavers is relatively scarce.
The group appeared as a specialty act in a B-movie musical, Disc Jockey (1951), and filmed five of their record hits that same year for TV producer Lou Snader: "Goodnight, Irene", "Tzena, Tzena, Tzena", "So Long", "Around the World", and "The Roving Kind".

During the Red Scare, however, Pete Seeger and Lee Hays were identified as Communist Party members by FBI informant Harvey Matusow (who later recanted) and were called to testify before the House Committee on Un-American Activities in 1955. Hays took the Fifth Amendment. Seeger, however, refused to answer, claiming First Amendment grounds, the first witness to do so after the conviction of the Hollywood Ten in 1950. Seeger was found guilty of contempt and placed under restrictions by the court pending appeal, but in 1961 his conviction was overturned on technical grounds.

Because Seeger was among those listed in the entertainment industry blacklist publication Red Channels, all of the Weavers were placed under FBI surveillance and not allowed to perform on television or radio during the McCarthy era. Decca Records terminated their recording contract and deleted their songs from its catalog in 1953, and their records were denied airplay, which curtailed their income from royalties. Right-wing and anti-Communist groups protested at their performances and harassed promoters. As a result, the group's economic viability diminished rapidly and in 1952 it disbanded. After this, Pete Seeger continued his solo career, although like all of them he continued to suffer from the effects of blacklisting.

In December 1955, the group reunited to play a sold-out concert at Carnegie Hall. The concert was a huge success. A recording of the concert was issued by the independent Vanguard Records, and this led to their signing by that label. By the late 1950s, folk music was surging in popularity and McCarthyism was fading.
Yet the media industry of the time was so timid and conventional that it wasn't until the height of the revolutionary '60s that Seeger was able to end his blacklisting by appearing on a nationally distributed U.S. television show, The Smothers Brothers Comedy Hour, in 1968.

When, in the late fifties, the Weavers agreed to provide the vocals for a TV cigarette commercial, Pete Seeger, opposed to the dangers of tobacco and discouraged by the group's apparent sell-out to commercial interests, decided to resign. He spent his last year with the Weavers honoring his commitments, but described himself as feeling like a prisoner. He left the group on April 1, 1958.

Seeger recommended Erik Darling of The Tarriers as his replacement. Darling remained with the group until June 1962, leaving to pursue a solo career and eventually to form the folk-jazz trio The Rooftop Singers. Frank Hamilton, who replaced Darling, stayed with the group nine months, giving his notice just before the Weavers celebrated the group's 15th anniversary with two nights of concerts at Carnegie Hall in March 1963. Folksinger Bernie Krause, later a pioneer in bringing the Moog synthesizer to popular music, was the last performer to occupy "the Seeger chair." The group disbanded in 1964, but Gilbert, Hellerman, and Hays occasionally reunited with Seeger during the next 16 years.

In 1980, Lee Hays, ill and using a wheelchair, wistfully approached the original Weavers for one last get-together. Hays' informal picnic prompted a professional reunion, and a triumphant return to Carnegie Hall on November 28, 1980, which was to be the band's last ever performance. A documentary film, The Weavers: Wasn't That a Time! (1982), released after Hays's death, chronicled the history of the group and the events leading up to the reunion. Following the dissolution of the band, Ronnie Gilbert toured America as a soloist and Fred Hellerman worked as a recording engineer and producer.
The group was inducted into the Vocal Group Hall of Fame in 2001.

In February 2006, the Weavers received the Lifetime Achievement Award given out annually at the Grammy Awards show. Represented by members Ronnie Gilbert and Fred Hellerman, they struck a chord with the crowd as their struggles with the political witch hunts of the 1950s were recounted. "If you can exist, and stay the course -- not a course of blind obstinacy and faulty conception -- but one of decency and good sense, you can outlast your enemies with your honor and integrity intact," Hellerman said. Some commentators see the reference to "blind obstinacy" as a veiled criticism of those who believed blindly in all the actions of the Communist Party.

Lee Hays died in 1981, aged 67; his biography, Lonesome Traveler by Doris Willens, was published in 1988. Erik Darling died August 3, 2008, aged 74, in Chapel Hill, North Carolina, from lymphoma. After a very long career in music and activism, Pete Seeger died at the age of 94 on January 27, 2014, in New York City.
Boeing 377 Stratocruiser

Role: Long-range piston airliner
National origin: United States
Manufacturer: Boeing Commercial Airplanes
First flight: July 8, 1947
Introduction: April 1, 1949, with Pan American World Airways
Retired: 1963
Primary user: Pan American World Airways
Number built: 76[1]
Unit cost: $1,225,000 (1945)
Developed from: Boeing C-97 Stratofreighter
Variants: Pregnant Guppy, Super Guppy, Mini Guppy

The Boeing 377 Stratocruiser was a large long-range airliner developed from the C-97 Stratofreighter military transport, a derivative of the B-29 Superfortress. The Stratocruiser's first flight was on July 8, 1947.[2] Its design was advanced for its day; its innovative features included two passenger decks and a pressurized cabin, a relatively new feature on transport aircraft. It could carry up to 100 passengers on the main deck plus 14 in the lower-deck lounge; typical seating was for 63 or 84 passengers, or 28 berthed and five seated passengers.

The Stratocruiser was larger than the Douglas DC-6 and Lockheed Constellation and cost more to buy and operate. Its reliability was poor, chiefly due to problems with the four 28-cylinder Pratt & Whitney Wasp Major radial engines and their four-blade propellers. Only 55 Model 377s were built for airlines, along with the single prototype.

Design and development

The Boeing 377 Stratocruiser was a civil derivative of the Boeing Model 367, the Boeing C-97 Stratofreighter, which first flew in late 1944.
William Allen, who had become president of The Boeing Company in September 1945, sought to introduce a new civilian aircraft to replace reduced military production after the Second World War.[3] Although the industry was in a recession in late 1945, Allen ordered 50 Stratocruisers, spending capital on the project without an airline customer.[4] On November 29, 1945, Pan American World Airways became the launch customer with the largest commercial aircraft order in history: a $24.5 million order for 20 Stratocruisers, about $324.3 million in 2014 dollars.[4][5] Earlier in 1945 the Boeing C-97 had flown from Seattle to Washington, D.C. nonstop in six hours and four minutes; with this knowledge, and with Pan Am president Juan Trippe's high regard for Boeing after the success of the Boeing 314 Clipper, Pan Am was confident in ordering the expensive plane.[4]

The 377 shared the distinctive design of the C-97, with a "double-bubble" fuselage cross-section resembling a figure-8, with 6,600 ft³ (187 m³) of interior space, allowing for pressurization of a large cabin with two passenger decks. The outside diameter of the upper lobe was 132 inches, compared to 125 inches for the DC-6 and other Douglas types (and 148 inches for today's 737). The lower deck served as a lounge, seating 14. The 377 had innovations such as higher cabin pressure and air conditioning; the superchargers on the four Pratt & Whitney R-4360 engines increased power at altitude and allowed consistent cabin pressure.[6] The wing was the Boeing 117 airfoil, regarded as the "fastest wing of its time". In all, 4,000,000 man-hours went into the engineering of the 377.[7]

First flight of the 377 was on July 8, 1947, two years after the first commercial order.
The flight test fleet of three 377s underwent 250,000 mi (217,000 nmi; 402,000 km) of flying to test its limits before certification.[8]

Operational history

As the launch customer, Pan Am was the first to begin scheduled service, from San Francisco to Honolulu in April 1949. At the end of 1949 Pan Am, BOAC, and AOA were flying 377s transatlantic, while Northwest was flying within the United States; in January 1950 United began flights from San Francisco to Honolulu. For a short time Pan Am flew their 377s to Beirut, but after 1954 no 377 was scheduled east of Europe or west of Singapore. In 1955 BOAC B377s had 50 First Class seats (fare $400 one way New York to London) or 81 Tourist seats (fare $290).[9]

Despite a service record[10] marred by one or two early disasters arising from the Curtiss Electric propellers fitted to early production aircraft,[citation needed] the 377 was one of the most advanced and capable of the propeller-driven transports, and among the most luxurious.[7] A total of 56 were built: one prototype (later reconditioned) and 55 production aircraft. Within six years of first delivery, the Stratocruiser had carried 3,199,219 passengers; it had completed 3,597 transcontinental flights and 27,678 transatlantic crossings, and went between the United States and South America 822 times. In these first six years, the Stratocruiser fleet had flown 169,859,579 miles (273,362,494 km).[6] It was also one of only a few double-deck airliners, others being its French contemporary the Breguet Deux-Ponts, as well as Boeing's own 747 and the Airbus A380.

The last 377 was delivered to BOAC in May 1950. On this delivery flight, Boeing engineer Wellwood Beall accompanied the final 377 to England, and returned with news of the De Havilland Comet, the first jet airliner, and its appeal.[6] The last flight of the 377 with United was in 1954, the last with BOAC was in 1959, and the last with Northwest was in September 1960.
By November 1960 only a weekly Pan Am Honolulu to Singapore flight remained, and the 377 was retired by Pan Am in 1961. In 1953, "United's Ray Ireland ... described the Stratocruiser as unbeatable in luxury attraction but is uneconomical. Ireland said PAA's Stratocruiser competition to Hawaii induced United to buy the plane originally."[11] In 1950 United's seven B377s averaged $2.46 "direct operating cost" per plane-mile, and "Indirect costs are generally considered to be equal or greater than the direct costs."[12] So a 57-passenger B377 was unlikely to make money, in 1950 anyway. At the end of 1954 the six United B377s were sold to BOAC, which was short of aircraft after the grounding of the Comet 1.

Boeing set the never-exceed speed at 351 miles per hour (565 km/h) IAS or Mach 0.62, but on test the B377 reached 409 miles per hour (658 km/h) IAS in a 15–20-degree dive at 13,500 feet (4,100 m): Mach 0.67 and about 500 miles per hour (800 km/h) TAS.[13] Typical airline cruise was less than 300 miles per hour (480 km/h); in August 1953 Pan Am and United B377s (and United DC-6s) were scheduled between Honolulu and San Francisco (2,398 miles (3,859 km)) in 9 h 45 min each way. The longest (by distance) B377 nonstops were Pan Am's Tokyo to Honolulu flights in four winters starting in 1952–1953. In January 1953 two nonstops a week were scheduled at 11 hr 1 min on the strong tailwinds; the following August all flights took 19 hours via Wake.

By 1960 Stratocruisers were being superseded by jets such as the de Havilland Comet, Boeing 707, and Douglas DC-8. A few were sold to smaller airlines, used as freighters, or converted by Aero Spacelines into outlandishly enlarged freighters called Guppies.[1] As the airlines began to upgrade, so did the military services. The Boeing 377 was mainly used by two militaries, the United States and Israel.

Variants

Prototype Stratocruiser; one built. Later brought up to 377-10-26 standard and sold to Pan American World Airways in 1950.
20 delivered to Pan American World Airways with round windows and a rear galley. 10 were refitted with more powerful engines and a larger fuel capacity for Pan American's transatlantic flights, and called the "Super Stratocruiser".

Four were ordered by the Scandinavian Airlines System but taken up by BOAC after SAS cancelled the order; these aircraft had similar features to the 377-10-26.

Eight were delivered to American Overseas Airlines, with round windows for the main cabin and rectangular windows for the lower cabin, as well as an aft galley. AOA was merged into Pan Am the year after their delivery.

Ten for Northwest Orient Airlines, with all-rectangular windows and an aft galley.

Six for the British Overseas Airways Corporation (BOAC), with a midships galley and all-circular cabin windows.

Seven for United Air Lines, with rectangular windows on the main cabin and circular windows on the lower cabin. Sold to BOAC circa 1954.

Freighter conversion.

377M Anak

In the early 1960s the Israeli Air Force wanted to upgrade to the C-130 Hercules, which could lift larger payloads, but it was expensive and sales were embargoed by the United States. Israel Aircraft Industries at Ben Gurion International Airport offered to modify Boeing 377 Stratocruisers instead. The modified aircraft had a stronger cabin floor that could handle cargo, plus a C-97 military Stratocruiser tail section, which included a clamshell cargo door. These were dubbed Anak ("Giant" in Hebrew) and entered service in 1964. Three were modified with a swing tail section, similar to the Canadair CL44D-4 airliner. Two others served as aerial tankers with underwing hose-reel refuelling pods. Two more were ELINT platforms for electronic reconnaissance, surveillance and ECM (Electronic Countermeasures) missions. These were later joined by four KC-97Gs with the flying-boom system.

Aero Spacelines Guppy
In addition to the Israeli Anaks, a company called Aero Spacelines converted old 377s into aircraft called Guppies in the 1960s. There were three types: the Pregnant Guppy, the Super Guppy, and the Mini Guppy, in that order of appearance.[2] Each had an extension to the top of the fuselage to enable it to carry large aircraft parts between manufacturing sites; the Super Guppy and the Mini Guppy had turboprop engines.

Aero Spacelines 377PG Pregnant Guppy: Conversion of one 377-10-26, incorporating an enlarged upper deck and a fuselage lengthened by 16 feet to carry sections of the Saturn V rocket. One converted.

Aero Spacelines 377SG Super Guppy: A single heavy-lift transport similar to the Pregnant Guppy built by Aero Spacelines. The aircraft combined parts of a YC-97J Stratofreighter and a 377-10-26 mated with a larger main fuselage, a larger tail and Pratt & Whitney T34 turboprops.

Aero Spacelines SGT-201 Super Guppy Turbine: Originally designated the 377SGT, it was similar to the 377SG, but with a more aerodynamic fuselage, a Boeing 707 nosewheel, a wingspan stretched by 23 feet, and four Allison 501-D22C turboprops. Four were built and were used by Airbus to carry aircraft parts between its factories. In the 1990s Airbus retired them due to rising operational costs and replaced them with Airbus Belugas. Three of the former Airbus Industrie Super Guppies remain in the U.K., Germany, and France, while the fourth aircraft was acquired by NASA as part of a barter agreement with ESA for its role as a partner in the International Space Station.

Aero Spacelines 377MG Mini Guppy: Conversion of a 377-10-26, it featured a larger main cabin for oversize cargo, a stretched wing and a hinged tail.

Aero Spacelines MGT-101 Mini Guppy Turbine: Originally designated the 377MGT. Similar to the 377MG but powered by Allison 501-D22C turboprop engines. One built.
A Northwest Airlines Stratocruiser sits on the tarmac

American Overseas Airlines Stratocruiser N90947 "Flagship Denmark" in 1949 or 1950

President of Pan American World Airways, Juan Trippe, stands in front of a Stratocruiser.

Operators: United Kingdom, United States

This aircraft type suffered 13 hull-loss accidents between 1951 and 1970, with a total of 139 fatalities. The worst single accident occurred on April 29, 1952.

September 12, 1951: United Air Lines Flight 7030, a Stratocruiser 10-34 (N31230, named Mainliner Oahu), was being used for a semi-annual instrument check of a captain. At 10:39, the flight was cleared for an ILS approach to San Francisco Airport. The aircraft, with its No. 4 propeller feathered, stalled and abruptly dived from an altitude of approximately 300 feet, and was demolished on impact in San Francisco Bay. All three crew aboard were killed. The probable cause was an inadvertent stall at low altitude.[18]

April 29, 1952: Pan Am Flight 202, a Stratocruiser 10-26 (N1039V, named Clipper Good Hope), en route from Buenos Aires-Ezeiza and Rio de Janeiro-Galeão to New York via Port of Spain, crashed in the jungle in the south of the State of Pará. The probable cause was the separation of the No. 2 engine and propeller from the aircraft due to highly unbalanced forces, followed by loss of control and disintegration of the aircraft. All 50 passengers and crew died in the worst-ever accident involving the Boeing 377.[19]

July 27, 1952: Pan Am Flight 201, a Stratocruiser 10-26 (N1030V), en route from New York and Rio de Janeiro-Galeão to Buenos Aires-Ezeiza, suffered pressurization problems during the climb from Rio de Janeiro; a door blew open, a passenger was blown out, and the cabin was considerably damaged. One passenger (of 27 on board) died.[20]

December 25, 1954: A BOAC Stratocruiser 10-28 (G-ALSA, named RMA Cathay) crashed on landing at Prestwick at 03:30, killing 28 of the 36 passengers and crew on board. The aircraft had been en route from London to New York City when, on approach to Prestwick, it entered a steep descent before levelling out too late and too severely, hitting the ground short of the runway.[21]

March 26, 1955: Pan Am Flight 845/26, a Stratocruiser 10-26 (N1032V, named Clipper United States), ditched 35 miles (56 km) off the Oregon coast after the No. 3 engine and propeller tore loose from the wing, causing severe control difficulties. The aircraft sank after 20 minutes in water about 1,600 m (5,200 ft) deep. There were four fatalities among the 23 occupants, including two of the crew.

April 2, 1956: Northwest Orient Airlines Flight 2, a Stratocruiser 10-30 (N74608, named Stratocruiser Tokyo), ditched into Puget Sound after the flight engineer mistakenly failed to close the cowl flaps on the plane's engines, an error attributed to a confusing instrument layout. Although all aboard escaped the aircraft after a textbook ditching, four passengers and one flight attendant succumbed to drowning or hypothermia before being rescued.

Pan Am Flight 6 ditches near Hawaii

October 16, 1956: Pan Am Flight 6, a Stratocruiser 10-29 (N90943, named Clipper Sovereign of the Skies), ditched northeast of Hawaii after two of its four engines failed. The aircraft was able to circle around USCGC Pontchartrain until daybreak, when it ditched; all 31 on board survived.

November 8, 1957: Pan Am Flight 7, a Stratocruiser 10-29 (N90944, named Clipper Romance of the Skies), left San Francisco for Hawaii with 38 passengers and 6 crew. The aircraft crashed around 5:25 p.m. in the Pacific Ocean. There were no survivors, and the wreckage has never been found; only 19 bodies and bits of debris were recovered. There is speculation that two people aboard had a motive to bring the plane down. Eugene Crosthwaite, a 46-year-old purser, had shown blasting powder to a relative days before the flight, and had cut a stepdaughter from his will only one hour before it. William Payne, an ex-Navy demolitions expert, had taken out large insurance policies on himself just before the flight, and had a $10,000 debt he was desperate to pay off; an insurance investigator later suspected he had never boarded the plane. His wife received at least $125,000 in payouts.

June 2, 1958: A Pan Am Stratocruiser 10-26 (N1023V, named Clipper Golden Gate) was on a flight from San Francisco to Singapore with intermediate stops. As the aircraft touched down hard on runway 06 at Manila in rainy and gusty conditions, the undercarriage collapsed as a result of the hard landing. The plane skidded and swerved to the right, coming to rest 2,850 feet past the runway threshold and 27 feet from the edge of the runway. One passenger was killed when a blade of the No. 3 propeller broke off and penetrated the passenger cabin.[22]

April 10, 1959: At the conclusion of a flight from Seattle to Juneau, Alaska, a Pan Am Stratocruiser 10-26 (N1033V, named Clipper Midnight Sun) undershot on final approach and collided with an embankment. The aircraft caught fire and was destroyed, but all 10 passengers and crew survived.[23]

July 9, 1959: A Pan Am Stratocruiser 10-29 (N90941, named Clipper Australia) was on final approach to Haneda Airport when the gear was extended, showing three greens. When power was reduced prior to touchdown, the gear-unsafe warning horn sounded and a red gear-unsafe warning light illuminated. The captain first called for a go-around, but noticed that the airspeed was too low. The gear was retracted quickly and a belly landing was carried out.
All 59 passengers and crew on board survived, but the aircraft was written off.[24]

August 1967: An Aero Spacelines Stratocruiser 10-29 (N90942) suffered a ground collision with Stratocruiser 10-32 N402Q at Mojave, California; the aircraft was damaged beyond repair.[25]

May 12, 1970: The Aero Spacelines 377MGT was a converted Boeing Stratocruiser. The prototype, N111AS, first flew on March 13, 1970, and flight testing followed, carried out at Edwards AFB among other places. The accident occurred during the sixth takeoff of Flight Number 12, following the scheduled shutdown of the No. 1 engine at about 109 knots indicated airspeed (IAS). The takeoff was being made on runway 22 and the wind was from approximately 200 degrees at about 10 knots. Rotation occurred at about 114 knots; several seconds after rotation, according to one witness, the aircraft turned and rolled to the left, settling as it did so. The left wingtip contacted the ground, causing a severe yaw, and the forward fuselage struck the ground, destroying the flight deck. All four crew aboard were killed.[26]

Specifications (377)[edit]
Data from Airliners of the World[27]
General characteristics

See also[edit]
Related development
Aircraft of comparable role, configuration and era

1. ^ a b Wilson (1998), p. 16
2. ^ a b "Boeing History: Stratocruiser Commercial Transport". 1947-07-08. Retrieved 2012-06-18.
3. ^ Redding and Yenne 1997, p. 68.
4. ^ a b c Redding and Yenne 1997, p. 69.
5. ^
6. ^ a b c Redding and Yenne 1997, p. 71.
7. ^ a b Redding and Yenne 1997, p. 70.
8. ^ Pushing the Envelope: The American Aircraft Industry by Donald M. Pattillo[page needed]
9. ^ Flight, 28 October 1955, p. 671
11. ^ Aviation Week, 31 August 1953, p. 57. The article discusses CAB rulings, and Ireland was perhaps speaking at a hearing.
12. ^ American Aviation, 23 July 1951, p. 37
13. ^ American Aviation, 8 January 1951, p. 23
14. ^ Flight Simulation is Stimulation - Boeing 377 Stratocruiser. Retrieved 2011-03-31.
15.
^ Anak (Boeing 377). Retrieved 2011-03-31.
16. ^ Flickriver - Israel07 Israel Air Force Museum by brewbooks. Retrieved 2011-03-31.
17. ^ All About. Retrieved 2011-04-01.
18. ^ Accident description for CCCP-M25 at the Aviation Safety Network. Retrieved 17 November 2013.
19. ^ Accident description for N1039V at the Aviation Safety Network. Retrieved 15 September 2011.
20. ^ Accident description for N1030V at the Aviation Safety Network. Retrieved 24 September 2011.
22. ^ Accident description for N1023V at the Aviation Safety Network. Retrieved 17 November 2013.
23. ^ Accident description for N1033V at the Aviation Safety Network. Retrieved 17 November 2013.
24. ^ Accident description for N90941 at the Aviation Safety Network. Retrieved 17 November 2013.
25. ^ Accident description for N90942 at the Aviation Safety Network. Retrieved 17 November 2013.
26. ^ Accident description for N111AS at the Aviation Safety Network. Retrieved 17 November 2013.

External links[edit]
Execution by firing squad

From Wikipedia, the free encyclopedia

"Firing squad" and "shot at dawn" redirect here. For other uses, see Firing squad (disambiguation). For the UK memorial, see Shot at Dawn Memorial.

Military significance[edit]

Blank cartridge[edit]

In some cases, one or more members of the firing squad may be issued a weapon containing a blank cartridge instead of one housing a live round.[1] No member of the firing squad is told beforehand whether he or she is using live ammunition. This is believed to reinforce the sense of diffusion of responsibility among the firing squad members, making the execution process more reliable. It also allows each member of the firing squad to believe afterward that he or she did not personally fire a fatal shot[2] - for this reason, it is sometimes referred to as the "conscience round". However, according to Private W. A. Quinton, who served in the British Army during the First World War and was part of a firing squad in October 1915, he and eleven colleagues were relieved of any live ammunition and their own rifles before being issued with replacement weapons. The firing squad was then given a short speech by an officer before they fired a volley at the condemned man. He said of the episode, "I had the satisfaction of knowing that as soon as I fired, the absence of any recoil [indicated] that I had merely fired a blank cartridge".[3] In more recent times, such as in the execution of Ronnie Lee Gardner in the U.S. state of Utah in 2010, a rifleman may be given a "dummy" cartridge containing wax instead of a bullet, which provides a more realistic recoil.[4]

By country[edit]

The revolt of Czech units in Austria in May 1918 was brutally suppressed. On 1 April 1916, Belgian woman Gabrielle Petit was executed by a German firing squad at Schaerbeek after being convicted of spying for the British Secret Service during World War I.
During the Battle of the Bulge in World War II, three captured German spies were tried and executed by a U.S. firing squad at Henri-Chapelle on 23 December 1944. Thirteen other Germans were also tried and shot at either Henri-Chapelle or Huy.[5] These executed spies took part in Waffen-SS commando Otto Skorzeny's Operation Greif, in which English-speaking German commandos operated behind U.S. lines, masquerading in U.S. uniforms and equipment.[5][6]

The Brazilian Constitution of 1988 expressly prohibits the use of capital punishment in peacetime, but authorizes the death penalty for military crimes committed during wartime.[7] War must be declared formally, in accordance with international law and article 84, item 19 of the Federal Constitution, with due authorization from the Brazilian Congress. The Brazilian Code of Military Penal Law, in its chapter dealing with wartime offences, specifies the crimes that are subject to the death penalty. The death penalty is never the only possible sentence for a crime, and the punishment must be imposed by the military courts system. Under the norms of the Brazilian Code of Military Penal Procedure, the death penalty is carried out by firing squad.

Cuba, as part of its penal system, still uses death by firing squad. In January 1992, a Cuban exile convicted of "terrorism, sabotage and enemy propaganda" was executed by firing squad.[8] The Council of State noted that the punishment served as a deterrent and stated that the death penalty "fulfills a goal of overall prevention, especially when the idea is to stop such loathsome actions from being repeated, to deter others and so to prevent innocent human lives from being endangered in the future."[8] During the Cuban Revolution, both sides employed execution by firing squad.
According to Humberto Fontova, a refugee from Castro's Cuba, Che Guevara was responsible for hundreds of deaths by firing squad.[9]

A Soviet infiltrator being executed by a firing squad during the Continuation War.

The death penalty was widely used during and after the Finnish Civil War (January–May 1918); some 9,700 Finns and an unknown number of Russian volunteers on the Red side were executed during the war or in its aftermath.[10] Most executions were carried out by firing squads after sentences were handed down by illegal or semi-legal courts martial; only some 250 persons were sentenced to death by courts acting on legal authority.[11] The death penalty was abolished by Finnish law in 1949 for crimes committed during peacetime, and in 1972 for all crimes.[12] Finland is party to the Optional Protocol of the International Covenant on Civil and Political Rights, which forbids the use of the death penalty in all circumstances.[13]

Execution at Verdun at the time of the mutinies in 1917.

Private Thomas Highgate was the first British soldier to be convicted of desertion and executed by firing squad, in September 1914 at Tournan-en-Brie during World War I. In October 1916, Private Harry Farr was shot for cowardice at Carnoy; his condition was later suspected to have been shell shock. Highgate and Farr, along with 304 other British and Imperial troops executed for similar offenses, are listed at the Shot at Dawn Memorial, which was erected to honor them.[14][15] On 15 October 1917, Dutch exotic dancer Mata Hari was executed by a French firing squad at the Château de Vincennes, in the town of Vincennes, on charges of espionage for Germany during World War I.[16]

During World War II, on 24 September 1944, Josef Wende and Stephan Kortas, two Poles drafted into the German Army, crossed the Moselle River behind U.S. lines in civilian clothes, posing as Polish slave laborers, to observe Allied strength; they were to rejoin their own army the same day.
However, they were discovered by the Americans and captured. On 18 October 1944, Wende and Kortas were found guilty of espionage by a U.S. military commission and sentenced to death.[17] On 11 November 1944, they were shot in the garden of a farmhouse at Toul. Footage of the executions of both Wende[18] and Kortas[19] was filmed.[20]

On 15 October 1945, Pierre Laval, the puppet leader of Nazi-occupied Vichy France, was executed by firing squad at Fresnes Prison in Paris for treason against France.[21][22]

The memorial "Shoes on the Danube Promenade" was created to honor the Jews who were lined up on the banks of the Danube and shot dead by fascist Arrow Cross militiamen in Budapest during World War II.

Fabianus Tibo, Dominggus da Silva and Marinus Riwu were executed in 2006. Nigerian drug smugglers Samuel Iwachekwu Okoye and Hansen Anthoni Nwaolisa were executed in June 2008 on Nusakambangan Island.[23] Five months later, three men convicted for the 2002 Bali bombing (Amrozi, Imam Samudra, and Ali Ghufron) were executed on the same spot on Nusakambangan.[24] In January 2013, a 56-year-old British woman was sentenced to execution by firing squad for importing a large amount of cocaine; she lost her appeal against the sentence in April 2013.[25][26][27] On 18 January 2015, under the new leadership of Joko Widodo, six people convicted of producing and smuggling drugs into Indonesia were executed at Nusa Kambangan Penitentiary shortly after midnight.[28]

Following the 1916 Easter Rising in Ireland, 15 of the 16 leaders executed were shot by British military authorities under martial law. The executions have often been cited as a reason the failed Rising nonetheless galvanised public support in Ireland.[29]

On 1 December 1945, Anton Dostler, the first German general to be tried for war crimes, was executed by a U.S. firing squad in Aversa after being found guilty by a U.S.
military tribunal for ordering the killing of 15 U.S. prisoners of war in Italy during World War II.

Execution of Emperor Maximilian of Mexico, by Édouard Manet, 1868

During the Mexican War of Independence, several independentist generals (such as Miguel Hidalgo and José María Morelos) were executed by Spanish firing squads.[30] Emperor Maximilian I of Mexico and several of his generals were also executed, at the Cerro de las Campanas, after the Juaristas took control of Mexico in 1867.[30] Manet immortalized the event in a now-famous painting, The Execution of Emperor Maximilian, of which he made at least three versions. Firing-squad execution was the most common way to carry out a death sentence in Mexico, especially during the Mexican Revolution and the Cristero War.[30] After these events, Article 22 of the Mexican Constitution restricted the death sentence to a limited set of crimes, and capital punishment was later abolished completely.[31]

During World War II, some 3,000 persons were executed by German firing squads in the Netherlands. The victims were sometimes sentenced by a military court; in other cases the victims were hostages or arbitrarily chosen passers-by, executed publicly to intimidate the population and as reprisals against the resistance movements. After the attack on the high-ranking German officer Hanns Albin Rauter, about 300 persons were executed publicly as a reprisal. Rauter himself was shot near Scheveningen on 12 January 1949, following his conviction for war crimes. On 13 May 1945, five days after the capitulation of Adolf Hitler's Wehrmacht, a German firing squad executed two of its own Navy sailors on the wall of an air raid shelter near the Ford plant in Amsterdam, for desertion.
At the time, the execution was supervised and under Canadian control in Amsterdam.[32]

Nigeria used firing squads to execute criminals convicted of armed robbery, such as Ishola Oyenusi, Lawrence Anini and Monday Osunbor, as well as military officers accused of plotting coups against the government, such as Buka Suka Dimka and Major Gideon Orkar. The method has not been used since the return of democracy in recent years.

Vidkun Quisling, the leader of the collaborationist Nasjonal Samling party and of Norway during the German occupation in World War II, was sentenced to death for treason and executed by firing squad on 24 October 1945 at Akershus Fortress.[34]

José Rizal was executed by firing squad on the morning of 30 December 1896 in what is now Luneta Park, where his remains have since been placed.[35] During the Marcos administration, drug trafficking was punishable by firing-squad execution, as in the case of Lim Seng. Execution by firing squad was later replaced by the electric chair and then by lethal injection. On 24 June 2006, President Gloria Macapagal-Arroyo abolished capital punishment through Republic Act 9346. Existing death row inmates, who numbered in the thousands, were eventually given life sentences or reclusión perpetua instead.[36]

Serbian prisoners of war are arranged in a semi-circle and executed by an Austrian firing squad, 1917 (World War I).

Nicolae Ceaușescu was executed by firing squad following a show trial, reportedly singing the Internationale as the shots were fired,[37] bringing an end to the Romanian Revolution.

South Africa[edit]

Australian soldiers Harry "Breaker" Morant and Peter Handcock were executed by a British firing squad in the South African Republic on 27 February 1902 for war crimes committed during the Second Boer War; questions have since been raised in Australia as to whether they received a fair trial.

Both sides in the ongoing Syrian civil war have employed firing squads.
In January 2013, a Syrian civilian described how he narrowly survived a firing squad assembled by government supporters that resulted in the deaths of some 20 people.[38]

United Arab Emirates[edit]

In the United Arab Emirates, the firing squad is the preferred method of execution.[39]

United Kingdom[edit]

The Tower of London was used for executions during both World Wars: during World War I, 11 captured German spies were shot between 1914 and 1916. All spies executed on British soil during the First World War were buried in the East London Cemetery in Plaistow, London.[40] On 15 August 1941, German Corporal Josef Jakobs was shot for espionage during World War II. When the U.S. Army took over Shepton Mallet prison in Somerset in 1942, renaming it Disciplinary Training Center No. 1 and housing troops convicted of offences across Europe, two men were executed by firing squad for murder: Private Alexander Miranda on 30 May 1944 and Private Benjamin Pigate on 28 November 1944. Locals complained about the noise, as the executions took place in the open air at 1 a.m.

United States[edit]

In 1913, Andriza Mircovich became the first and only inmate in Nevada to be executed by shooting.[41] After the warden of Nevada State Prison was unable to find five men to form a firing squad,[42] a shooting machine was built to carry out Mircovich's execution.[43] John Deering allowed an electrocardiogram recording of the effect of gunshot wounds on his heart during his 1938 execution by firing squad.[44] Utah's 1960 execution of James W. Rodgers became the last execution by firing squad in the United States for nearly two decades. Since 1960, there have been three executions by firing squad, all in Utah:

1. Gary Gilmore was executed in 1977.
2.
John Albert Taylor chose the firing squad for his 1996 execution, in the words of The New York Times, "to make a statement that Utah was sanctioning murder."[45] However, a 2010 article in the British newspaper The Times quotes Taylor justifying his choice by saying he did not want to "flop around like a dying fish" during a lethal injection.[46] Execution by firing squad was banned in Utah in 2004, but as the ban was not retroactive,[48] three inmates on Utah's death row may still be executed by firing squad.[49] Idaho banned execution by firing squad in 2009,[50] temporarily leaving Oklahoma as the only state in the union using this method of execution (and only as a secondary method). In March 2015, Utah enacted legislation allowing execution by firing squad if lethal injection drugs are unavailable.[51]

See also[edit]

1. ^ Huie, William Bradford. The Execution of Private Slovik. Duell, Sloan & Pearce, 1954, p. 208.
3. ^ Carver, Field Marshal Lord. Britain's Army in the 20th Century. Macmillan Publishers Ltd, 1998, pp. 126-128. ISBN 0-330-37200-9.
4. ^ "How and why Gardner was shot". BBC News. 18 June 2010.
5. ^ a b Pallud, p. 15.
6. ^ Military police execute German spies in Belgium.
7. ^ Article 5 of the Brazilian Constitution (see paragraph XLVII-a).
8. ^ a b "Cuban Firing Squad Executes Exile". The New York Times. 21 January 1992.
9. ^ Jamie Weinstein (4 September 2013). "Author: Hemingway watched Che's firing squad massacres 'while sipping Daiquiris'". Daily Caller.
10. ^ War Victims of Finland 1914-1922 at the Finnish National Archives.
11. ^ a b Yliopistolehti, 1995.
12. ^ Kuolemantuomio kuolemantuomiolle at Statistics Finland (in Finnish).
13. ^ Finnish public treaty number SopS 49/1991.
14. ^ a b The Shot at Dawn Campaign. The New Zealand government pardoned its troops in 2000; the British government in 1998 expressed sympathy for the executed, and in 2006 the Secretary of State for Defence announced a full pardon for all 306 executed soldiers from the First World War.
16.
^ Mata Hari is executed.
18. ^ A German spy is executed by a U.S. Military Police firing squad in Toul, France, during World War II.
19. ^ A German spy is executed by the military policemen and is carried away covered in a white sheet, Toul, France.
21. ^ Vichy leader executed for treason.
22. ^ Pierre Laval.
26. ^ Karishma Vaswani (22 January 2013). "Bali drugs: Death sentence for Briton Lindsay Sandiford". BBC News. Retrieved 25 July 2013.
28. ^ AFP (18 January 2015). "Fury as Indonesia executes foreigners by firing squad". Daily Mail. Retrieved 18 January 2015.
30. ^ a b c Known history of the Mexican Revolution.
31. ^ Mexican Constitution, Article 22.
32. ^ The Execution of German Deserters by Surrendered German Troops Under Canadian Control in Amsterdam, May 1945.
33. ^ "Dutch Nazi Executed". Amarillo Globe, 7 May 1946, p. 1.
36. ^ "Arroyo kills death law". Sun Star Cebu, 25 June 2006.
37. ^ http://www.theguardian.com/world/2014/dec/07/nicolae-ceausescu-execution-anniversary-romania
38. ^ Alexander Marquardt (10 January 2013). "Syrian Describes Surviving Firing Squad". ABC News.
40. ^ British Military & Criminal History in the period 1900 to 1999 - German Spies caught in the UK during the First World War (1914-18).
41. ^ "Nevada State Prison Inmate Case Files: Andriza Mircovich". Nevada State Library and Archives. Retrieved 8 November 2010.
43. ^ Cafferata, Patty (June 2010). "Capital Punishment Nevada Style". Nevada Lawyer (State Bar of Nevada). Retrieved 8 November 2010.
46. ^ Giles Whittell (24 April 2010). "Utah death row inmate Ronnie Lee Gardner elects to die by firing squad". The Times (London).
51. ^ "Utah Brings Back the Firing Squad, So How Does It Work?". The New York Times. Retrieved 24 March 2015.

Further reading[edit]

External links[edit]
What is CrowdMag?

In the CrowdMag project, we explore whether the digital magnetometers built into modern smartphones can be used as scientific instruments. With the CrowdMag mobile apps, phones all around the world send magnetometer data to us. At our server, we check the quality of the magnetic data and make the data available to the public as aggregate maps and charts. We have two long-term goals:

1. Create near-real-time models of Earth's time-changing magnetic field by combining crowdsourced magnetic data with real-time solar wind data.
2. Map local magnetic noise sources (e.g., power transformers and iron pipes) to improve the accuracy of magnetic navigation systems.

The success of the CrowdMag project depends on participation by citizen scientists like you.

Why?

In this era of GPS and other geospatial technologies, why do we still need a compass? For a stationary device, GPS does not provide a pointing direction, and satellite signals can be jammed or masked; for example, it is difficult to receive GPS signals underwater. Earth's magnetic field (the geomagnetic field) provides an all-weather referencing system. Earth acts like a great spherical magnet, and its magnetic field resembles, in general, the field generated by a dipole magnet (i.e., a straight magnet with a north and a south pole) located at the center of Earth. The geomagnetic field has been observed and used for navigation since ancient times. Today, magnetic navigation is implemented on most planes and ships, and even on your smartphone, for safe and reliable navigation. As the geomagnetic field changes with time and space, it is important to monitor its changes. Scientists use observatories, satellites, and ship and airborne surveys to keep track of the change. Due to gaps in coverage - both in time and space - scientists are always looking for alternative ways to obtain geomagnetic data.
CrowdMag mobile applications can potentially improve magnetic field models and magnetic navigation by filling data gaps with existing technologies that capitalize on citizen science. Science-quality magnetic data are collected with low-noise sensors in a relatively noise-free environment; with such practices, an accuracy of about 1 nT is routinely achieved. A phone's magnetometer, however, also senses noise from the currents flowing in the phone's own electronic circuits, and man-made magnetic noise sources (e.g., electric transformers, power lines and cars) can overwhelm the natural magnetic field. Additionally, a phone's magnetometer has significantly lower sensitivity than a sensor used for science-quality measurements (~300 nT compared to ~0.1 nT). All these factors make it difficult to separate noise from the geomagnetic field in a phone's magnetic measurements. This is where we need your help: by sourcing magnetic data from a large number of users, we hope to reduce the noise in the data.

Who are we?

The geomagnetism group of NOAA's National Geophysical Data Center (NGDC) conducts original research on the magnetic field of Earth. Our primary goal is to create and update models of the geomagnetic field to keep pace with Earth's constantly changing magnetic field. Our magnetic models are integrated into millions of smartphones, car and aircraft navigation systems, and GPS devices so that users know which way is north. Frequently asked questions on geomagnetism.

CrowdMag Privacy Policy

In addition to the NOAA National Geophysical Data Center's privacy policy, the CrowdMag app implements the following measures. CrowdMag data are collected anonymously: we do not collect personally identifiable information, including your name, email address, or your device's unique identification number. If you enable the "send data" option in the CrowdMag app, the following information will be sent to NOAA:

• Time of measurement
• Location (Are you on Earth or Moon?)
• Location accuracy
• Magnetic data from the phone's magnetic sensor (this is the key!)
• Phone model (very many sensors!)

CrowdMag data are stored (for the foreseeable future) in an internal, non-public database at NOAA's National Geophysical Data Center, Boulder, CO, U.S.A. Our magnetic field research team will use these data to assess the utility of crowdsourced magnetic data for modeling the magnetic field. The team will periodically make science products such as maps, graphs and/or mathematical models using CrowdMag data. In order to further magnetic field research, these products may be presented at meetings, included in publications, or made available to the public via the Internet.

The CrowdMag app also stores your data in a database on your phone. This is true even if you do not share the data with us. Data older than seven days are automatically deleted. To delete the data manually, go to "My Data" and tap "Clear Data".

Crowdsourced magnetic data

This map shows data collected from phones around the world! Displayed are the crowdsourced magnetic data collected in the past 24 hours that fall within a tolerance level of the World Magnetic Model prediction. We have added some uncertainty to each data point shown to ensure the privacy of our contributors. The map is updated every hour. The F channel represents the total field strength, the H channel the horizontal component, and the Z channel the vertical component. Use the menu at the top right of the map to get data for a different date range.

Be a citizen scientist!

Use your phone as a magnetic sensor! Here is an opportunity for you to be a part of our research on the geomagnetic field. Install the CrowdMag app and share your magnetic data. You can also view maps and graphs shared by other citizen scientists.
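The two ideas above - deriving the F (total strength), H (horizontal) and Z (vertical) channels from a raw three-axis reading, and beating down random sensor noise by averaging many measurements - can be sketched in a few lines of Python. This is an illustrative simulation only: the reference field vector, the 300 nT noise figure, and all function names are assumptions made for the example, not part of CrowdMag's actual processing pipeline.

```python
import math
import random
import statistics

# Assumed "true" local field vector in nanotesla (roughly Earth-like
# in magnitude; illustrative values, not real CrowdMag data).
TRUE_X, TRUE_Y, TRUE_Z = 20_000.0, 1_500.0, 45_000.0
SENSOR_NOISE_NT = 300.0  # phone magnetometer noise level quoted in the text

def field_channels(x, y, z):
    """Map-channel values from one 3-axis reading (all in nT):
    H = horizontal component, F = total strength, Z = vertical component."""
    h = math.hypot(x, y)
    f = math.hypot(h, z)
    return {"F": f, "H": h, "Z": z}

def noisy_reading(rng):
    """One simulated phone reading: true field plus Gaussian sensor noise."""
    return (rng.gauss(TRUE_X, SENSOR_NOISE_NT),
            rng.gauss(TRUE_Y, SENSOR_NOISE_NT),
            rng.gauss(TRUE_Z, SENSOR_NOISE_NT))

def crowd_estimate(n, seed=1):
    """Average the F channel over n independent simulated readings;
    the standard error of the mean falls roughly as 1/sqrt(n)."""
    rng = random.Random(seed)
    return statistics.fmean(field_channels(*noisy_reading(rng))["F"]
                            for _ in range(n))

true_f = field_channels(TRUE_X, TRUE_Y, TRUE_Z)["F"]
estimate = crowd_estimate(10_000)  # the 300 nT noise largely averages out
```

Averaging 10,000 simulated readings shrinks the random error by roughly a factor of 100 (the square root of the sample size), which is the same statistical effect CrowdMag hopes to exploit across many contributors.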
Broad crested weirs 2

HOME page for Main menu

The information on this page was extracted from Practical Hydraulics by Melvyn Kay, published in 1998 by E & FN Spon. It explains very nicely the fundamental aspects of broad-crested weirs, with some very clear diagrams.

Solid weirs

These are much more robust than sharp-crested weirs and are used extensively for flow measurement and water level regulation in rivers and canals (Figure 7.5a below).

Height of weir and critical flow

All solid weirs work on the principle that the flow over the weir must pass through the critical depth. It is the height of a weir that determines whether or not the flow goes critical. Once this happens, a formula for discharge can be developed using the concept of specific energy and the special conditions that occur at the critical point. The following formula links the channel discharge (Q) with the upstream water depth measured above the weir crest (H):

    Q = C L H^1.5

where C is the weir coefficient, L is the length of the weir crest (m) and H is the head on the weir measured from the crest (m). To see how this formula is developed you need to refer to the lab experiment page on broad-crested weirs. As there is some draw-down close to the weir, the head is usually measured a few metres upstream, where the water level is unaffected by the weir. Note that, strictly speaking, H is the measurement from the weir crest to the energy line, as it includes the kinetic energy term. In practice H is measured from the weir crest to the water surface. The error involved in this is relatively small and can be taken into account in the value of the weir coefficient C. Alternatively, the head H is measured in a stilling chamber by the side of the channel, where the kinetic energy has been dissipated. As the formula is based on critical depth, it is not dependent on the shape of the weir.
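The discharge formula can be wrapped in a small Python helper. This is a sketch only: the coefficient in the example call is an illustrative textbook-style value, since C must be determined for the particular weir shape.

```python
def weir_discharge(c, crest_length_m, head_m):
    """Discharge over a critical-depth weir, Q = C * L * H^1.5.

    c              -- weir coefficient (shape-dependent, dimensional)
    crest_length_m -- length of the weir crest L, in metres
    head_m         -- head on the weir H, measured from the crest, in metres
    Returns Q in cubic metres per second.
    """
    if crest_length_m <= 0 or head_m < 0:
        raise ValueError("crest length must be positive and head non-negative")
    return c * crest_length_m * head_m ** 1.5

# Illustrative values only: C = 1.7 (a commonly quoted metric figure for
# broad-crested weirs), a 2 m crest, and 0.5 m of head.
q_example = weir_discharge(1.7, 2.0, 0.5)
```

Because Q varies as H^1.5, doubling the head increases the discharge by a factor of 2^1.5 (about 2.83), which is why accurate head measurement matters so much in practice.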
So the same formula can be used for any critical depth weir, not just for broad crested weirs. Only the value of C changes, to take account of the different weir shapes. See the diagrams below.

Determining the height of a weir

Just how high a weir must be for the flow to go critical is determined from the specific energy diagram. The effect of constructing a weir in a channel is the same as building a step up on the bed. In the case of a step up, the depth of water on the step decreases and the velocity increases (Figure 7.6a, see below). A worked example would show that for a 0.3m high step up, the depth of water was reduced from 0.99m upstream to 0.67m on the step (this is summarised in Figure 7.6b). This is still well above the critical depth of 0.29m.

Now assume that the step up on the bed is a weir and the intention is to make the flow go critical on the weir crest. This can be achieved by raising the crest level. Raising it from 0.3m to 0.56m further reduces the depth on the weir from 0.67m to 0.29m, which is the critical depth (Figure 7.6c). This is the minimum weir height required for critical flow. Note that although the weir height has increased by 0.26m, the upstream depth remains unchanged at 0.99m.

If the weir height is increased beyond 0.56m, the flow will still go critical on the crest and remain at the critical depth of 0.29m. It will not and cannot fall below this value. The difference will be in the upstream water level, which will now rise. Remember there is a unique relationship between the head on a weir and the discharge. So if the weir is raised by a further 0.1m to 0.66m, the upstream water level will also rise by 0.1m to maintain the correct head on the weir (Figure 7.6d).

The operation of weirs is often misunderstood: it is commonly believed that they always cause the flow to back up and so raise water levels upstream. This only happens once critical conditions are achieved on the weir.
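The worked example's figures can be reproduced from the specific energy relation for a rectangular channel. The unit discharge q below is my own assumption, chosen to give the quoted critical depth of 0.29 m; it is not stated in the text:

```python
g = 9.81   # gravitational acceleration, m/s^2
q = 0.49   # discharge per metre width, m^2/s (assumed to match the example)

def specific_energy(y):
    """Specific energy E = y + q^2 / (2 g y^2): depth plus velocity head."""
    return y + q ** 2 / (2 * g * y ** 2)

yc = (q ** 2 / g) ** (1 / 3)      # critical depth, ~0.29 m
E1 = specific_energy(0.99)        # specific energy of the approach flow
Ec = 1.5 * yc                     # minimum specific energy, reached at critical depth
min_height = E1 - Ec              # minimum weir height, ~0.56-0.57 m
```

Raising the crest consumes specific energy; once only Ec = 1.5 yc remains above the crest, the flow is critical and cannot be drawn down further, which is why further raising the weir lifts the upstream level instead.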
When a weir is too low for critical flow, it is the water level on the weir that drops; the upstream level is unaffected. But once critical flow is achieved, raising the weir more than is necessary will have a direct effect on the upstream water level.

Being sure of critical flow

Critical flow must occur for the discharge formula to work. But in practice it is not always possible to see critical flow, and so some detective work is needed. Figure 7.7 (below) shows the changing flow conditions as water flows over a weir. Upstream the flow is sub-critical; it then goes critical over the weir and super-critical downstream, before changing back to sub-critical through a hydraulic jump. When this sequence of changes occurs, it can be reasoned that critical flow must have occurred and so the weir is working properly.

The changes are best verified in reverse, from the downstream side. Remember that a hydraulic jump can only form when the flow is super-critical, so if there is a hydraulic jump in the downstream channel, the flow over the weir must be super-critical. If the upstream flow is sub-critical, which can be verified by the water surface dropping as water flows over the weir, then somewhere in between the flow must have gone critical. So a hydraulic jump downstream is good evidence that critical flow has occurred. Note that it is not important to know exactly where critical flow occurs; it is enough to know that it has occurred for the formula to work. In the laboratory, the depth of flow above the weir can be measured so that critical depth flow can be confirmed.

Broad-crested weirs

Broad-crested weirs are very common structures used for flow measurement. They have a broad rectangular shape with a level crest rounded at the edge (Figure 7.5b).
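The sub-critical/super-critical reasoning above can be expressed with the Froude number, Fr = v / sqrt(g y): Fr < 1 is sub-critical, Fr = 1 critical, Fr > 1 super-critical. A sketch, with depths and unit discharge chosen purely for illustration:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def froude(q, y):
    """Froude number for unit discharge q (m^2/s) and flow depth y (m)."""
    v = q / y                    # mean velocity
    return v / math.sqrt(g * y)

q = 0.49            # assumed unit discharge, m^2/s
froude(q, 0.99)     # upstream depth: Fr < 1, sub-critical
froude(q, 0.29)     # depth on the crest: Fr ~ 1, critical
froude(q, 0.10)     # shallow downstream depth: Fr > 1, super-critical
```

Working backwards from a downstream hydraulic jump is equivalent to observing that Fr passed from above 1 back to below 1, which can only happen if it passed through 1 somewhere on the weir.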
The value of C for a broad-crested weir is 1.6 and so the formula becomes:

Q = 1.6 L H^1.5

The formula derived from the total energy equation (Bernoulli equation) is:

Q = 1.705 L H^1.5

So the coefficient of discharge determined in the laboratory (the ratio of the measured to the theoretical discharge) should be around 0.94, as long as critical depth flow has occurred.

One disadvantage of this weir is the region of dead water just upstream. Silt and debris can accumulate here, and this can seriously reduce the accuracy of the weir formula. Another is the head loss between the upstream and downstream levels. Whenever a weir (or a flume) is installed in a channel there is always a loss of energy, particularly if there is a hydraulic jump downstream. This is the hydraulic price to be paid for measuring the flow.

Crump weirs

Crump weirs are commonly used in the UK for discharge measurement in rivers. Like the broad-crested weir, the Crump weir relies on critical conditions occurring for the discharge formula to work. It has a triangular section (Figure 7.5c near the top of the page): the upstream slope is 1 in 2 and the downstream slope is 1 in 5. The sloping upstream face helps to reduce the dead water region that occurs with broad-crested weirs. It can also tolerate a high level of submergence, and its crest can be constructed in a vee shape so that it can be used accurately for both small and large discharges.

Last edited: 20 February 2015 12:29:04
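The theoretical coefficient of 1.705 quoted above follows, in SI units, from the critical-flow derivation, C = (2/3)^1.5 * sqrt(g), and the 0.94 discharge coefficient is simply the ratio of the empirical and theoretical coefficients. A quick check:

```python
import math

g = 9.81                                   # gravitational acceleration, m/s^2
C_theory = (2 / 3) ** 1.5 * math.sqrt(g)   # ~1.705, from the Bernoulli derivation
C_broad = 1.6                              # empirical broad-crested weir value
Cd = C_broad / C_theory                    # ~0.94, the quoted discharge coefficient
```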
Dental Composites Linked To Behavioral Issues In Children. Research published in the journal Pediatrics indicates that some dental composites -- long promoted as safer overall than mercury-based amalgams -- are having a significant negative impact on the psychosocial functioning of children. In fact, bisphenol-A based dental restorations were found to be worse than mercury-based amalgams when it came to learning impairment and behavioral issues. [i]

The study used data from The New England Children's Amalgam Trial, which, surprisingly, found that children randomized to amalgam restorations had better psychosocial outcomes than those assigned to bisphenol-A based epoxy resin composites (bisGMA) for tooth restorations. The new analysis aimed to "examine whether greater exposure to dental composites is associated with psychosocial problems in children."

The results of the study, which looked at a group of 534 children, 6 to 10 years old, were as follows:

Children with higher cumulative exposure to bisGMA-based composite had poorer follow-up scores on 3 of 4 BASC-SR [self-reported Behavior Assessment System for Children] global scales: Emotional Symptoms (β = 0.8, SE = 0.3, P = .003), Clinical Maladjustment (β = 0.7, SE = 0.3, P = .02), and Personal Adjustment (β = -0.8, SE = 0.2, P = .002). Associations were stronger with posterior-occlusal (chewing) surfaces, where degradation of composite was more likely.

Moreover, the researchers found that "at-risk or clinically significant" scores were more common among children with greater exposure for "total problem behavior" and numerous "BASC-SR syndromes." They noted: "No associations were found with [non-BPA-based] compomer, nor with amalgam exposure levels among children randomized to amalgam."
In conclusion:

"Greater exposure to bisGMA-based dental composite restorations was associated with impaired psychosocial function in children, whereas no adverse psychosocial outcomes were observed with greater urethane dimethacrylate-based compomer or amalgam treatment levels."

It should be emphasized that this study should not be misinterpreted to mean that amalgams are safe. So, let us dispel the myth of a "safe" or "safer" amalgam in the following way. What follows is from an article published in the journal The Science of the Total Environment, well worth reading in its entirety, which covers the disturbing facts about amalgams quite thoroughly:

Dental amalgam is 50% metallic mercury (Hg) by weight, and Hg vapour continuously evolves from in-place dental amalgam, causing increased Hg content with increasing amalgam load in urine, faeces, exhaled breath, saliva, blood, and various organs and tissues including the kidney, pituitary gland, liver, and brain. The Hg content also increases with maternal amalgam load in amniotic fluid, placenta, cord blood, meconium, various foetal tissues including liver, kidney and brain, and in colostrum and breast milk.

Based on 2001 to 2004 population statistics, 181.1 million Americans carry a grand total of 1.46 billion restored teeth. Children as young as 26 months were recorded as having restored teeth. Past dental practice and recently available data indicate that the majority of these restorations are composed of dental amalgam.

Employing recent US population-based statistics on body weight and the frequency of dentally restored tooth surfaces, and recent research on the incremental increase in urinary Hg concentration per amalgam-filled tooth surface, estimates of Hg exposure from amalgam fillings were determined for 5 age groups of the US population. Three specific exposure scenarios were considered, each scenario incrementally reducing the number of tooth surfaces assumed to be restored with amalgam.
Based on the least conservative of the scenarios evaluated, it was estimated that some 67.2 million Americans would exceed the Hg dose associated with the reference exposure level (REL) of 0.3 μg/m(3) established by the US Environmental Protection Agency, and 122.3 million Americans would exceed the dose associated with the REL of 0.03 μg/m(3) established by the California Environmental Protection Agency. Exposure estimates are consistent with previous estimates presented by Health Canada in 1995, and amount to 0.2 to 0.4 μg/day per amalgam-filled tooth surface, or 0.5 to 1 μg/day per amalgam-filled tooth, depending on age and other factors.

While bisphenol-A is actually better known for its endocrine-disrupting properties, as it mimics and/or interferes with estrogen receptors and pathways in the body, research indicates that it moves freely through the blood-brain barrier, due to its lipophilic (fat-loving) properties.[ii] This means that whatever bisphenol-A is released from the composite material will eventually have direct access to children's developing central nervous systems. There is also preliminary research indicating that bisphenol-A may cause central nervous system hyperactivity. BisGMA-based dental composites have also been shown in in vitro experiments to be highly toxic to human DNA, raising concern that the adverse effects of these dental restorations stretch far beyond behavioral and cognitive problems to increased childhood cancer risk.

Unfortunately, BPA (and similar bisphenol analogs, such as bisphenol-S) is omnipresent in food liners, thermal printer papers, and paper currencies, to name but a few common sources of exposure, making the issue far larger than dental restorations. While reducing or eliminating exposure should be the first priority, there are natural substances which have been studied for their ability to reduce bisphenol-A toxicity, either by enhancing its elimination from the body or by enhancing its degradation.
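The per-surface figures lend themselves to a back-of-envelope calculation. Everything below beyond the quoted numbers (the surface count in the example, and the 20 m^3/day breathing rate used to turn an air REL into a rough daily-dose equivalent) is an illustrative assumption of mine, not from the paper:

```python
# μg/day of Hg per amalgam-filled tooth surface (range quoted in the paper)
PER_SURFACE_LOW, PER_SURFACE_HIGH = 0.2, 0.4

BREATHING_RATE = 20.0  # m^3/day, an assumed adult inhalation volume

def daily_dose(surfaces, per_surface):
    """Rough daily Hg dose (μg/day) for a given number of filled surfaces."""
    return surfaces * per_surface

def rel_to_dose(rel_ug_per_m3):
    """Crude daily-dose equivalent of an air reference exposure level."""
    return rel_ug_per_m3 * BREATHING_RATE

epa_dose = rel_to_dose(0.3)    # US EPA REL  -> ~6 μg/day equivalent
cal_dose = rel_to_dose(0.03)   # Cal EPA REL -> ~0.6 μg/day equivalent

# e.g. ten filled surfaces at the high end of the range
dose = daily_dose(10, PER_SURFACE_HIGH)  # ~4 μg/day, above the Cal EPA figure
```

This is only a sketch of the arithmetic involved; the paper's own estimates account for body weight and age group, which this ignores.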
One of the most interesting examples is probiotics. Read "Probiotics Destroy Toxic Chemicals In Our Gut For Us" to learn more.

[i] Nancy N Maserejian, Felicia L Trachtenberg, Russ Hauser, Sonja McKinlay, Peter Shrader, Mary Tavares, David C Bellinger. Dental Composite Restorations and Psychosocial Function in Children. Pediatrics. 2012 Jul 16. Epub 2012 Jul 16. PMID: 22802599

[ii] C S Kim, P P Sapienza, I A Ross, W Johnson, H M D Luu, J C Hutter. Distribution of bisphenol A in the neuroendocrine organs of female rats. Toxicol Ind Health. 2004 Jun;20(1-5):41-50. PMID: 15807407
The first popular history of the Emancipation of Europe’s Jews in the eighteenth and nineteenth centuries—a transformation that was startling to those who lived through it and continues to affect the world today. Freed from their ghettos, Jews ushered in a second renaissance. Within a century Marx, Freud, and Einstein created revolutions in politics, human science, and physics that continue to shape our world. Proust, Schoenberg, Mahler, and Kafka redefined artistic expression. Emancipation reformed the practice of Judaism, encouraged some to imagine a modern nation of their own, and within decades led to the dream of Zionism.
Excavating the History of Collaboration

Heonik Kwon

July 2, 2008. Volume 6 | Issue 7 | Number 0

Collaboration in War and Memory in East Asia: A Symposium

This symposium on war and collaboration in East Asia and globally features contributions by Timothy Brook, Prasenjit Duara, Suk-Jung Han, Heonik Kwon, a response by Brook and a further contribution by Margherita Zanasi. The authors examine war and collaboration in China, Korea, Vietnam, and Manchukuo, in history and memory and in comparative perspective. The symposium includes the following articles:

4. Heonik Kwon, Excavating the History of Collaboration
5. Timothy Brook, Collaboration in the Postwar
6. Margherita Zanasi, New Perspectives on Chinese Collaboration

In southern France, there was a group of people who lived through the time of the Vichy regime somewhat differently from most of their neighbors. A few of them still survive, in France or in Vietnam, but most have passed away. In 1937-1938, the French colonial authority in Indochina conscripted numerous laborers from the central region of Vietnam and shipped them to the great Mediterranean city of Marseilles. There, the two thousand Vietnamese were brought to the notorious poudrerie—the gunpowder factory of Marseilles. The conscripts manufactured gunpowder for the French army and, under the Vichy regime, for the German army under French management. A number of these Vietnamese laborer-soldiers objected to their situation and joined the French résistance, whereas others continued to endure the appalling working conditions in the factory.
After sharing the humiliating experience of German occupation with French citizens, these foreign conscripts found themselves in a highly precarious situation after their return home in 1948: the cadres in the Vietnamese revolutionary movement distrusted them, indeed looked upon them as collaborators with the colonial regime, while the French took no interest in their past service to the national economy or their contribution to the resistance movement against the German occupiers. Many of these returnees perished in the ensuing chaos of war, and many of their children joined the revolutionary resistance movement in the following era, which the Vietnamese call the war against America.

One returnee who survived the carnage has an extraordinary story of survival to tell: how he rescued his family in 1953 from the imminent threat of summary execution by pleading with French soldiers in their language, and again in 1967 thanks to the presence of an American officer in the pacification team who understood a few words of French, having fought in Europe during World War II. The man’s youngest brother died unmarried and without a descendant, and so the man’s eldest son now performs periodic death-anniversary rites on behalf of the deceased. This brother was killed in action during the Vietnam War as a soldier of the South Vietnamese army, while the man’s eldest son is a decorated former partisan fighter of the national liberation front.

[Photo: French forces parachute into Dienbienphu, 1954]

Anyone who studies the reality of a modern war, especially life under prolonged military occupation, will surely encounter stories of collaboration between the subjugated locals and the occupying power. No matter how brutal and unjust the process, military occupation is distinct from outright conquest in that some form of ties must be constructed between the conquered and the conqueror, not least for rebuilding a functioning social order and security after the devastation.
The cooperation is often coerced; people may have no choice but to cooperate. Since the authority that demands cooperation may have brutally harmed the locals in the process of conquest, collaborating with this authority can be a morally explosive issue. Nevertheless, when a war of conquest develops into a politics of occupation, or when the conquering power is defeated, the history of war inevitably involves stories of collaboration, and understanding that history remains critically incomplete without knowledge of these stories.

This is the message of Timothy Brook’s gripping account of collaboration in wartime China. Brook’s approach to the Chinese encounter with Japanese invasion and occupation is not merely about the reactions this devastating encounter triggered on the Chinese side, but equally about how to approach this important yet sensitive subject free from the dominant national historical narrative in China, which fails to acknowledge the existence of collaboration with the occupying power. The mere mention of collaboration can still set off charged emotional reactions. Brook explains that his intention is to recover the deeper “political landscape” of occupation, which he contrasts with the “moral landscape” of historical denial and misrepresentation. This dual scheme of historical knowledge is expressed in various other terms, such as surface knowledge/deep reality, simplicity/complexity, and clarity/ambiguity, and it constitutes an organizing principle in Brook’s alternative narrative, which features fascinating case studies. The moral landscape of occupation enforces a clear, uncontested boundary between the victims and the perpetrators of injustice; the political landscape was a much more complex one, consisting of myriad transgressions and ambiguities as well as repression and resistance.
Brook’s political/moral divide is therefore a way of making an authoritative claim of empirical knowledge of the past over an ideological and selective misrepresentation of it. The analytical divide is understandable, considering the sensitivity of the subject, and may be necessary for engaging with a history marred by national truth claims and national denials. The damage caused by colonial occupation is far from a settled topic; it remains a haunting subject not only between China and Japan but throughout the wider Pacific Asian region. However, Brook’s moral/political divide raises a few conceptual issues, both in terms of political theory and in view of a wider horizon of collaboration in the region’s modern political history.

As illustrated in the story of the Marseilles poudrerie, the history of collaboration is not limited to the time of colonialism, which Brook focuses on, but continues into the subsequent era of the global cold war and beyond. I say “cold war era” with some reservation, being aware that this reference to the epochal political form that permeated the second half of the twentieth century is at odds with how nations in the postcolonial world experienced the epoch of radical political bipolarity: in terms of vicious and often protracted civil wars, international wars, and other organized violence, rather than the “cold” imaginary war of containment and deterrence experienced in Western Europe and North America. We know that the unresolved questions of political collaboration with the colonial power were closely intertwined with the complications of postcolonial nation building and the political bipolarization that frequently characterized it.
In the context of an ideologically charged civil war waged as part of a global bipolar confrontation, people were driven to take sides with one or the other political force and, when the frontline moved, those who had cooperated with the other side—whether a foreign power or a domestic force—were severely and brutally punished. In the experience of many communities, the frontline moved as often and as regularly as night changed to day. This was patently the case in the theatre of the Vietnam War as well as in the Korean War, and the punishment of collaborators often targeted not only individuals accused of culpability but entire families or communities to which these individuals belonged. The politics of collaboration in this historical context was about the coerced mobilization of labor and resources by the bifurcated political forces. It was also about the devastation of communal norms and relations, when coerced collaboration with one side provoked brutal actions from the other side, and when this reciprocal violence extended to retaliatory actions within the community and between groups of people who suffered violence from different sides.

Seen against this tortuous, chaotic historical background, the moral and political landscapes of collaboration suggested by Brook take on new significance. The conceptual separation between the moral and the political landscape assumes a certain clarity in the friend/enemy antithesis—the contrast which Carl Schmitt defines in The Concept of the Political as foundational to the sovereignty of the modern state. The moral discourse described by Brook radicalizes this clarity by denying that a zone of ambiguity existed between friend and enemy, thereby generating a sense of absolute internal moral solidarity and purity in opposition to an absolute notion of the external enemy.
The political landscape Brook paints challenges this discursive representation and, in doing so, aims to shed critical light on the propensity to base political sovereignty on a radical clarity of the friend/enemy contrast. The problem, however, is that what appears to Brook to be a moral and moralizing discursive practice is in fact a highly political practice relating to the construction of state sovereignty. If the moral discourse of collaboration is actually a political practice, the political reality of collaboration, in turn, can be considered in moral or ethical terms.

The Vietnamese family introduced above has a multiple history of cooperating with the wrong side of the political divide, according to how this is defined by the postwar political community. One grandfather worked for the French colonial army, and his brother fought in opposition to the Vietnamese revolutionary movement, on what postwar Vietnamese classification calls the ben kia (the “American” side, as against ben ta, “our side”). This history of collaboration coexists in the family with a history of patriotic contribution, such as that embodied by his eldest son, and these two histories interact with each other within the family in ways that differ from how they play out in the wider society: the man’s experience of working in France helped to save his family from annihilation; his son’s record of revolutionary merit helped to rescue his family from the stigmatized status of a collaborator or “reactionary” family, which many other families had to endure in the postwar years. Seen within the family history and context, therefore, there is another history of collaboration emerging, related to but distinct from the political history of collaboration detailed by Brook.
This moral history of collaboration is about how historical actors cared for each other, and how together they strove to survive the prevailing political divide and maintain a normative life amidst the polarization by collaborating with each other. If we look closely enough, we will probably find similar histories of collaboration across political divides in the wider social field, between families and communities, and perhaps on an even broader horizon.

The above agenda entails recognition of the fact that beneath the political landscape of collaboration there is another spectrum of collaborative human action, one that exists within and against the extreme polarization produced by war and occupation. In excavating the muddy political history of collaboration, it will be important to dig further and to try to touch the bedrock history of human collaboration. In conducting this archaeology of history, it will be equally instructive to compare the bodies of unearthed objects from different sites and from different layers of a site. The comparison of materials from European and Asian sites is important, as Brook shows, but so will be comparisons among different Asian sites, as well as comparisons of materials discovered in the layer of colonial history with those emerging from the layer of bipolar national history. Brook’s work makes an important, decisive step towards this hopeful prospect of discovery.

Heonik Kwon teaches social anthropology at the University of Edinburgh. His new book is Ghosts of War in Vietnam (Cambridge University Press, 2008). Heonik Kwon wrote this article for Japan Focus. Posted July 4, 2008.
Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

Otto and self-driving trucks -- what do they mean?

Today sees the un-stealthing of a new company called Otto, which plans to build self-driving systems for long-haul trucks. The company has been formed by a skilled team, including former members of Google’s car team and people I know well. You can see their opening blog post.

My entire focus on this blog, and the focus of most people in this space, has been on cars, particularly cars capable of unmanned operation and door-to-door service. Most of those not working on that have focused on highway cars and autopilots. The highway is a much simpler environment, so much easier to engineer, but it operates at higher speeds, so the cost of accidents is worse. That is doubly true for trucks, which are fast, big and massive. At the same time, 99% of truck driving is actually very straightforward — stay in a highway lane, usually the slow one, with no fancy moving about.

Some companies have explored truck automation. Daimler/Freightliner has been testing trucks in Nevada. Volvo (trucks and cars together) has done truck and platooning experiments, notably the Sartre project some years ago. A group of European researchers recently did a truck demonstration in the Netherlands, leading up to the Declaration of Amsterdam, which got government ministers to declare a plan to modify regulations to make self-driving systems legal in Europe. Local company Peloton has gone after the more tractable problem of two-truck platoons with a driver in each truck, aimed primarily at fuel savings and some safety increases.

While trucks are big and thus riskier to automate, they are also risky for humans to drive.
Even though truck drivers are professionals who drive all day, around 4,000 people are still killed every year in the USA in truck accidents. More than half of those are truck drivers, but a large number of ordinary road users are also killed. Done well, self-driving trucks will reduce this toll. Just as with cars, companies will not release these systems until they believe they can match and beat the safety record of human drivers.

The Economics

Self-driving trucks don’t change the way we move, but they will have a big economic effect on trucking. Driver pay accounts for about 25-35% of the cost of truck operation, but early self-driving won’t actually take away jobs, because there is a serious shortage of truck drivers in the market — companies can’t hire enough of them at the wages they currently pay. It is claimed that there are 50,000 job openings unfilled at present. Truck driving is grueling work, sometimes mind-numbing, and it takes the long-haul driver away from home and family for over a week on every long-haul run. It’s not very exciting work, and it involves long days (11 hours is the legal limit) and a lot of eating and sleeping in truck stops or the cabin of the truck.

Average pay is about 36 cents/mile for a solo trucker on a common route. Alternately, loads that need to move fast are driven by a team of two. They split 50 cents/mile between them, but can drive 22 hours/day — one driver sleeps in the back while the other takes the wheel. You make less per mile per driver, but you are also paid for the miles you are sleeping or relaxing.

A likely first course is trucks that keep their solo driver, who drives up to 11 hours — probably less — while the software drives the rest: nonstop team-driving speed with just one person. Indeed, that person might be an owner-operator who is paying for the system as a businessperson, rather than a person losing a job to automation.
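Those pay numbers make the human-plus-software case easy to quantify. The average speed and the three scenarios below are my own illustrative assumptions, not figures from the post:

```python
AVG_SPEED = 50.0  # mph, an assumed average highway speed

def labor_cost_per_mile(rate_per_mile, driven_hours, total_hours):
    """Driver pay spread over all the miles the truck covers in a day."""
    total_miles = total_hours * AVG_SPEED
    paid_miles = driven_hours * AVG_SPEED
    return rate_per_mile * paid_miles / total_miles

solo = labor_cost_per_mile(0.36, 11, 11)    # $0.36/mile, ~550 miles/day
team = labor_cost_per_mile(0.50, 22, 22)    # $0.50/mile total, ~1100 miles/day
hybrid = labor_cost_per_mile(0.36, 11, 22)  # one driver + software: ~$0.18/mile
```

Under these assumptions, the hybrid truck covers team mileage at half the solo labor cost per mile, which is the economic point of keeping one driver and letting the software handle the rest.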
The human would drive the more complex parts of the route (including heavy traffic), while the system can easily handle the long nights and sparse heartland interstate roads. The economics get interesting when you can do things that are expensive for human drivers and teams. Aside from operating 22 or more hours/day at a lower cost, certain routes will become practical that were not economic with human drivers, opening up new routes and business models.

The Environment

Computer-driven trucks will drive more regularly than humans, effectively driving in “hypermile” style as much as they can. That should save fuel. In addition, while I would not do it at first, the platooning experimented with by Peloton and Sartre does result in fuel savings. Also interesting is the ability to convert trucks to natural gas, which is domestic and burns cleaner (though it still emits CO2). Automated trucks on fixed routes might be more willing to make this conversion.

Road wear

There is strong potential to reduce the damage to roads (and thus the cost of maintaining them, which is immense and seriously in arrears) thanks to the robotruck. That’s because heavy trucks and big buses cause almost all the road wear today. A surprising rule of thumb is that road damage goes up with roughly the 4th power of the weight per axle. As such, an 80,000lb truck with 34,000lb on two sets of 2 axles and 6,000lb on the front axle does around 2,000 times the road damage of a typical car!

I was investigated by the feds for taking a picture of the sun

A week ago, a rather strange event took place. No, I’m not talking about just the Transit of Mercury in front of the sun on May 9, but an odd result of it. That morning I was staying at the Westin Waterfront in Boston. I like astrophotography, and have shot several transits.
I am particularly proud of my gallery of the 2004 Transit of Venus, which is unusual because I shot it in a hazy sunrise where it was a naked-eye event, so I have photos of the sun with a lake and birds. Indeed, since the prior transit of Venus was in 1882, we may have been among the first alive to deliberately see one as a naked-eye event. I did not have my top lenses with me, but I decided to photograph it anyway with my small Sony 210mm zoom and a welding glass I brought along. I shot the transit holding the welding glass over the lens, with everything mounted on my super-light “3 legged thing” portable tripod. Not wanting to leave the lens pointed at the sun when I removed the glass, I pulled the drape shut, looked at photos and then tilted the camera away. I went off to my meetings in Boston.

At 10am I got a frantic call from the organizer of the Exponential Manufacturing conference I would be speaking at the next day. “You need to talk to the FBI!” he declared. Did they want my advice on privacy and security? “No,” he said, “they saw you taking photos of the federal building with a tripod from your hotel window and want to talk to you.” (Note: it probably wasn’t the FBI; that was just a first impression. The detectives would not name who had reported it.)

Of course, I had no idea there was any federal building out the window, and I did not take any photos of the buildings. In fact, I’m not quite sure what the federal facility is, though I presume it’s at the Barnes Building at 495 Summer St. — they never told me. Anybody know what’s there? Google Maps shows a credit union and a military recruiting office, and there was a suggestion of a Navy facility. Amusingly, the web page for the recruiting center features a (small) photo of the building. Nothing justifies them having a surveillance crew constantly looking into the hotel rooms of guests and going nuts when they see a camera on a mini-tripod. I talked to hotel security. Turns out they had gone into my room!
Sadly, though police can’t enter your room without a warrant, hotel staff usually can. Two Boston detectives were put on the case. After talking to hotel security, I thought it was over, but no, the next day after my talk, I had the detectives waiting for me in the hotel. First of all, I was concerned the hotel had given them my name. The hotel insisted the Boston innkeeper statutes require they do this. In reality, such statutes were found facially unconstitutional last year by the Supreme Court in City of Los Angeles v. Patel. In a facial challenge, the law is declared inherently invalid regardless of the specific facts of a case. The Boston police don’t believe this ruling applies to their law yet. So now my name is in police records over photographing the sun. Yes, when they met me, they realized I was just an astro-nerd and not a terrorist casing out the sun for an attack. (General conclusion: it’s too bright, so do it at night.) To scare me, and to justify their actions, they said the unnamed complainers (probably not FBI) had been “unsure if it was just a camera” (i.e. pretending it might be a gun) even though it looks nothing like one. And when I closed the drape — they were watching me live — they imagined it was because I had seen them and was hiding. Mostly I laugh, but the other part of me asks, “what the hell has gone wrong with this country?” Feds peering into our hotel rooms? Being afraid of a cheap lens (on an expensive camera, admittedly) on an ultralight tripod? Getting a police record for taking a photo out your hotel window, not even of the nondescript building that I would have no idea is a federal building? Having to demonstrate to not one, but two detectives that you’re just a harmless nerd? Not good. (They did Google me but did not clue in that I was on the board of the organization suing the NSA and other intelligence groups over the illegal mass wiretapping going on.)
Above you will find my evil picture of the sun — not that bad for a $150 lens, actually — and a picture of my room when I returned to it, with the camera pointing up and into the room. Yes, I took a picture of the buildings after all this, though I did not take one in the morning. That’s Mercury in the lower left corner of the solar disk. The dark area in the middle is a sunspot, another good location for an attack. Welcome to the new America. And of course I need to add “don’t search my room or give my name to police without contacting me” to my list of things a good hotel should do. (BTW, I see many duplicate comments pointing to the story of the Economics professor pulled from a plane for doing some diffEQs on paper in the plane seat on his way to a conference. I think the whole nerd world saw that story already.)

What should be in every hotel or AirBNB?

My recent efforts in consulting and speaking have led to a lot more travel — which is great sometimes, but also often a drain. I’ve been staying in so many hotels that I thought it worth enumerating some of the things I think every hotel room should have, and what I often find missing. Most of these things are fairly inexpensive to do, though a few have higher costs. The cheaper ones I would hope can be just included; I realize some might incur extra charges or a slightly more expensive room, or perhaps they can be offered as a perk to loyalty program members.

Desk space for all occupants

Most rooms have a workspace for only one, even if it’s a double room. The modern couple both have computers, and both need a place to work, ideally not crammed together. That’s also true when two co-workers share a room. And in a perfect room, both desk spaces share the other attributes of a good desk, namely:

• The surface is not glass. I would say more than half the desks in hotel rooms are glass, which don’t work well with optical mice. Sure, you put down some papers, but this seems kinda silly.
• Of course, 2 or even 3 power outlets, on the desk or wall above it. Ideally the “universal” kind that accept most of the world’s plugs. (Sure, I bring adapters but this is always handy.) Don’t make me crawl under the desk to plug things in, or have to unplug something else.

To my horror, Marriott has been building some new hotels with no desk space at all. Some person (I would say some idiot) decided that since millennials use fewer laptops and just want to sit on a couch with their tablet, it was better to sacrifice the desk. Those hotels had better have folding desks you can borrow; in fact, all hotels could do that to fix the desk space shortage, particularly if rooms are small. Another option would be a leaf that folds down from the wall.

Surfaces/racks for luggage and other things for everybody

Many rooms are very lacking in table or surface space beyond the desk. Almost every hotel room comes with only one luggage holder, where a couple might find themselves with 3 or in rare cases 4 bags. I doubt these folding luggage holders are that expensive, but if you can’t put more than one in every room, then watch people as they check in, note how many bags they have, and have somebody automatically send up some extra holders to their room. At the very least make it easy for them to ask. I mean these things are under $30 at quantity one. Get more! Bathrooms need surface space, too. Too often I’ve seen sinks with nowhere to put your toiletries and freedom bag. In fact, I want space everywhere to unpack the things I want to access.

Power by the bed (and other places)

Sure, I get that older hotel rooms did not load up with power outlets, and modern ones do. But aside from the desk, most people want power by the bed now, for their phone charger if nothing else. If you just have one plug by the bed, put a 3-way splitter (global plug, of course) on that plug so that people can plug things in without unplugging the light or clock.
And ideally up high, so I don’t have to crawl behind things to get at it. A little more controversial is the idea of offering USB charging power. Today, we all carry chargers, but the hope is that if charging becomes commonplace, then like the travel hair dryer people used to carry and no longer do, we might be able to depend on finding a charger. Problem is, charging standards are many and change frequently — we now have USB regular (useless) and fast-charge, along with Qualcomm quick-charge and USB-C. More will come. On top of this, strictly speaking, you should not plug your device into a random USB port which might try to take it over. You can get what’s called a “USB Condom” to block the data lines, but those might interfere with the negotiation phase of smarter power standards. A wireless “Qi” charging plate could be a useful thing. As a couple, we have had up to 8 things charging at the same time, when you include phones, cameras, external batteries, headphones, tablets and other devices. So I bring a 5-way USB fast charger and rely on laptops or other chargers to go the distance.

Let me access the HDTV as a monitor, or give me a monitor

Some rooms block you from any access to the TV. Some have a VGA or HDMI port built into a console on the desk. The latter is great, but usually the TV is mounted in a way that makes it not very useful as a computer monitor for working. It’s primarily useful for watching video. I pretty much never watch video in a hotel room, so given the choice, I would put the monitor by the desk, and it should be 1080p or better — in fact 4K should be the norm for any new installations. If you don’t have one, have one I can call down for, even at a modest fee.

Did a Tesla self-crash in self-park mode?

A recent news story from Utah about a Tesla that entered self-park (“Summon”) mode and drove itself into the back of a flatbed truck raises some interesting issues.
Tesla says that the owner of the vehicle initiated auto-Summon, which requires pressing the gear selector stalk twice and then shifting into park, then leaving the vehicle. After that the car goes into its self-park mode in 3 seconds, and the driver is supposed to be watching because the feature is a beta. The owner says he never activated the self-park, and if somehow he did by accident, he was standing by the car for 20 seconds showing it off to a stranger, and as such he claims he is absolutely certain the car did not begin moving 3 seconds after he got out. Tesla says the logs say otherwise. Generally, one believes log files over human memory, though these stories are surprisingly at odds. When doing Summon, the Tesla is flashing its hazard lights and moving, so it’s not exactly subtle. And it’s not supposed to work unless the keyfob is close to the car. No doubt there will be back and forth on just what happened. However, there are some things that are less disputed:

1. Unless the owner is out-and-out lying, there is a problem which allowed an owner to activate the auto-Summon feature by accident, and to do so when not close to the car. (When you activate it, the hazards start blinking and it shows auto-park on the screen.)

2. The car should not have hit the metal bars on the back of the flatbed. However, Tesla warns that the feature may not detect thin objects or hanging objects. These bars are quite low, but are sticking off the end of the truck by a large amount. Clearly the obstacle detection is indeed very “beta” if it could not see these. Apparently auto-park is done using the ultrasonic sensors, not the camera. Bumper-based ultrasound is not enough.

This also adds some fuel to the ongoing debate about maps. The car was in a place where there would be no reason to initiate Tesla’s self-park, which is designed for it to drive straight into narrow parking spaces.
In this case, it is not necessary to have a map of all the spaces a car might self-park, but even a fairly coarse and inaccurate map could allow the car to say, “This seems like an odd place to use the self-park feature, are you sure?” And pretty much all parallel parking spaces on the side of the road qualify as a place you would not use this particular self-park function. So is the owner lying? Was he playing with auto-Summon and screwed up? (You have to screw up royally as it drives quite slowly and any touch on the door handles or the fob will stop it.) The problem is that he claims that the car did it while he was not present, which is not supposed to happen, and if he was present, why did he not stop it?

Google develops a Chrysler minivan

If you had asked me recently what big car company was the furthest behind when it came to robocars, one likely answer would be Fiat-Chrysler. In fact, famously, Chrysler ran ads several years ago during the Super Bowl making fun of self-driving cars and Google in particular. Now Google has announced a minor partnership with Chrysler where they will be getting Chrysler to build 100 custom versions of their hybrid minivans for Google’s experiments. Minivans are a good choice for taxis, with their spacious seating and electric sliding doors — if you want a vehicle to come pick you up, it probably should have something like this. This is a pretty minor partnership, something closer to a purchase order than a partnership, but it will be touted as a great deal more. My own feeling is it’s unlikely a major automaker will truly partner with a big non-auto player like Google, Uber, Baidu or Apple. Everybody is very concerned about who will own the customer and the brand, and who will be the “Foxconn,” and the big tech companies have no great reason to yield on that (because they are big) and the big car companies are unlikely to yield, either.
Instead, they will acquire or do deals they control with smaller companies (like the purchase of Cruise or the partnership with Lyft from GM.) Still, what may change this is an automaker (like FCA) getting desperate. GM got desperate and spent billions. FCA may do the same. Other companies with little underway (like Honda, Peugeot, Mazda, Subaru, Suzuki) may also panic, or hope that the Tier 1 suppliers (Bosch, Delphi, Conti) will save them. Google custom designed a car for their 3rd generation prototype, with 2 seats, no controls, and an electric NEV power train. This has taught them a lot, but I bet it has also taught them that designing a car from scratch is an expensive proposition before you are ready to make many thousands of them.

The coming nightmare for the car industry

I have often written on the challenge facing existing automakers in the world of robocars. They need to learn to completely switch their way of thinking in a world of mobility on demand, and not all of them will do so. But they face serious challenges even if they are among the lucky ones who fully “get” the robocar revolution, change their DNA and make products to compete with Google and the rest of the non-car companies. Unfortunately for the car companies, their biggest assets — their brands, their experience, their quality and their car manufacturing capacity — are no longer as valuable as they were.

Their brands are not valuable

Today, if you summon a car with a company like Uber, you don’t care about what brand of car it is, as long as it’s decent. Even with the “luxury” variants of Uber, you don’t care which type of luxury car shows up, as long as it meets certain standards. For companies who have most of their value in their nameplate, this is nightmare #1. The taxi service (Uber or otherwise) becomes the brand that is seen and valued by the customer.
When you are buying a car for 5 years at the dealership, you care a lot about the brand, both for what it means, and for what it says about you when you show up driving it. When you buy a car by the ride, you don’t care a lot about the brand, because you are only going to use it for a short time.

Their brands might be tarnished

There will be accidents in robocars, unfortunately. Those accidents will cost money, but they will also cause problems in public image. The problem is, “Mercedes runs over grandmother” is a headline that will make people less likely to buy any type of Mercedes. As such, Mercedes has plans to market self-driving car service under their Car2Go brand. You may not even know that Car2Go is Daimler, and they might like it that way. “Google car runs over grandmother” is bad news for the Google car project, but is not going to make anybody stop doing web searches with Google. (Except the grandmother…) The non-car companies don’t have a car brand to tarnish, but they do have famous brands. They can use those brands to attract customers without the same risk. Big car companies have famous brands but may be afraid to use them.

They might just be the contract manufacturer

Companies like Uber, Google, Apple and others don’t plan to manufacture cars. Why would they? There is tons of car manufacturing capacity out there. They can just go to carmakers and say, “here’s a purchase order for 100,000 cars — built to our spec with our logo on them.” It will be very hard to turn down such an order. Still, some companies will be too proud to do this, or too unwilling to sign their own suicide note. If they don’t accept the order, somebody else will. If nobody in the west does, somebody in China will. China is the world’s #1 car manufacturing country, but the cars are rarely exported to the west. They would love to change that. A likely model for this is the relationship of Apple and Foxconn. Foxconn makes your iPhone, but many don’t know that.
Foxconn makes good money, but Apple makes much more, designing the product and owning the customer. The car companies don’t want to be Foxconn in the world of the future, but the alternative may be to be much smaller. (BTW, Foxconn has said it is interested in making cars.)

First-rate quality might not be that important

Chinese manufacturers don’t have the quality of the current leaders, but they may not need to. Just as Apple taught Foxconn how to make good iPhones, the tech companies might follow the same pattern here. But they may not even have to, because a less reliable robocar is not the same sort of problem an unreliable personal car is. Sure, it should not break down while you are riding in it — but even then the company can quickly send you a replacement to pick you up in just a few minutes. If it breaks down otherwise, it just goes out of service. This costs the fleet manager money, but they saved a lot of money with the lower-quality manufacturer. When cars can move on demand to service customers, breakdowns are not the same sort of problem. When your own car breaks down it’s a nightmare, and you will pay a lot to avoid it. For a fleet, it’s just a cost. All cars are down for maintenance some of the time. Cheaper cars will be down more, but if they are cheap enough, it still saves money. Customer perception of quality is still important. The vehicle must maintain the level of comfort and interior quality the customer has paid for. Safety-related failures are of course much less tolerable.

New car designs will be radically different

The robocar of the future will look quite different from the cars of the past. Existing car companies can handle this, but they lose some of the advantage that comes from decades of experience. The future robocars are probably electric and much simpler, with hundreds of parts rather than tens of thousands. It’s a new world and experience with the old may actually be a disadvantage.
Only Nissan and Tesla have lots of electric car experience today, though GM is building it. Electric platforms are much simpler and ripe for creativity from new players.

The challenge of robotaxis for the poor

While I’m very excited about the coming robocar world, there are still many unsolved problems. One I’ve been thinking about, particularly with my recent thinking on transit, is how to provide robotaxi service to the poor, which is to say people without much money and without credit and reputations. In particular, we want to avoid situations where taxi fleet operators create major barriers to riding by the poor in the form of higher fees, special burdens, or simply not accepting the poor as customers. If you look at services like Uber today, they don’t let you ride unless you have a credit card, though in some cases prepaid debit cards will work. Today a taxi (or a bus or Uber-style vehicle) has a person in it, primarily to drive, but they perform another role — they constrain the behaviour of the rider or riders. They reduce the probability that somebody might trash the vehicle or harass or be violent to another passenger. Of course, such things happen quite rarely, but that won’t stop operators from asking, “What do we do when it does happen? How can we stop it or get the person who does it to pay for any damage?” And further they will say, “I need a way to know that in the rare event something goes wrong, you can and will pay for it.” They do this in many similar situations. The problem is not that the poor will be judged dangerous or risky. The problem is that they will be judged less accountable for things that might go wrong. Rich people will throw up in the back of cars or damage them as much as the poor, perhaps more; the difference is there is a way to make them pay for it. So while I use the word poor here, I really mean “those it is hard to hold accountable” because there is a strong connection.
As I have outlined in one of my examinations of privacy, a taxi can contain a camera with a physical shutter that is open only between riders. It can do a “before and after” photograph, mostly to spot if you left items behind, but also to spot if you’ve damaged or soiled the vehicle. Then the owner can have the vehicle go for cleaning, and send you the bill. But they can only send you the bill if they know who you are and have a way to bill you. For the middle class and above, that’s no problem. This is the way things like Uber work — everybody is registered and has a credit card on file. This is not so easy for the poor. Many don’t have credit cards, and more to the point, they can’t show the resources to fix the damage they might do to a car, nor may they have whatever type of reputation is needed so fleet operators will trust them. The actions of a few damn the many. The middle class don’t even need credit cards. Those of us wishing to retain our privacy could post a bond through a privacy-protecting intermediary. The robotaxi company would know me only as “PrivacyProxy 12323423” and I would have an independent relationship with PrivacyProxy Inc., which would accept responsibility for any damage I do to the car, and bill me for it or take money from my bond if I’m truly anonymous.

Options for the poor

People with some level of identity (an address, a job) have ways to be accountable. If the damage rises to the level where refusing to fix it is a crime, fear of the justice system might work, but it’s unlikely the police are going to knock on somebody’s door for throwing up in a car. In the future, I expect just about everybody at all income levels will have smartphones and plans (though prepaid plans are more common at lower income levels.) You could volunteer to be accountable via your phone plan, losing your phone number if you aren’t.
Indeed, it’s going to be hard to summon a car without a phone, though it will also be possible using internet terminals, kiosks and borrowing the phones of others.

More expensive rides

A likely solution, seen already in the car rental industry, is to charge extra for insurance for those who can’t prove accountability another way. Car rental company insurance is grossly overpriced, and I never buy it because I have personal insurance and credit cards to cover such issues. Those who don’t often have to pay this higher price. Before we go too far, I predict the cost of robotaxi rides will get well below $1/mile, heading down to 30 cents/mile. Even with a 30% surcharge, that’s still cheaper than what we have today; in fact, it’s cheaper than a bus ticket in many towns, certainly cheaper than an unsubsidized bus ticket, which tends to run $5-$6. Still, my hope for robotaxi service is that it makes good transportation more available to everybody, and having it cost more for the poor is a defect. In addition, as long as damage levels remain low, as a comment points out, perhaps the added cost on every ride would be small enough that you don’t need to worry about this for poor or rich. (Though having no cost to doing so does mean more spilled food, drink and sadly, vomit.) Poor riders would still have to pay more to start, probably, or suffer the other indignities of the lower-class ride. However, a poor rider who develops a sterling reputation might be able to get some of that early surcharge back later. (Not if it’s insurance. You can’t get insurance back if you don’t use it, it doesn’t work that way!) Unfortunately, the poor who squander their reputation (or worse, just ride with friends who trash a car) could find themselves unable to travel except at a high cost they can’t afford. It could be like losing your car.
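The surcharge arithmetic is easy to make concrete. The per-mile cost, the 30% surcharge and the unsubsidized bus fare are the estimates from the text; the 5-mile trip length is an assumption for illustration:

```python
# Sketch of the surcharge arithmetic. Per-mile cost, surcharge and bus
# fare are the text's estimates; the 5-mile trip is an assumption.

base_cost_per_mile = 0.30   # projected robotaxi cost, $/mile
surcharge = 0.30            # insurance-style surcharge for riders who
                            # can't prove accountability another way
trip_miles = 5

fare = base_cost_per_mile * trip_miles * (1 + surcharge)
print(f"5-mile surcharged fare: ${fare:.2f}")  # $1.95

unsubsidized_bus = 5.00     # low end of the $5-$6 unsubsidized ticket
print(fare < unsubsidized_bus)  # True: still cheaper than the bus
```

Even with the full surcharge, the projected ride undercuts an unsubsidized bus ticket, which is why the surcharge is an indignity rather than a total barrier.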
The government

Governments might also step in and subsidize rides for these riders. The alternative, after all, is to continue otherwise unprofitable transit services with human drivers just for the sake of these people who can’t get private robocar rides. Transit may continue (though without human drivers) at peak times, but it almost surely vanishes off-peak if not for this.

How would a robocar handle an oncoming tsunami?

Recently a Reddit user posted this short video of an amazingly lucky driver in Japan who was able to turn his car around just in time to escape the torrent of the tsunami. The question asked was, how would a robocar deal with this? It turns out there are many answers to this question. For this particular question, as you’ll see by the end, the answer is probably “very well.” Let’s start with the bad news. On its own, built in a world where few thought about tsunamis, there is a good chance the vehicle would not handle it well. The instinct for most developers is to be conservative and cautious when facing an unknown situation. The most cautious thing is to do nothing, to just stop and perhaps ask for help from a person in the car or a remote center. Usually if you don’t understand the situation, doing something is much riskier than doing nothing. Usually — but clearly not here. This situation might be viewed as similar to something you might expect a car to have programming for — something approaching you fast. Cars will probably have logic to deal with a car coming the wrong way down their lane, and this looks a bit like that. It’s actually stuff coming in both lanes. We can imagine the car might have logic to attempt to retreat in that situation, though this isn’t going to look too much like anything the sensors have seen before. With 3D sensors, though, it will be clear that something huge is coming fast. And with a map of what the road should look like, you will easily tell the wall of water and debris from what you should be seeing.
The best reason the car might handle this, however, is the very existence of this video, and the posts about it — including this blog post here. The reason is that the developers of robocars, in order to test them, are busy building simulators. In these simulators they are programming every crazy situation they can think of, even impossible situations, just to see what each revision of the car software will do. They are programming every situation that their cars have encountered on the road. Every situation that caused their software to make an error, or anybody else to make an error. In other words, if you can think of it after a little bit of thinking, they probably thought of it too. And if it’s in blog posts and famous news stories, they probably heard about it. Flooding and every kind of strange weather ever reported. The details of every accident from every police report that can be turned into a simulation. Earthquakes. Tornadoes. Hurricanes. Alien invasions. Oncoming tanks. If you can think of it without a major effort, and it seems like it could happen, they will put it in. And so every car will indeed be tested. In fact, the developers will probably have fun with the really strange situations which are so rare that they may not have commercial or safety justification, but still are interesting. Scenes from movies. James Bond car chases. You name it. In this particular case, there is another thing to help with this situation. Tsunamis don’t happen by surprise, not anymore. The world, having seen them like this, now has earthquake detection and tsunami warnings everywhere robocars are likely to go in the near future. The warnings will be transmitted along the same data stream warning cars about traffic, weather and road conditions. We even have maps of the terrain and can predict what areas are low and which areas cars should head to in the event of a tsunami warning, and they will take routes designed to avoid risk.
With superhuman knowledge, they will not panic and will do much better than people at taking the route to high ground, and so the odds of them confronting the wall of water would be very slim, unless there was no choice. The robocar simply would not have been going down that road the way the Japanese driver was. Now we get to a final special ability of robocars — they will be just as capable in reverse gear as going forward, aside from the speed limitations of reverse gear. So while you reverse timidly, a robocar need not do so. It will be able to pull off the fastest 3-point turn you can imagine if it wants to, or even just escape in reverse. Of course, if it needs more speed than reverse offers, it would turn around in the best spot to do so. Stanford has even done a lot of research on drifting, and this will go into simulators too, so cars will probably know how to turn around as fast as a stunt driver if they have to. To top it all off, electric cars may be able to go as fast in reverse as they can going forward. (I should note that not all car designs feature sensors that see the same forward and back, so this may not be true for all vehicles, but all vehicles that can reverse at all need not be timid about it the way people are.) So for this situation, and anything else we know about, robocars should do a superhuman job. That doesn’t mean there aren’t things nobody ever thought of. But the more videos and stories like this that get recorded, the less and less probable unknown events will be, and thus an unknown event where the software does the wrong thing becomes not impossible, but very low probability.

What is the optimum group vehicle size?

My recent article on a future vision for public transit drew some ire from those who viewed it as anti-transit. Instead, the article broke with transit orthodoxy by suggesting that smaller vehicles (including cars and single-person pods) might produce more efficient transit than big vehicles.
Transitophiles love big vehicles for reasons beyond their potential efficiency, so it’s a hard sell. Let’s look at the factors which determine what vehicle size makes the best transit. Before the robocar future arrives, vehicle size is partly dominated by the need for drivers. Consider a bus route which could have one 40-person bus every 30 minutes or a 20-person bus every 15 minutes. The smaller vehicles have the same total capacity, but they will use a little more energy, a little more road space and cost somewhat more to buy. This leads to the intuition that bigger must be better. At the same time, the smaller vehicles need twice as many drivers. Labour is more than half the operating budget of many transit agencies. Look at the Chicago Transit Authority and you see labour listed as 69% — and much labour is actually in other subcontractor categories — while fuel and electricity are only 7%; the capital costs like vehicles are not even included here. Needing twice the drivers dominates the equation. Riders, of course, would have an easy time deciding. They would love having vehicles every 15 minutes! Indeed they would be very pleased to get a 7-person van every 5 minutes if they could; the difference would be qualitative, not just quantitative, because when you get to that frequency you start thinking about it more like a car. In addition, the 2 small vehicles do about 1/8th the damage to the road of the one large vehicle. Taking the cost of drivers out, what is the optimum size? More to the point, what provides the optimum balance between rider demand (which would love more frequent service in smaller vehicles) and efficiency (which pushes for larger vehicles, up to a point)? In particular, more smaller vehicles does not just have to mean more frequent service on one route; it can also mean more routes. More routes can both mean getting places you could not get to before, and also getting there faster because you don’t need as many transfers.
Here’s where big vehicles are better:

• When near full, or overfull, they use:
  • Less energy per passenger-mile
  • Less road space per passenger
  • Less vehicle cost (depreciation, maintenance, etc.) per passenger
• Less frequent service forces people to bunch their travel together with others, allowing the advantages above.
• Fewer stops also force people to bunch together, to live near transit and to walk more.

Here are some of the advantages of more, smaller vehicles:

• As noted, road damage goes roughly as the 4th power of vehicle weight per axle.
• More frequent and/or ubiquitous service, as described above.
• Less likely to be lightly loaded (a smaller vehicle is sent when demand is light.)
• When lightly loaded, much more efficient in all factors than a large vehicle.
• While the whole fleet takes more total road space than the large vehicles, each vehicle causes much less obstruction of traffic.
• Able to use smaller bus stops and navigate tighter turns and narrower roads.
• Able to park in smaller spaces, including many lots for cars (though still taking as much or slightly more total space.)
• Stops are sometimes fewer, and take less time (fewer people getting on/off any given vehicle.)
• Each vehicle is considerably less expensive.

The big trade-off comes because the load varies. The full 40-person bus is an efficiency and cost win over two full 20-person buses (or 10 full 4-person cars) but not as much of a win as you might imagine. But the real question involves the frequent issue of a half-full 40-person bus vs. a full 20-person bus. In this case, the smaller vehicle is quite a bit more efficient. Even worse is the 1/4-full 40-person bus vs. the half-full 20-person bus or 3 4-person cars. Here the winner is probably the cars, and this is important, because the average bus in the USA actually has just under 10 people on it.
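The load-factor trade-off can be sketched numerically. The fuel economy figures below are rough assumptions chosen only to illustrate why occupancy dominates (a big diesel bus at roughly 4 mpg, a car at roughly 30 mpg equivalent), not figures from the text:

```python
# Rough sketch of energy per passenger-mile vs. occupancy. The fuel
# economy numbers are illustrative assumptions, not measured data:
# a 40-seat diesel bus at ~4 mpg and a car at ~30 mpg equivalent.

def gallons_per_passenger_mile(vehicle_mpg, passengers):
    """Fuel burned per passenger-mile at a given occupancy."""
    return 1.0 / (vehicle_mpg * passengers)

full_bus = gallons_per_passenger_mile(4, 40)   # full 40-seat bus
avg_bus = gallons_per_passenger_mile(4, 10)    # the ~10-rider US average
full_car = gallons_per_passenger_mile(30, 3)   # a nearly full car

print(f"full 40-seat bus : {full_bus:.4f} gal/passenger-mile")
print(f"10-rider bus     : {avg_bus:.4f} gal/passenger-mile")
print(f"3-person car     : {full_car:.4f} gal/passenger-mile")

# A full big bus wins, but at typical average loads a nearly full
# small vehicle beats the big one.
print(full_car < avg_bus)  # True
```

Under these assumptions the full bus is the cheapest per passenger-mile, but at the real-world average load of about 10 riders, a well-loaded car already beats it, which is the core of the argument for matching vehicle size to demand.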
The ideal situation would be to send out a fleet of 40 or even 60 person buses at the peak of rush hour, then put those in garages and send out small buses during the off-peak times, and just cars in the off-off-peak times like the night. Have every vehicle run as close to full as possible and you get your greatest efficiency. This is not an option for a few reasons:

• To do that with buses, you must lower frequency to keep them full, and riders will reject that.
• Agencies usually can’t afford huge fleets of large vehicles as well as huge fleets of medium vehicles, only to keep the large vehicles idle for most of the day. They are better off choosing one fleet, with a loss of efficiency.

In the robocar world, they will be able to call upon a large fleet of small vehicles (cars for 1-4 people) at all times, and they won’t need to own them. But the transit companies and agencies still must own the larger (8 to 60 person) vehicles. In some cities, it may be practical to keep a fleet of large vehicles for use only at rush hour. In fact, that’s what some commuter train lines do, and they are the most efficient transportation lines in the USA. The rush-hour-only commuter trains run full out to the suburbs, spend the night in the suburbs and run full back into town. That’s really efficient. The commuter trains with daytime service are not nearly as good. Train lines that can drop cars off-peak get a win here as well. How practical this is depends on how long you need the big bus to last. Transit vehicles tend to be robust, heavy and expensive, and they are well maintained to maximize their lifetime. A bus that only works rush hour will last more years than one that works all day. The problem is it may last too many years, to the point that it becomes obsolete or wears out from age rather than just miles.
Leaving vehicles idle also means tying up capital for longer, so even if you find a good schedule for depreciation of the vehicles, the cost of money makes it difficult to have two or three different fleets. So in the end, cities have to choose. Because of the labour cost of drivers, they almost always choose the bigger vehicles. Without that cost, the advantages of the smaller vehicles win out because of the variability of load. If the line regularly runs low-load vehicles, it has chosen a size that is larger than optimal. This is all general analysis. The next step I would like to see from the transportation research community is to build these models with the actual numbers from real transit systems. For each city, for each route, the optimal size will be different. And of course, the existence of the robocars will change demand, which also changes load. They can change demand down (by being a superior solution) or up (by making it easier to get to the shared vehicle.) They can also replace the big vehicles entirely at off-peak times. That sounds like competition, but it actually can be enabling. One reason transit agencies run their big vehicles all day long (erasing their efficiency) is that riders want assurance they can come in at rush hour and then decide to leave early or late. Thus there has to be off-peak service. If riders can be assured that something else (like a robotic taxi or even an Uber) can get them home inexpensively off-peak, they are more willing to take the transit in. 
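The kind of model I’d like to see the research community build can be sketched in a few lines. Everything here is a toy: the per-departure costs, the minimum acceptable frequency, and the candidate sizes are all assumptions, but the sweep shows how removing the driver shifts the optimum.

```python
# Toy optimizer: pick the vehicle size that minimizes hourly cost for a
# given demand, subject to a minimum frequency riders will accept.
# Costs per departure are invented for illustration.

def best_size(demand_per_hour, sizes=(4, 8, 20, 40, 60),
              min_freq_per_hour=2, driver_cost=0.0):
    best = None
    for seats in sizes:
        # departures needed to carry demand, but at least the minimum frequency
        departures = max(min_freq_per_hour, -(-demand_per_hour // seats))
        cost = departures * (driver_cost + 0.25 * seats)  # assumed $/departure
        if best is None or cost < best[1]:
            best = (seats, cost)
    return best[0]

print(best_size(200, driver_cost=40))  # → 60 — with drivers, big buses win
print(best_size(200, driver_cost=0))   # → 4 — driverless, small vehicles win
```

A real version would use actual route-level demand curves and agency cost data, and would vary by city and by route, but the shape of the answer is already visible.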
Indeed, it could make sense for transit agencies to say, “we will have low service after 8pm, but if you can show you rode with us in the morning, we will subsidize a private car for you after hours 10 times a month.” They might actually save money by offering this rather than running a mostly empty bus.

Comma.ai’s neural network car and the hot new technology in robocars

Perhaps the world’s most exciting new technology today is deep neural networks, in particular convolutional neural networks of the sort behind “Deep Learning.” These networks are conquering some of the most well known problems in artificial intelligence and pattern matching, and since their development just a few years ago, milestones in AI have been falling as computer systems that match or surpass human capability have been demonstrated. Playing Go is just the most recent famous example. This is particularly true in image recognition. Over the past several years, neural network systems have gotten better than humans at problems like recognizing street signs in camera images and even beating radiologists at identifying cancers in medical scans. These networks are having their effect on robocar development. They are allowing significant progress in the use of vision systems for robotics and driving, making that progress much faster than expected. Two years ago, I declared that the time when vision systems would be good enough to build a safe robocar without LIDAR was still fairly far away. That day has not yet arrived, but it is definitely closer, and it’s much harder to say it won’t be soon. At the same time, LIDAR and other sensors are improving and dropping in price. Quanergy (to whom I am an advisor) plans to ship $250 8-line LIDARs this year, and $100 high resolution LIDARs in the next couple of years. Deep neural networks are a primary tool of MobilEye, the Jerusalem company which makes camera systems and machine-vision ASICs for the ADAS (Advanced Driver Assistance Systems) market.
This is the chip used in Tesla’s autopilot, and while Tesla claims it has done a great deal of its own custom development, MobilEye claims the important magic sauce is still mostly theirs. NVIDIA has made a big push into the robocar market by promoting their high end GPUs as the supercomputing tool cars will need to run these networks well. The two companies disagree, of course, on whether GPUs or ASICs are the best tool for this — more on that later.

In comes Comma.ai

In February, I rode in an experimental car that took this idea to the extreme. The small startup, led by iPhone hacker George Hotz, got some press by building, in a short amount of time, an autopilot similar in capability to many others from car companies. In January, I wrote an introduction to their approach, including how they used quick hacking of the car’s network bus to simplify having the computer control the car. They did it with CNNs, and almost entirely with CNNs. Their car feeds the images from a camera into the network, and out from the network come commands to adjust the steering and speed to keep the car in its lane. As such, there is very little traditional code in the system, just the neural network and a bit of control logic. Here’s a video of the car taking us for a drive.

The network is built instead by training it. They drive the car around, and the car learns from the humans driving it what to do when it sees things in the field of view. To help in this training, they also give the car a LIDAR, which provides an accurate 3D scan of the environment to more absolutely detect the presence of cars and other users of the road. By letting the network know during training that “there is really something there at these coordinates,” the network can learn how to tell the same thing from just the camera images. When it is time to drive, the network does not get the LIDAR data; however, it does produce outputs of where it thinks the other cars are, allowing developers to test how well it is seeing things.
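The training idea — camera in, human steering and LIDAR ground truth as targets, LIDAR absent at drive time — can be caricatured in a few lines. This is emphatically not Comma.ai’s architecture; the “network,” shapes, loss and update rule below are stand-ins invented to show how an auxiliary LIDAR target shapes a camera-only model.

```python
# Toy sketch of training with a LIDAR auxiliary target. The "network" is a
# linear stand-in for a CNN; real systems use deep nets and real gradients.

def network(image, weights):
    """Camera-only model: returns (steering, predicted obstacle distance)."""
    s = sum(w * x for w, x in zip(weights, image))
    return s, 2.0 * s  # shared features feed both output heads

def train_step(weights, image, human_steering, lidar_distance, lr=0.01):
    steer, dist = network(image, weights)
    # Error mixes imitation of the human driver with the LIDAR target.
    err = (steer - human_steering) + 2.0 * (dist - lidar_distance)
    return [w - lr * err * x for w, x in zip(weights, image)]

weights = [0.0, 0.0, 0.0]
for _ in range(200):  # "drive around" on the same training frame
    weights = train_step([1.0, 0.5, -0.5], weights=weights,
                         human_steering=0.3, lidar_distance=0.6) \
        if False else train_step(weights, [1.0, 0.5, -0.5], 0.3, 0.6)

# At drive time, only the camera input is used; the distance output remains
# available so developers can check how well the net "sees":
steer, dist = network([1.0, 0.5, -0.5], weights)
print(round(steer, 2), round(dist, 2))  # → 0.3 0.6
```

The point is the asymmetry: the LIDAR signal exists only in the loss, so the deployed model must recover the same information from pixels alone.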
This approach is both interesting and frightening. It allows the development of a credible autopilot, but at the same time, the developers have minimal information about how it works, and can never truly understand why it is making the decisions it does. If it makes an error, they will generally not know why it made the error, though they can give it more training data until it no longer makes the error. (They can also replay all other scenarios for which they have recorded data to make sure no new errors are made with the new training data.)

Everybody should have RAID and a filesystem to manage it

For many years, I have been using RAID for my home storage. With RAID (and its cousins) everything is stored redundantly, so that if any disk drive fails, you don’t lose your data, and in fact your system doesn’t even go down. This comes at a cost of anywhere from about 25% to 50% of your disk space (but disk is cheap) and it also often increases disk performance. Some years ago I wrote about how disk drives should be sold in form factors designed for easy RAID in every PC, and I still believe that. RAID comes with a few costs. One of them is that you need to do too much sysadmin to get it working right. The nastiest cost is that there are some edge cases where RAID can cause you to lose all your data where you would not have lost it (or all of it) if you had not used RAID. That’s bad — it should never make things worse. A few years ago I switched to one of the new filesystems which put the RAID-like functionality right into the filesystem, instead of putting it in a layer underneath. I think that’s the right thing, and in fact, fear of layer violations is generally a mistake here. I am using BTRFS. Others use ZFS and a few other players. BTRFS is new, and so its support for RAID-5 (which only costs 25-33% of your space and is fast) is too young, so I use its RAID-1, where everything is just written twice, onto two different disks.
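The reason RAID-5 only costs you one drive’s worth of space is simple XOR parity: the parity block is the XOR of the data blocks, so any one lost disk can be rebuilt from the survivors. This miniature shows just the recovery math (real RAID stripes data and rotates parity across the disks):

```python
# RAID-5 parity in miniature: parity = XOR of all data blocks, so any
# single lost block can be reconstructed from the rest plus parity.

def parity(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"disk0data", b"disk1data", b"disk2data"]
p = parity(data)

# Disk 1 dies; rebuild its contents from the other disks plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # → True
```

With n data disks and one parity disk, the space overhead is 1/(n+1), which is where the 25-33% figure for small arrays comes from.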
Unlike traditional RAID, BTRFS will do RAID-1 on more than 2 drives, and they don’t all have to be of equal size. That’s good, though I ran into some problems with the fairly common operation of increasing the size of my storage by replacing my smallest drive with a much larger one. The long term goal of such systems should be near-trivial sysadmin. The system should handle all drives and partitions thrown at it in a “just works” way. You give it any number of drives and it figures out the best thing to do, and adapts as you change. You should only need to tell it a few policies, such as how much need you have for reliability and speed and how much space you are willing to pay for them. The system should never put you at more risk than you ask for, or more risk than you would have had with just one drive or a set of non-redundant drives. That’s hard, but it is a worthwhile goal. But I think we could do more, and we could do it in a way that we get better and better storage with less sysadmin.

Multiple drives, but not too many

I think most users will probably stick to 2 drives, and rarely go above 3. The reality is that 4 or more is for servers and heavy users, because each drive takes power and generates heat. However, adding an SSD to the mix is always a good idea, though it’s not for redundancy.

The OS should understand what’s happening and reflect it in the filesystem

The truth is not all files need as much redundancy and speed. The OS can know a lot about that and identify:

• Files that are accessed frequently vs. ones not accessed much, or for a long time
• Files that are accessed by interactive applications, which causes those applications to be IO bound (ie. slowed by waiting for the disk)
• Files that have been backed up in particular ways, and when

Your OS should start by storing everything redundantly (RAID 1 or 5) until such time as the disk starts getting close to full.
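As an aside on the unequal-drive RAID-1 mentioned above: usable space under two-copy redundancy is bounded both by half the total and by how much the other drives can mirror against the largest one. This greedy estimate matches the usual rule of thumb (BTRFS’s actual allocator works in chunks, so real numbers can differ slightly):

```python
# Estimate usable space under two-copy RAID-1 across unequal drives.
# Every chunk must live on two different drives.

def raid1_usable(drives_gb):
    total = sum(drives_gb)
    largest = max(drives_gb)
    # If one drive holds more than half the total, the rest can't mirror it all.
    return min(total / 2, total - largest)

print(raid1_usable([4000, 4000]))        # → 4000.0 — classic mirror pair
print(raid1_usable([6000, 4000, 2000]))  # → 6000.0 — three unequal drives
print(raid1_usable([8000, 2000, 2000]))  # → 4000.0 — the big drive is the limit
```

The last case is the trap when you “replace the smallest drive with a much larger one”: half the big drive’s capacity can end up unusable until the others catch up.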
When that happens, it should of course alert you that it is time to upgrade your drives or add another. But it can also offer another option which you can explicitly ask for, namely reducing the redundancy on files which are rarely accessed, have not been used for a while, and have been backed up. It turns out that’s often a lot of the files on a disk. In particular, the thing that uses up most of the disk space for the ordinary user is their collection of photos and videos. Other than the few that get regular access, there is no actual need for RAID-level redundancy on these images. If their drive is lost, there is a backup where you can get them. They aren’t needed for regular system operation. The system already knows what files belong to the OS, and can keep those redundant, though most home users are not looking for 100% uptime; they really only want 100% data safety. To do this right, programs need to tell the OS why they are accessing files. Your photo organizer possibly scans your photo collection regularly, but this scan doesn’t make the files crucial to the system. My goal is not to have the users designate these things, though that is one option. Ideally the system should figure it out. The system can also take the most important files, the ones that cause the system to block, and make sure they are both redundantly stored and found on SSD.

Easier backup

Backup needs to be easy and automatic. When systems boot up, they should offer to do backup for others who are nearby and semi-nearby, and then they should trade backup space. My system should offer space to others, and make use of their space for either general backup (if in the same house/company/LAN) or offsite backup (remote but with good bandwidth). Of course, ISPs and other providers can also provide this space for money. The key thing is this should happen with almost no setup by the user.
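The per-file policy described above can be sketched as a simple classifier. The thresholds (180 idle days) and the tier names are invented for illustration; a real OS would derive these signals from access patterns and backup state rather than hard-coding them.

```python
# Sketch of a per-file redundancy policy: decide storage tier from access
# recency, backup state, and whether an app blocks on the file.
# Thresholds and tier names are illustrative assumptions.

import time

def redundancy_policy(last_access_epoch, backed_up, hot_for_app, now=None):
    now = now if now is not None else time.time()
    idle_days = (now - last_access_epoch) / 86400
    if hot_for_app:
        return "redundant+ssd"  # app blocks on this file: mirror it, cache on SSD
    if backed_up and idle_days > 180:
        return "single-copy"    # old and backed up: safe to drop redundancy
    return "redundant"          # default: keep two copies

now = 1_700_000_000
print(redundancy_policy(now - 400 * 86400, backed_up=True,
                        hot_for_app=False, now=now))  # → single-copy
```

A year-old, backed-up vacation photo drops to a single copy; the database file your editor blocks on stays mirrored and on SSD.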
One problem for me is that I can come back from a trip with 50GB of new photos, and they would clog my upstream for remote backup. The system should understand which files have priority, and if the backlog gets too large, request that I plug in an external USB drive to hold a backup until the backlog can be cleared. Otherwise I should not have to deal with it. Of course, the backup I offer others does not need RAID redundancy. Instead, I should be queried regularly to prove I still have the backups, and if not, the person I am backing up should seek another place. Of course all remote backup must be encrypted by me. In fact, all disks should be encrypted, but too much desire for security can cause risk of losing all your data. Systems must understand the reduced threat model of the ordinary user and make sure keys are backed up in enough places that the chances of losing them are nil, even if it increases the chance that the NSA might get the keys. This is actually pretty hard. The typical “What was your pet’s name” pseudo-security questions are not strong enough, but going stronger makes key loss more likely. Proposals such as my friendscrow can work if the system knows your social network. They have the advantage that there is zero UI to escrowing the key, and a lot of work to recover it. This is the ideal model, because if there is zero UI to storing it, you are sure it will be stored. Nobody minds extra work if they have lost all the normal paths to getting their key.

The future of transit is self-driving medium sized vehicles with no fixed routes or schedules

Most of our focus these days is on self-driving personal cars. In spite of that focus, the effects on mass transit will also be quite dramatic, in ways far beyond taking the driver out of the bus.
Indeed, for various reasons, I believe traditional approaches to mass transit (large vehicles on fixed routes and schedules, sometimes with private right-of-way) will be obsoleted by robocar technology, and that the result will be almost 100% good — transportation that is better, faster, more convenient and even more sustainable. (The latter shocks people, who think that anything with small vehicles is inherently less energy efficient.) I have a new special article outlining potential visions for the future of transit, and what they might mean. The vision is a work in progress, but I invite debate. Click for The Future of Mass Transit.

[Chart: total vehicle miles per year vs. average car lifetime in miles — people travel more in cars, vehicles run empty to reposition, and cars last longer.]

GM buys "Cruise" for $1B

General Motors has purchased “Cruise,” a small self-driving startup in San Francisco. Rumours suggest the price was over one billion dollars. In addition, other rumours have come to me suggesting that at least one other startup has been seeking a new round of funding at that valuation, but did not succeed due to the market downturn. I gave Cruise some small assistance when they were getting started, and wrote about them when they showed off their first prototype. Since then, Cruise, as expected, moved away from highway autopilot retrofit into making a proper robocar, and their test Leaf has been running around SF with 4 Velodyne LIDARs and other sensors for a while. Even in my wildest dreams, I did not imagine startup valuations this high, this soon. (Time to get my own startup going.) Let’s consider why. First, GM, as the world’s 2nd largest car company, is way behind on robocars. They were one of the first companies to announce a highway autopilot (called, ironically, “Super Cruise”) for the 2014 Cadillac.
However, they quickly pulled back on that announcement, and for the last few years have continued to delay it, recently announcing it would not even appear in the 2017 car, even though Mercedes, Tesla and several other companies had products like that. GM’s main academic partner was CMU. They sponsored Boss, the CMU team that won the DARPA Urban Challenge, headed by Chris Urmson (who now leads the Google car project). Recently, Uber moved into Pittsburgh in a big way and poached many of the top people from CMU for their project. This left GM with very little, a poor position for the world’s 2nd largest car company. Next, we have Kyle Vogt, founder of Cruise. Kyle was on the founding team for Justin.tv, and also for Twitch, which had a billion dollar acquisition — in other words, Kyle is not precisely hurting for money. He has not confirmed this to me, but I suspect when GM showed up at his door, he was not interested in joining a big car company, and his resources meant he was not in any hurry. I then presume GM took that as negotiation and bumped the price to where you would have to be crazy to say no. GM will let Cruise be independent, at least for now. That’s the only sane path. We’ll see where this goes.

Bloomberg (or another moderate) could have walked away with the Presidency due to Trump

Michael Bloomberg, a contender for an independent run for US President, has announced he will not run, though for a reason that just might be completely wrong. As a famous moderate (having been in both the Republican and Democratic parties) he might just have had a very rare shot at being the first independent to win since forever. Here’s why, and what would have to happen:

1. Donald Trump would have to win the Republican nomination. (I suspect he won’t, but it’s certainly possible.)
2. The independent would have to win enough electoral votes to prevent either the Republican or Democrat getting the 270 needed for a majority.
If nobody has a majority of the electoral college, the House picks the President from the top 3 electoral-vote winners (strictly, the House votes by state delegation, one vote per state). The House is Republican, so it seems pretty unlikely it would pick any likely Democratic Party nominee, and the Democrats would know this. Once they did, the Democrats would have little choice but to vote for the moderate, since they certainly would not vote for Trump. Now all it takes is a fairly small number of Republicans to bolt from Trump. Normally they would not betray their own party’s official nominee, but in this case, the party establishment hates Trump, and I think that some of them would take the opportunity to knock him out, and vote for the moderate. If enough join the Democrats and vote for the moderate, he or she becomes President. It would be different for the Vice President, chosen by the Senate. Trump would probably pick a mainstream Republican to mollify the party establishment, and that person would win the Senate vote easily. To be clear, here the independent can win even with only a small showing, just strong enough to split off some electors from both other candidates. Winning one big state could be enough, for example, if it was won from the candidate who would otherwise have won.

Google's crash is a very positive sign

Reports released reveal that one of Google’s Gen-2 vehicles (the Lexus) had a fender-bender (with a bus) with some responsibility assigned to the system. This is the first crash of this type — all other impacts have been reported as fairly clearly the fault of the other driver. This crash ties into an upcoming article I will be writing about driving in places where everybody violates the rules. I just landed from a trip to India, which is one of the strongest examples of this sort of road system, far more chaotic than California, but it got me thinking a bit more about the problems. Google is thinking about them too.
Google reports it just recently started experimenting with new behaviours, in this case when making a right turn on a red light off a major street where the right lane is extra wide. In that situation, it has become common behaviour for cars to effectively create two lanes out of one, with a straight-through group on the left, and right-turners hugging the curb. The vehicle code would have there be only one lane, and the first person not turning would block everybody turning right, who would find it quite annoying. (In India, the lane markers are barely suggestions, and drivers — who come in every width of vehicle you can imagine — dynamically form their own patterns as needed.) As such, Google wanted their car to be a good citizen and hug the right curb when doing a right turn. So it did, but found the way blocked by sandbags on a storm drain. So it had to “merge” back with the traffic in the left side of the lane. It did this when a bus was coming up on the left, and it made the assumption, as many would make, that the bus would yield and slow a bit to let it in. The bus did not, and the Google car hit it, but at very low speed. The Google car could probably have solved this with faster reflexes and a better read of the bus’ intent, and probably will in time, but more interesting is the question of what you expect of other drivers. The law doesn’t imagine this split lane or this “merge,” and of course the law doesn’t require people to slow down to let you in. But driving in many cities requires constantly expecting the other guy to slow down and let you in. (In places like Indonesia, the rules actually give the right-of-way to the guy who cuts you off, because you can see him and he can’t easily see you, so it’s your job to slow. Of course, robocars see in 360 degrees, so no car has a better view of the situation.)
While some people like to imagine that the important ethical questions for robocars revolve around choosing whom to kill in an accident, that’s actually an extremely rare event. The real ethical issues revolve around how to drive when driving involves routinely breaking the law — not once in 100 lifetimes, but once every minute. Or once every second, as is the case in India. To solve this problem, we must come up with a resolution, and we must eventually get the law to accept it the same way it accepts it for all the humans out there, who are almost never ticketed for these infractions. So why is this a good thing? Because Google is starting to work on problems like these, and you need to solve these problems to drive even in orderly places like California. And yes, you are going to have some mistakes, and some dings, on the way there, and that’s a good thing, not a bad thing. Mistakes in negotiating who yields to whom are very unlikely to involve injury, as long as you don’t involve things smaller than cars (such as pedestrians). Robocars will need to not always yield in a game of chicken, or they can’t survive on the roads. In this case, Google says it learned that big vehicles are much less likely to yield. In addition, it sounds like the vehicle’s confusion over the sandbags probably made the bus driver decide the vehicle was stuck. It’s still unclear to me why the car wasn’t able to abort its merge when it saw the bus was not going to yield, since the description has the car sideswiping the bus, not the other way around. Nobody wants accidents — and some will play this accident as more than it is — but neither do we want so much caution that we never learn these lessons. It’s also a good reminder that even Google, though it is the clear leader in the space, still has lots of work to do. A lot of people I talk to imagine that the tech problems have all been solved and all that’s left is getting legal and public acceptance.
There is great progress being made, but nobody should expect these cars to be perfect today. That’s why they run with safety drivers, and did even before the law demanded it. This time the safety driver also decided the bus would yield, and so let the car try its merge. But expect more of this as time goes forward. Their current record is not as good as a human’s, though I would be curious what the accident rate is for student drivers overseen by a driving instructor, which is roughly parallel to the safety driver approach. This is Google’s first caused accident in around 1.5M miles. It’s worth noting that sometimes humans solve this problem by making eye contact, to know if the other car has seen you. It turns out that robots can do that as well, because the human eye flashes brightly in the red and infrared when looking directly at you — the “red eye” effect of small flash cameras. And there are ways that cars could signal to other drivers, “I see you too,” but in reality any robocar should always be seeing all other parties on the road, and this would just be a comfort signal. A little harder to read would be gestures which show intent, like nodding or waving. These can be seen, though not as easily with LIDAR. It’s better not to need them.

Uber, Lyft and crew should replace public transit at night

[Photo: a dolmuş, the Turkish shared minibus]

Finally, there is the issue that this is too good. A ride in a private car vs. a late night transit bus, for the price of a bus? People will over-use it, and that would of course get the taxis angry, though there is no reason they could not participate, as they are all moving to supporting mobile-app hail. But the subsidy may be too expensive if people over-use it.

Fears confirmed on failure of fix to Hugo awards

Last year, I wrote a few posts on the attack on Science Fiction’s Hugo awards, concluding in the end that only human defence can counter human attack.
A large fraction of the SF community felt that one could design an algorithm to reduce the effect of collusion, which in 2015 dominated the nomination system. (It probably will dominate it again in 2016.) The system proposed, known as “E Pluribus Hugo,” attempted to defeat collusion (or “slates”) by giving each entry on a nomination ballot less weight when that ballot was doing very well and getting several of its choices onto the final ballot. More details can be found on the blog where the proposal was worked out. The proposal passed the first round of approval, but does not come into effect unless it is ratified at the 2016 meeting, and then it applies to the 2017 nominations. As such, the 2016 awards will be as vulnerable to the slates as before; indeed, there are vastly more slate nominators this year — presuming all those who joined last year to support the slates continue to do so. Recently, my colleague Bruce Schneier was given the opportunity to run the new system on the nomination data from 2015. The final results of that test are not yet published, but a summary was reported today in File 770, and the results are very poor. This is, sadly, what I predicted when I did my own modelling. In my models, I considered some simple strategies a clever slate might apply, but it turns out that these strategies may have been naturally present in the 2015 nominations, and as predicted, the “EPH” system only marginally improved the results. The slates still massively dominated the final ballots, though they no longer swept all 5 slots. I consider the slates taking 3 or 4 slots, with only 1 or 2 non-slate nominees making the cut, to be a failure almost as bad as the sweeps that did happen. In fact, I consider even a single nomination gained through collusion to be a failure, though there are obviously degrees of failure. As I predicted, a slate of the size seen in the final Hugo results of 2015 should be able to obtain between 3 and 4 of the 5 slots in most cases.
The new test suggests they could do this even with the much smaller slate group seen in the 2015 nominations. Another proposal — that there be only 4 nominations on each nominating ballot but 6 nominees on the final ballot — improves this. If the slates can take only 3, then 3 non-slate nominees probably make the ballot.

An alternative - Make Room, Make Room!

First, let me say I am not a fan of algorithmic fixes to this problem. Changing the rules — which takes 2 years — can only “fight the last war.” You can create a defence against slates, but it may not work against modifications of the slate approach, or other attacks not yet invented. Nonetheless, it is possible to improve the algorithmic approach to attain the real goal, which is to restore the award as closely as possible to what it was when people nominated independently: to allow the voters to see the top 5 “natural” nominees, and award the best one the Hugo, if one is worthy. The approach is as follows: when slate voting is present, automatically increase the number of nominees so that 5 non-slate candidates are also on the ballot along with the slates. To do this, you need a formula which estimates whether a winning candidate is probably present due to slate voting. The formula does not have to be simple, and it is OK if it occasionally identifies a non-slate candidate as being from a slate.

1. Calculate the top 5 nominees by the traditional “approval” style ballot.
2. If 2 or more pass the “slate test” — which tries to measure if they appear together on a disproportionate number of ballots — then increase the number of nominees until 5 entries do not meet the slate condition.

As a result, if there is a slate of 5, you may see the total pool of nominees increased to 10. If there are no slates, there would be only 5 nominees. (Ties for last place, as always, could increase the number slightly.)
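The expand-until-five-natural rule above can be sketched directly. Here `is_slate` is a placeholder for whatever slate-test formula is eventually chosen, and the candidate names are invented for the example:

```python
# Sketch of the "Make Room" rule: extend the nominee list until 5 entries
# fail the slate test. is_slate() stands in for the chosen formula.

def final_nominees(ranked_candidates, is_slate, base=5):
    nominees = []
    non_slate = 0
    for cand in ranked_candidates:  # candidates in order of nomination count
        nominees.append(cand)
        if not is_slate(cand):
            non_slate += 1
        if len(nominees) >= base and non_slate >= base:
            break
    return nominees

# Hypothetical ranking: S* flagged as slate picks, N* natural nominees.
ranked = ["S1", "S2", "S3", "N1", "S4", "N2", "S5", "N3", "N4", "N5", "N6"]
slate = {"S1", "S2", "S3", "S4", "S5"}
print(final_nominees(ranked, lambda c: c in slate))
```

With a full slate of 5 in the top spots, the ballot grows to 10 entries: all 5 slate picks plus the top 5 natural nominees, so nobody’s work is displaced; with no slates present, it stays at 5.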
Let’s consider the advantages of this approach:

• While ideally it’s simple, the slate test formula does not need to be understood by the typical voter or nominator. All they need to know is that the nominees listed are the top nominees.
• Likewise, there is no strategy in nominating. Your ballot is not reduced in strength if it has multiple winners. It’s pure approval.
• If a candidate is falsely identified as passing the slate test — for example, a lot of Doctor Who fans all nominate the same episodes — the worst thing that happens is we get a few extra nominees we should not have gotten. Not ideal, but pretty tame as a failure mode.
• Likewise, those promoting slates can’t claim their nominations were denied to them by a cabal or conspiracy.
• All the nominees who would have been nominated in the absence of slate efforts get nominated; nobody’s work is displaced.
• Fans can decide for themselves how they want to consider the larger pool of nominees. Based on 2015’s final results (with many “No Awards”) it appears fans wish to judge some works as being there unfairly and discount them. Fans who wish it would have the option of deciding for themselves which nominees are important, and acting as though those are all that was on the ballot.
• If it is effective, it gives the slates so little that many of them are likely to just give up. It will be much harder to convince large numbers of supporters to spend money to become members of conventions just so a few writers can get ignored Hugo nominations with asterisks beside them.

It has a few downsides, and a vulnerability:

• The increase in the number of nominees (only while under slate attack) will frustrate some, particularly those who feel a duty to read all works before voting.
• All the slate candidates get on the ballot, along with all the natural ones. The first is annoying, but it’s hardly a downside compared to having some of the natural ones not make it.
A variant could block any work that fits the slate test but scored below 5th, but that introduces a slight (and probably unneeded) bit of bias.
• You need a bigger area for nominees at the ceremony, and a bigger party, if they want to show up and be sneered at. The meaning of "Hugo Nominee" is diminished (but not as much as it's been diminished by recent events.)
• As an algorithmic approach it is still vulnerable to some attacks (one detailed below) as well as new attacks not yet thought of.
• In particular, if slates are fully coordinated and can distribute their strength, it is necessary to combine this with an EPH-style algorithm, or they can put 10 or more slate candidates on the ballot.

All algorithmic approaches are vulnerable to a difficult but possible attack by slates. If the slate knows its strength and knows the likely range of the top "natural" nominees, it can in theory choose a number of slots it can safely win, name only that many choices, and divide them up among supporters. Instead of having 240 people cast ballots with the same 3 choices, they can have 3 groups of 80 cast ballots for one choice only. No simple algorithm can detect that or respond to it, including this one. This is a more difficult attack than the current slates can carry off, as they are not that unified. However, if you raise the bar, they may rise to it as well.

All algorithmic approaches are also vulnerable to a less ambitious colluding group that simply wants to get one work on the ballot by acting together. That can be done with a small group, and no algorithm can stop it. This displaces a natural candidate and wins a nomination, but probably not the award. Scientologists were accused of doing this for L. Ron Hubbard's work in the past.

What formula?

The best way to work out the formula would be through study of real data with and without slates.
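To make such a study concrete, here is a hedged sketch of one family of tests: flag pairs of popular candidates that appear together on far more ballots than independent nominating would predict. The 5% support floor and the 3x ratio are illustrative assumptions, not calibrated values.

```python
from itertools import combinations

def flag_slate_pairs(ballots, min_support=0.05, ratio=3.0):
    """ballots: list of sets of candidate names, one set per nominating ballot.
    Returns the set of candidates in suspiciously correlated pairs."""
    n = len(ballots)
    count = {}
    for ballot in ballots:
        for cand in ballot:
            count[cand] = count.get(cand, 0) + 1
    # Only consider candidates present on more than `min_support` of ballots.
    common = [c for c in count if count[c] / n > min_support]
    flagged = set()
    for a, b in combinations(common, 2):
        together = sum(1 for ballot in ballots if a in ballot and b in ballot)
        # Expected co-occurrences if the two were nominated independently.
        expected = count[a] * count[b] / n
        if together > ratio * expected:
            flagged.update((a, b))
    return flagged
```

Real thresholds would have to come from analysis of genuine nominating data, with and without slates.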
One candidate would be to take all nominees present on more than 5% of ballots, and pairwise compare them to find out what fraction of the time the pair are found together on ballots, then detect pairs which are together a great deal more often than that. How much more would be learned from analysis of real data. Of course, the slates will know the formula, so it must be difficult to defeat even by those who know it. As noted, false positives are not a serious problem if they are uncommon. False negatives are worse, but still better than the alternatives.

So what else?

At the core is the idea of providing voters with information on who the natural nominees would have been, and allowing them to use the STV voting system of the final ballot to enact their will. This was done in 2015, but simply to give No Award in many of the categories — it was necessary to destroy the award in order to save it. I believe there is a reason why every other system (including WSFS site selection) uses a democratic process, such as write-ins, to deal with problems in nominations. Democratic approaches use human judgment, and as such they are a response not just to slates, but to any attack.

I believe a better system is to publish a longer list of nominees — 10 or more — but to publish them sorted according to how many nominations they got. This allows voters to decide what they think the "real top 5" was, and to vote on that if they desire. Because a slate can't act in secret, this is robust against slates, and even against the "slate of one" described above. Revealing the sort order is a slight compromise, but a far lesser one than accepting that most natural nominees are pushed off the ballot.

The advantages of this approach:

• It is not simply a defence against slates; it is a defence against any effort to corrupt the nominations, as long as it is detected and fans believe it.
• It requires no algorithms or judgment by officials. It is entirely democratic.
• It is completely fair to all comers, even the slate members.

The downsides are:

• As above, there are a lot more nominees, so the meaning of being a nominee changes.
• Some fans will feel bound to read/examine more than 5 nominees, which produces extra work on their part.
• The extra information (the sorting order) was never revealed before, and may have subtle effects on voting strategy. So far, this appears to be pretty minor, but it's untested. With STV voting, there is about as little strategy as can be. Some voters might be very slightly more likely to rank a work that sorted low in first place, to bump its chances, but really, they should not do that unless they truly want it to win — in which case it is always right to rank it first.
• It may be necessary to add EPH-style counting if slates reach a high level of coordination.

Human judgment

Another surprisingly strong approach would be simply to add a rule saying, "The Hugo Administrators should increase the number of nominees in any category if their considered analysis leaves them convinced that some nominees made the final ballot through means other than the nominations of fans acting independently, adding one slot for each work judged to fail that test, but adding no more than 6 slots."

This has tended to be less popular, in spite of its simplicity and flexibility — it even deals with single-candidate campaigns — because some fans have an intense aversion to any use of human judgment by the Hugo administrators.

• Very simple (for voters at least).
• Very robust against any attempt to corrupt the nominations that the admins can detect. So robust that it makes it not worth trying to corrupt the nominations, since that often costs money.
• Does not require constant changes to the WSFS constitution to adapt to new strategies, nor give new strategies a 2-year "free shot" before the rules change.
• If administrators act incorrectly, the worst they do is briefly increase the number of nominees in some categories.
• If there are no people trying to corrupt the system in a way admins can see, we get the original system we had before, in all its glory and flaws.
• The admins get access to data which can't be released to the public to make their evaluations, so they can be smarter about it.

The downsides:

• It is clearly a burden for the administrators to do a good job and act fairly.
• People will criticise and second-guess. It may be a good idea to have a post-event release of any methodology, so people learn what to do and not do.
• There is the risk of admins acting improperly. This risk is already present, of course, but traditionally administrators have wanted to exercise very little judgment.

Will bed-bound seniors experience the world through VR telepresence robots?

I've written before about my experiences inhabiting a telepresence robot. I did it again this weekend to attend a reunion, with a different robot that's still in prototype form. I've become interested in the merger of virtual reality and telepresence. The goal would be to have VR headsets and telepresence robots able to transmit video to fill them.

That's a tall order. On the robot you would have an array of cameras able to produce a wide-field view — perhaps an entire hemisphere, or of course the full sphere. You want it in high resolution, so this is actually a lot of camera. The lowest-bandwidth approach would be to send just the field of view of the VR glasses in high resolution, or just a small amount more. You would send the rest of the hemisphere in very low resolution. If the user turned their head, you would need to send a signal to the remote to change the viewing box that gets high resolution. As a result, if you turned your head, you would see the new field, but very blurry, and after some amount of time — the round-trip time plus the latency of the video codec — your view would start to sharpen. Reports on doing this say it's pretty disconcerting, but more research is needed.
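A rough back-of-envelope, sketched in Python, shows why this approach still lands in the hundreds-of-megabits range. The resolutions, frame rate, and compressed bits-per-pixel figures are illustrative assumptions, not measurements:

```python
def stream_mbps(view_px, rest_px, fps=60, bpp_hi=0.1, bpp_lo=0.02):
    """Megabits/s for a high-res viewing box plus a low-res remainder.
    bpp_hi / bpp_lo are assumed compressed bits per pixel."""
    return (view_px * bpp_hi + rest_px * bpp_lo) * fps / 1e6

viewport = 2 * 3840 * 2160   # 4K per eye, both eyes in high resolution
rest = 8000 * 4000           # assumed pixel budget for the rest of the hemisphere
print(round(stream_mbps(viewport, rest)), "Mbps")  # on the order of 140 Mbps
```

Even with aggressive viewport-only streaming, a lifelike feed needs a fiber-class connection; sending the whole hemisphere at high resolution would multiply this several times over.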
At the next level, you could send a larger region in high-def, at the cost of bandwidth. Then short movements of the head would still be good quality, particularly the most likely movements, which are side-to-side turns of the head. It might be acceptable if looking up or down is blurry, but looking left and right is not. And of course, you could send the whole hemisphere, allowing most head motions but requiring a great deal of bandwidth — at least by today's standards; in the future such bandwidth will be readily available. If you want to look behind you, you could just have cameras capturing the full sphere, and that would be best, but it's probably acceptable to have servos move the camera, and also to not send the rear information. It takes time to turn your head, and that's time to send signals to adjust the remote parameters or camera.

Still, all of this is more bandwidth than most people can get today, especially if we want lifelike resolution — 4K per eye or probably even greater. Hundreds of megabits. There are fiber operators selling such bandwidth, and Google Fiber sells it cheap. It does not need to be symmetrical for most applications — more on that later.

Surrogates, etc.

At this point, you might be thinking of the not-very-exciting Bruce Willis movie "Surrogates," where everybody just lay in bed all day controlling surrogate robots that were better-looking versions of themselves. Those robot bodies passed on not just sight and sound but touch and smell and taste — the works — via a neural interface. That's science fiction, but a subset could be possible today.

Local robots

One place you can easily get that bandwidth is within a single building, or perhaps even a town. Within a short distance, it is possible to get very low latency, and in a neighbourhood you can get millisecond latency from the network.
Low latency from the video codec means less compression in the codec, but that can be attained if you have lots of spare megabits to burst when the view moves, which you do.

So who would want to operate a VR robot that's not that far from them? The disabled, and in particular the bedridden, which includes many seniors at the end of their lives. Such seniors might be trapped in bed, but if they can sit up and turn their heads, they could get a quality VR experience of the home they live in with their family, or the nursing home they move to. With the right data pipes, they could also be in a nursing home but get a quality VR experience of being in the homes of nearby family. They could have multiple robots in houses with stairs, to easily "move" from floor to floor. What's interesting is that we could build this today, and soon we can build it pretty well.

What do others see?

One problem with using VR headsets with telepresence is that a camera pointed at you sees you wearing a giant headset. That's of limited use. Highly desired would be software that, using cameras inside the headset looking at the eyes, and a good captured model of the face, digitally removes the headset in a way that doesn't look creepy. I believe such software is possible today with the right effort. It's needed if people want VR-based conferencing with real faces.

One alternative is to instead present an avatar that doesn't look fully real, but which offers all the expression of the operator. This is also doable, and Philip Rosedale's "High Fidelity" business is aimed at just that. In particular, many seniors might be quite pleased at having an avatar that looks like a younger version of themselves, or even just a cleaned-up version at their present age.

Another alternative is to use fairly small and light AR glasses. These could be small enough that you don't mind seeing the other person wearing them, and you are able to see their eyes' direction, at most behind a tinted screen.
That would provide less of a sense of being there, but also might provide a more comfortable experience.

For those who can't sit up, experiments are needed to see if a system can be made that isn't nausea-inducing, as I suspect wearing VR that shifts your head angle will be. Anybody tried that?

Of course, the bedridden will be able to use VR for virtual-space meetings with family and friends, just as the rest of the world will use them — such meetings still have these problems, but you don't need a robot in that case. The robot, though, gives you control of what happens on the other end. You can move around the real world, and it makes a big difference.

Such systems might include some basic haptic feedback, allowing things like handshakes or basic feelings of touch, or even a hug. Corny as it sounds, people do interpret being squeezed by an actuator with emotion if it's triggered by somebody on the other side. You could build the robot to accept a hug (arms around the screen) and activate compressed-air pumps to squeeze the operator — this is also readily doable today.

Barring medical advances, many of us may sadly expect to spend some of our last months or years bedridden, or housebound in a wheelchair. Perhaps we will adopt something like this, or something even grander. And of course, even the able-bodied will be keen to see what can be done with VR telepresence.

Deadlines approaching for Singularity U summer program and accelerator

The highlight and founding program of Singularity University, where I am chair of computing, is our summer program, now known as the Global Solutions Program. 80 students come from all over the world (only a tiny minority will be from the USA) to learn about the hottest rapidly changing technologies, and then join together with others to kickstart projects that have the potential to use those technologies to solve the world's biggest problems.
This year is the 2nd year of a Google scholarship program, which means the program is free for those who are accepted. About 50 slots go to those scholarships; the other 30 go to winners of national competitions to attend. You can apply both ways. That means you can expect a class of great rising and already-risen stars. I don't like to exaggerate, but almost everybody who goes through it finds it life-changing. If you are at a point where you are ready to do something new and big, and you want to understand how technology that keeps changing faster and faster works and how it can change the world and your world, look into it. Learn about it and apply.

Also closing on Feb 19 is our accelerator program for existing or nascent startups. Accepted applicants get $100K in seed funding, office space at NASA Research Park, and more through our network. You can read about it or apply.
B cell

B cells are lymphocytes that play a large role in the humoral immune response (as opposed to the cell-mediated immune response, which is governed by T cells). The principal functions of B cells are to make antibodies against antigens, perform the role of Antigen Presenting Cells (APCs) and eventually develop into memory B cells after activation by antigen interaction. B cells are an essential component of the adaptive immune system. The abbreviation "B", in B cell, comes from the bursa of Fabricius in birds, where they mature. In mammals, immature B cells are formed in the bone marrow.
A "capitol" is almost always a building. Cities which serve as seats of government are capitals, spelled with an A in the last syllable, as are most other uses of the word as a common noun. The only exceptions are place names alluding to capitol buildings in some way or other, like "Capitol Hill" in DC, Denver, or Seattle (the latter named either after the hill in Denver or in hopes of attracting the Washington State capitol building). Would it help to remember that Congress with an O meets in the Capitol with another O?
Ape and Essence Test | Mid-Book Test - Hard

Name: _________________________ Period: ___________________

Short Answer Questions

2. Where did William Tallis prefer to be buried?
3. Which is NOT part of the ceremony held to safeguard against having deformed babies?
4. What event resulted in the destruction of a majority of mankind in the screenplay?

Short Essay Questions

1. What is Bob Briggs's appraisal of Gandhi?
2. What is the importance of where Tallis preferred to be buried?
3. What is the New Zealand Rediscovery Expedition, and what is its mission?
4. Why is the Chief concerned about the present mating season?
5. What does Bob Briggs ask Lou Lublin for, and why? How does this relate to a religious painting?
6. Why are the gravediggers robbing graves?
7. How do the Chief and Dr. Poole clash over books at Pershing Square?
8. Why does the Chief order his gravediggers to bury Dr. Poole?
9. Describe the Chief's conception of "Democracy."
10. How does Bob Briggs help Rosie?

Essay Topics

Write an essay for ONE of the following topics:

Essay Topic 1

Ape and Essence presents a dystopic view of the future. What is a dystopia? What are the features of such a place? What specific aspects of Ape and Essence are dystopic? Why might an author depict such a dystopia?

Essay Topic 2

What is the role of Science in Ape and Essence? How does science drive the plot? Character motivations? How does Huxley regard the pursuit of Science? Is science a danger as well as a potential boon? Explain.

Essay Topic 3

The poetry of Percy Shelley is a crucial element in the novel. Examine the passages of Shelley's poetry engraved on William Tallis's gravestone. What does this poetry mean? Why was it chosen for the tombstone? What overall message should the reader take away from this poetry?
Coins of England and Great Britain ('Coins of the UK') by Tony Clayton

The Halfpenny

Silver Halfpence

The earliest halfpence were minted by Viking and Wessex kings before the creation of an English nation. These coins, and those of the later Saxon kings, are generally extremely rare. After the Norman Conquest, and prior to the reign of Henry I (1100-1135), halfpence were produced by cutting the silver penny in half. Eventually, however, coins half the weight of the penny were produced. Those of the reigns of Henry I and Henry III have only been discovered during the last few years, and it is not until the reign of Edward I (1272-1307) that the denomination came into general use. The last silver halfpence were produced during the Commonwealth after the Civil War, and had become tiny coins a mere 9-10 mm in diameter.

Copper Halfpence

The first copper halfpenny was minted in 1672 during the reign of Charles II, and the coin was also minted in 1673 and 1675.

Tin Halfpence

From 1685 to 1692 halfpence were minted in tin with a copper plug at the centre, in the same manner as the US trial silver-centre cent. The coins had no date on the obverse or reverse — you have to look on the edge, and with worn examples the date can become unreadable. In addition, the coins tended to corrode readily. This is due to two main effects: the first is the presence of two dissimilar metals, which has an electrochemical effect, and the other is the fact that the metallic form of tin is unstable at low temperatures, turning into a non-metallic form known as grey tin. This latter process accelerates the lower the temperature. The metal was used to deter counterfeiting (a problem with copper) and to encourage the tin industry.
Copper Halfpence (again)

In 1694 the decision was made to revert to copper for the halfpenny, and an attractive pattern of that date is known. The issued coins had a similar design to the earlier tin issues, but with the date on the reverse in the exergue. During the reign of William III the quality of these coins deteriorated, with some being cast rather than struck. So many of these coins were in circulation that none were struck during the reign of Queen Anne (1702-1714), although an undated pattern halfpenny is known, with the Queen portrayed as Britannia on the reverse.

George I

Inevitably a shortage followed in due course, and a new issue was made in 1717, often called the "dump" issue as the coins were smaller and thicker than before. In 1719 the coin reverted to the previous dimensions, and the issues of George I continued until 1724.

George II

The issues of George II from 1729 to 1754 are very common, with a more elderly portrait being used from 1740. After 1754 none were struck until 1770, and the majority of halfpence in circulation seem to have been forgeries.

George III

The first type, similar to that of George II, was issued from 1770 until 1775, and then there was a 24-year gap before Matthew Boulton struck a larger coin at the Soho Mint in 1799. The edge is interesting, as it has an incuse pattern around the centre of the entire circumference. There had been plans to strike halfpence and farthings in the cartwheel style, but the government was worried that this would stimulate a demand that Boulton would be unable to fulfil. Soho patterns are known. Further Boulton coins were struck in 1806-7, although these were somewhat smaller. The Great Recoinage took place around 1816, and priority was given to gold and silver, so no further George III halfpence were struck.
Halfpenny Trade Tokens

During the period 1787 to 1797, and again between 1811 and 1812, many private trade tokens were manufactured to fill the gap left by the absence of official small change. A discussion of these pieces is beyond the scope of this web site, but very worn tokens are so common that they have little value.

George IV

Halfpence were next issued after 14th November 1825. The new coins were smaller still, the diameter of 28mm and weight of around 9.3 grams continuing until 1860. The design on the reverse continued to be a seated Britannia, although there is no indication of value until the bronze issues of 1860. The halfpence of George IV are of a single main type issued from 1825 to 1827, although there is a scarce die variety of the 1826 coin with one raised line down the arms of the saltire rather than the usual two incuse lines.

William IV

The halfpence of William IV are of a single type with a similar reverse to that of George IV, issued only in 1831, 1834 and 1837.

Victoria

All copper halfpence of Victoria have the following design:

Obverse: Young head left, VICTORIA DEI GRATIA around, date below.
Reverse: Britannia seated facing right, BRITANNIAR: REG: FID: DEF: around, shamrock, rose and thistle below.

They are similar in design to the pennies and farthings, although there are subtle differences; in photographs, halfpence can be distinguished by the relatively large lettering on the obverse. There are two reverse types, and 1851, 1852 and 1857 exist with both reverses. Overdates also exist: 1848 over 7, 1853 over 2 (rare), 1858 over 6, 1858 over 7, and 1859 over 8. No halfpence were issued in 1840. Farthings of that date are frequently reported as apparently rare halfpence, because the coin's size is much larger than that of the later bronze issues. The copper halfpence of 1860 were never issued for circulation and are very rare.
They are worth as much as 1000 UK pounds in Fine condition. The 1845 issue is also very rare.

Bronze Halfpence

In 1860 all the copper coins were redesigned in a smaller size and were made of bronze rather than copper, as the latter did not wear well. For the first time the denomination appeared on the reverse. The design lasted until 1894, with issues every year. The obverse shows what is called the Bun portrait of Victoria facing left, with the inscription VICTORIA D:G: BRITT:REG:F:D:. The reverse shows Britannia seated facing right holding a trident and shield, with a lighthouse behind and ship in front, with the inscription HALF PENNY, and the date below in the exergue. The mintmark H, if present, is found centrally below the date, up against the rim, as for the farthing. The new coins had a diameter of 25 mm and weighed about 5.7g, a size which remained the same until 1970. The design of Queen Victoria's head gradually and subtly changes as the years pass, reflecting her ageing. The obverse and reverse dies used for circulating coins, and their known pairings, are catalogued by Peck; that information is given as a guide only, and for more detailed information, along with illustrations, consult a copy of Peck or Freeman.

Very rare halfpence dated 1862 are known with a die letter (A, B, or C) to the left of the lighthouse. In 1874 and 1875 some coins were struck at the Heaton Mint, Birmingham, and can be distinguished by a small H under the date. In 1876 all halfpence were minted at the Heaton Mint due to a breakdown in machinery at the Royal Mint. Further issues from the Heaton Mint were made in 1881 and 1882.

In 1895 the design was changed, with the portrait showing a veiled head of Queen Victoria to bring it into line with the portrait used on the silver coinage, with the inscription VICTORIA DEI GRA: BRITT: REGINA FID: DEF: IND: IMP:. There are no obverse variations between 1895 and 1901.
The reverse is still Britannia, but without the ship and lighthouse. During 1896 the reverse design was slightly enlarged, so that the distance between the exergual line and the top of the helmet plume increases from 20.3 mm to 20.7 mm. There are also two varieties of the reverse in the 1897 issue, differing in a number of respects, mainly the height of the sea above the exergual line (3.35 mm instead of 3.0 mm); a good indicator is that in the second reverse the shield is almost in contact with the border teeth instead of being well clear.

Edward VII

Issued from 1902 to 1910, all have the following basic design:

Reverse: Britannia seated facing right, HALF PENNY around, date below, as for the Victoria Old Head design.

The first issues of Edward VII in 1902 have the sea level as for the 1898-1901 Victorian halfpence (3.0 mm from exergual line to the sea, which meets Britannia's legs below where they cross), and are known as the Low Tide variety. During the year the design was changed to show a higher sea level (4.0 mm from exergual line to the sea, which meets Britannia's legs where they cross), and this new style continued for the rest of the reign. There were no other significant die changes.

George V

Issued from 1911 to 1936, all have the following basic design:

Reverse: Britannia seated facing right, HALF PENNY around, date below, initially as for Edward VII.

The remainder of the series shows no exciting rarities or even scarce dates. The obverse was modified twice during the reign of George V, during 1925 and in 1928, when the head was made noticeably smaller, with a single change of reverse in 1925. The two changes in 1925 took place at the same time, and no mules have been recorded.

Edward VIII

On the accession of Edward VIII a new design of reverse was produced showing the Golden Hind, the ship used by Sir Francis Drake, the noted Elizabethan sailor. This was only struck as a pattern, and no circulation issues were made.
George VI

The design of the Golden Hind was retained for the issues of George VI. There are minor variations in the design from one year to another, which specialist collectors are interested in. However, the changes did not take place during any particular year's issue.

First Type

Issued from 1937 to 1948:

Obverse: Head left, GEORGIVS VI D: G: BR: OMN: REX F: D: IND IMP around.
Reverse: Ship (the Golden Hind) heading left, HALF PENNY above, date below.

Second Type

Issued from 1949 to 1952, with proofs from sets in 1950 and 1951:

Obverse: Head left, GEORGIVS VI D: G: BR: OMN: REX FIDEI DEF around.

Elizabeth II

The reverse design remained as for George VI, with the Golden Hind, and variations continued through the reign of Elizabeth II. Three obverse and eight reverse variations are listed in Peck; later issues show further variations but will require further research. There are thus collectable varieties for 1953 (2), 1954 (2), 1957 (2) and 1958 (3). Those for 1957 are the most distinctive. I have no information regarding issues after 1963, although it is clear that further die changes took place. No halfpence were issued dated 1961. The last regular issues were dated 1967, although proofs dated 1970 were made for the 'Last £sd' sets. The coin was in fact demonetized on 1st August 1969, so the 1970 coin was never legal tender.

English Pronunciation of Halfpenny

This coin was not usually called a 'half penny', nor was the plural usually said as 'half pence'. The usual pronunciation was 'hayp-knee' for a single coin (with subtle variations depending on where in England you lived), or 'hay-punce' in the plural, as in 'three halfpence'.

Coins of the UK - Halfpennies
Copyright reserved by the author, Tony Clayton
v49 4th March 2015
Postwar Broadcasting

Kinescope Recording

In September 1947, Eastman Kodak introduced the Eastman Television Recording Camera, in cooperation with DuMont and NBC, for recording images from a television screen under the trademark "Kinephoto". Prior to the introduction of videotape in 1956, kinescopes were the only way to record television broadcasts, or to distribute network television programs that were broadcast live from New York or other originating cities to stations not connected to the network, or to stations that wished to show a program at a different time than the network broadcast. Although the quality was less than desirable, television programs of all types, from prestigious dramas to regular news shows, were handled in this manner. NBC, CBS, and DuMont set up their main kinescope recording facilities in New York City, while ABC chose Chicago. By 1951, NBC and CBS were each shipping out some 1,000 16mm kinescope prints each week to their affiliates across the United States, and by 1955 that number had increased to 2,500 per week for CBS. By 1954 the television industry's film consumption surpassed that of all of the Hollywood studios combined. In 1953, General Precision Laboratories introduced its kinescope system.
Can individual extreme events be explained by greenhouse warming?

Changes in climate extremes are expected as the climate warms in response to increasing atmospheric greenhouse gases resulting from human activities, such as the use of fossil fuels. However, determining whether a specific, single extreme event is due to a specific cause, such as increasing greenhouse gases, is difficult, if not impossible, for two reasons:

1. extreme events are usually caused by a combination of factors;
2. a wide range of extreme events is a normal occurrence even in an unchanging climate.

Nevertheless, analysis of the warming observed over the past century suggests that the likelihood of some extreme events, such as heat waves, has increased due to greenhouse warming, and that the likelihood of others, such as frost or extremely cold nights, has decreased. For example, a recent study estimates that human influences have more than doubled the risk of a very hot European summer like that of 2003.

European Environment Agency (EEA)
Harry Ransom Center, The University of Texas at Austin

Video demonstrating Gutenberg's printing process. He may have used a different method of casting the printing type.

There are no images of Gutenberg's print shop, but this indicates what a typical print shop may have looked like in the years after Gutenberg's printing.

A modern recreation of Gutenberg's type. The typecasting process he used was probably substantially different from the one used to form these letters. B-42 Blackletter type, ©2000 Dale Guild Type Foundry.

Old Testament: Iosua [Joshua], Iudicum [Judges], pages 114 verso and 115 recto.

Before beginning work on the Bible around 1450, Gutenberg experimented with printing single sheets of paper and even small books, including a simple Latin grammar textbook. To this end, he created a printing press and developed a method of casting individual pieces of metal type. Gutenberg's press was made of wood and might have been modeled on winepresses of his time. His type was made of a metal alloy that would melt at a low temperature but was strong enough to withstand being squeezed in a press. It was long thought that Gutenberg had originated the punch-matrix-mold system of typecasting used for centuries by subsequent typemakers, as demonstrated in the video. Recent research, however, indicates that he may have used a cruder sand-casting system, in which the character is carved into the sand and the metal alloy is poured into this mold to create the type piece. This process would have been a long and laborious one, because nearly 300 different pieces of type are used in the Bible, each one requiring its own sand-cast mold. The exact number of presses in Gutenberg's shop is unknown, but his large production indicates that more than one press was used.
A skilled typesetter selected the individual characters of type for each line of the text and set them backwards in a frame, from right to left, so that the text would read correctly when printed. The frame was then placed on the bed of the press, where ink was applied to the type. The sheet of paper was slightly moistened before being placed over the type and frame, and then a stout pull by the pressman pushed the paper down onto the ink and type, completing the printing process. No one knows exactly how many copies of the Bible were printed, but it is estimated that between 160 and 180 copies were produced. Most were printed on paper and the rest on vellum, or scraped calfskin, a more expensive material. Although the original cost of the book is not known, most copies were likely purchased by wealthy churches and monasteries.
Naegleria (nay-GLEER-e-uh) infection is a rare and usually fatal brain infection caused by an amoeba commonly found in freshwater lakes, rivers and hot springs. Exposure occurs during swimming or other water sports. Millions of people are exposed to the amoeba that causes naegleria infection each year, but only a handful of them ever get sick from it. Health officials don't know why some people develop naegleria infection while others don't. Avoiding warm bodies of fresh water and wearing nose clips while in the water may help prevent such infections.

Naegleria infection causes a disease called primary amebic meningoencephalitis (muh-ning-go-un-sef-uh-LIE-tis), which involves brain inflammation and destruction of brain tissue. Generally beginning within two to 15 days of exposure to the amoeba, signs and symptoms of naegleria infection may include:

• A change in the sense of smell or taste
• Fever
• Sudden, severe headache
• Stiff neck
• Sensitivity to light
• Nausea and vomiting
• Confusion
• Loss of balance
• Sleepiness
• Seizures
• Hallucinations

These signs and symptoms can progress rapidly. They typically lead to death within a week.

The amoeba isn't spread from person to person or by drinking contaminated water, and properly cleaned and disinfected swimming pools don't contain the naegleria amoeba. In the United States, millions of people are exposed to the amoeba that causes naegleria infection each year, but few people get sick from it. From 2005 to 2014, 35 infections were reported. Some factors that might increase your risk of naegleria infection include:

• Age. Children and young adults are the most likely age groups to be affected, possibly because they're likely to stay in the water longer and are more active in the water.

When seeing a doctor, be prepared to answer questions such as:

• What are the signs and symptoms?
• When did they start?
• Does anything make them better or worse?
• Has the person been swimming in fresh water within the past two weeks?
Diagnosis may involve imaging tests and a spinal tap (lumbar puncture). The CDC suggests that the following measures may reduce your risk of naegleria infection:

• Don't swim in or jump into warm freshwater lakes and rivers.

July 16, 2015
Sometimes you can't think of the word you need to express yourself precisely, as when you say, "His explanation is hard to believe" when you might have said, "His explanation is implausible." How broad or limited is your vocabulary? Here's a simple assessment. Fill in the blank: "I like many types of music, literature and people. My tastes are _________." If you thought of the word eclectic, I would say you have a broad vocabulary; if you didn't, you have a limited one. Whatever your vocabulary range, here's how to expand it: 1. Read. The easiest and most pleasurable way to expand your vocabulary is to read good authors whose style you enjoy and admire. If they use a word that surprises, delights or confounds you, jot it down. If you don't know the word, look it up as soon as it's convenient, preferably while you still recall the context. If you do, you'll find yourself using more and more of the 500,000 words available to you in the English language, perhaps moving beyond the 10,000-20,000 word vocabulary of the nonreader to the 20,000-40,000 plus word vocabulary of the reader. 2. Use a hard-copy dictionary. In addition to the built-in dictionaries of e-readers, I recommend you use both online and hard-copy dictionaries. Online dictionaries are quick and convenient, but you enter and exit at the point of contact. Hard-copy dictionaries, especially illustrated dictionaries like the American Heritage Dictionary, invite you to browse. They appeal to your curiosity. 3. Move words from your comprehensive to your expressive vocabulary. You possess two sets of vocabulary: a larger set made up of words you understand (at least vaguely) and a smaller set you know well enough to use. These two vocabulary ranges are called your comprehensive (or passive) vocabulary and your expressive (or active) vocabulary. To move a word from your larger comprehensive range to your smaller expressive range, you need to know three things: how to define, pronounce and spell it. 
As you work to move a word into your expressive vocabulary, engage your muscle memory. Say the word out loud. Move your mouth. And then look for occasions to use it. 4. Be systematic. If you're serious about improving your vocabulary, set a specific goal. Learn one new word a week, maintain a list and review your list weekly or at least monthly. Look away from your list and see how many words you can write or recite from memory. To take it one step further, write down the sentences in which you heard the words. Make up a shorter list of favorite words, ones you'll keep at the top of your mind and look for opportunities to use judiciously. 5. Use online resources. For some helpful word lists and vocabulary-building exercises, google "wilbers vocabulary." Finally, think of every new word you learn as a personal triumph. Give yourself a pat on the back, buy yourself a chocolate sundae, wolf down a Big Mac with fries or go for a celebratory run. Stephen Wilbers offers training seminars in effective business writing.
Children's, Intermediate and Advanced Online English Dictionary & Thesaurus Dictionary Suite Multi-word Results band saw a power saw consisting of an endless metal belt, with teeth along one edge, that runs between two pulleys. band shell an outdoor bandstand consisting of a platform and a large, concave, almost hemispherical wall that serves as a sounding board. Band-Aid (Trademark) A Band-Aid brand bandage is a strip of tape that holds a gauze pad. It is used to cover small wounds. [3 definitions] big band a large jazz or dance band, esp. one of the 1930s and 1940s in the U.S., that combines arranged ensemble playing with solo improvisations. [2 definitions] brass band a musical group or band composed mostly of brass instruments. steel band a musical group, of a type originating in Trinidad, whose instruments are chiefly steel oil drums cut to varying heights. string band a musical group consisting of violin, guitar, string bass, and the like, usu. performing folk or country music. wave band a specific range of radio frequencies, such as those assigned for radio and television transmission.
Literacy test From Wikipedia, the free encyclopedia This article is about citizenship and voting eligibility tests in the United States. For the standardized test given in Ontario high schools, see Ontario Secondary School Literacy Test. Editorial cartoon from the January 18, 1879, issue of Harper's Weekly criticizing the use of literacy tests. It shows "Mr. Solid South" writing on wall, "Eddikashun qualifukashun. The Blak man orter be eddikated afore he kin vote with us Wites, signed Mr. Solid South." A literacy test assesses a person's literacy skills: their ability to read and write. Literacy tests have been administered by various governments to immigrants, and in the United States between the 1850s[1] and 1960s, literacy tests were also administered to prospective voters and used to disenfranchise racial minorities. From the 1890s to the 1960s, many state governments in the Southern United States administered literacy tests to prospective voters, purportedly to test their literacy in order to vote. In practice, these tests were intended to disenfranchise racial minorities. Southern state legislatures employed literacy tests as part of the voter registration process starting in the late 19th century. Literacy tests, along with poll taxes and extra-legal intimidation,[2] were used to deny suffrage to African Americans. The first formal voter literacy tests were introduced in 1890. At first, whites were generally exempted from the literacy test if they could meet alternate requirements (the grandfather clause) that, in practice, excluded blacks. The grandfather clause allowed an illiterate person to vote if he could show descent from someone who was eligible to vote before 1867 (when most African Americans were slaves or otherwise ineligible to vote). Grandfather clauses were ruled unconstitutional by the United States Supreme Court in the case of Guinn v. United States (1915). In Lassiter v. 
Northampton County Board of Elections (1959), the U.S. Supreme Court held that literacy tests were not necessarily violations of the Equal Protection Clause of the Fourteenth Amendment or of the Fifteenth Amendment. Southern states abandoned the literacy test only when forced to do so by federal legislation in the 1960s. The Civil Rights Act of 1964 provided that literacy tests used as a qualification for voting in federal elections be administered wholly in writing and only to persons who had completed six years of formal education. In part to curtail the use of literacy tests, Congress enacted the Voting Rights Act of 1965. The Act prohibited jurisdictions from administering literacy tests to citizens who attained a sixth-grade education in an American school in which the predominant language was Spanish, such as schools in Puerto Rico.[3] The Supreme Court upheld this provision in Katzenbach v. Morgan (1966). Although the Court had earlier held in Lassiter that literacy tests did not violate the Fourteenth Amendment,[4] in Morgan the Court held that Congress could enforce Fourteenth Amendment rights—such as the right to vote—by prohibiting conduct it deemed to interfere with such rights, even if that conduct may not be independently unconstitutional.[5][6] As originally enacted, the Voting Rights Act also suspended the use of literacy tests in all jurisdictions in which less than 50% of voting-age residents were registered as of November 1, 1964, or had voted in the 1964 presidential election. In 1970, Congress amended the Act and expanded the ban on literacy tests to the entire country.[7] The Supreme Court then upheld the ban as constitutional in Oregon v. Mitchell (1970). The Court was deeply divided in this case, and a majority of justices did not agree on a rationale for the holding.[8][9] The literacy test was also a device to restrict the total number of immigrants while not offending the large element of ethnic voters. 
The "old" immigration (British, Irish, German, Scandinavian) had fallen off and was replaced by a "new" immigration from Italy, Russia and other points in Southern and Eastern Europe. The "old" immigrants were voters and strongly approved of restricting the "new" immigrants. All groups of American voters strongly opposed Chinese and Asian immigration. The 1896 Republican platform called for a literacy test.[10] The American Federation of Labor took the lead in promoting literacy tests that would exclude illiterate immigrants, primarily from Southern and Eastern Europe. A majority of the labor union membership were immigrants or sons of immigrants from Germany, Scandinavia and Britain, but the literacy test would not hinder immigration from those countries.[11] Corporate industry, however, needed new workers for its mines and factories and opposed any restrictions on immigration.[12] In 1906, House Speaker Joseph Gurney Cannon, a conservative Republican, worked aggressively to defeat a proposed literacy test for immigrants. A product of the western frontier, Cannon felt that moral probity was the only acceptable test for the quality of an immigrant. He worked with Secretary of State Elihu Root and President Theodore Roosevelt to set up the "Dillingham Commission," a blue-ribbon body of experts that produced a 41-volume study of immigration. The Commission recommended a literacy test and the possibility of annual quotas.[13] Presidents Cleveland and Taft vetoed literacy tests in 1897 and 1913. President Wilson did the same in 1915 and 1917, but the test was passed over Wilson's second veto.[14] See also[edit] References[edit] 1. ^ 2. ^ 3. ^ Voting Rights Act of 1965 § 4(e); 52 U.S.C. § 10303(e) (formerly 42 U.S.C. § 1973b(e)) 5. ^ Buss, William G. (January 1998). "Federalism, Separation of Powers, and the Demise of the Religious Freedom Restoration Act". Iowa Law Review. 83: 405–406. Retrieved January 7, 2014.  (Subscription required.) 6. ^ Katzenbach v. Morgan, 384 U.S. 
641 (1966), pp. 652–656 7. ^ Williamson, Richard A. (1984). "The 1982 Amendments to the Voting Rights Act: A Statutory Analysis of the Revised Bailout Provisions". Washington University Law Review. 62 (1): 5–9. Retrieved August 29, 2013. 8. ^ Tokaji, Daniel P. (2006). "Intent and Its Alternatives: Defending the New Voting Rights Act" (PDF). Alabama Law Review. 58: 353. Retrieved July 29, 2015. 9. ^ Oregon v. Mitchell, 400 U.S. 112 (1970), pp. 118–121 10. ^ Brian Gratton, "Demography and Immigration Restriction in American History," in Jack A. Goldstone et al., eds., Political Demography: How Population Changes Are Re-shaping International Security and National Politics (2011), pp. 159–75 11. ^ A. T. Lane, "American Trade Unions, Mass Immigration and the Literacy Test: 1900-1917," Labor History (1984) 25#1, pp. 5–25. 12. ^ Claudia Goldin, "The political economy of immigration restriction in the United States, 1890 to 1921," in The Regulated Economy: A Historical Approach to Political Economy (U. of Chicago Press, 1994), pp. 223–258 13. ^ Robert F. Zeidel, "Hayseed Immigration Policy: 'Uncle Joe' Cannon and the Immigration Question," Illinois Historical Journal (1995) 88#3, pp. 173–188. 14. ^ Henry Bischoff (2002). Immigration Issues. Greenwood. p. 156. Further reading[edit] • Petit, Jeanne D. The Men and Women We Want: Gender, Race, and the Progressive Era Literacy Test Debate (University of Rochester Press, 2010). External links[edit]
Flu Vaccine Story at-a-glance

• A recent review found that flu vaccines may not offer protection as previously thought. The elderly, in particular, do not appear to receive measurable value from the flu shot. Trivalent inactivated influenza vaccines also didn't offer much protection to children over the age of seven
• While infants and young children are at greatest risk, no one is exempt from the potential serious complications of flu vaccination, one of which is Guillain-Barre syndrome. Early symptoms of GBS include sudden muscle weakness, fatigue and tingling sensations in the legs, eventually ending with either partial or total paralysis
• Crucell, a unit of U.S. drugmaker Johnson & Johnson, recently suspended delivery of their seasonal flu vaccine, Inflexal V, destined for Italy and other European countries, after discovering “problems” with two of 32 lots
• Two weeks ago Italy banned the sale and use of four flu vaccines manufactured by Novartis, following the discovery of white particles in the vaccines. Over the next two days, Switzerland, Austria, Spain, France, Germany and Canada also suspended use of Novartis’ flu vaccines
• ACIP recently changed their recommendation on Tdap during pregnancy. According to the new recommendation, a Tdap booster vaccine is to be given to pregnant women during each consecutive pregnancy. The vote was unanimous despite the fact that neither safety nor efficacy data exists for women given multiple consecutive Tdap vaccinations during every pregnancy.

Analysis Finds Flu Vaccine Efficacy Lacking, as Flu Vaccines are Suspended Across Europe and Canada November 06, 2012 | By Dr. Mercola With flu season just around the corner, health agencies are telling Americans to just "get your flu shot," assuring everyone that it's safe and effective. Many, like MedicineNet.com,1 chalk up any and all safety concerns as "myths." 
"It's the time of year when you should be thinking about flu vaccinations for yourself and your family," they write. "Some people, however, decide not to get the flu vaccine and put themselves and others at risk of getting sick just because they believe long-held myths about the vaccine." Myths? I think not. Vaccine Claims are Not Based on Science-Backed Medicine The only myth here is the unscientific claim that the flu vaccine is safe and effective and "the best way" to protect yourself against the flu. Nothing could be further from the truth. Numerous studies have shown that the flu vaccine is NOT an effective way to prevent influenza, and the real-life experiences of vaccine victims offer a window into the indisputable reality that flu vaccines are NOT without serious risks. Most recently, a University of Minnesota study2 published in January found that flu vaccines may not offer as much protection as previously thought. The elderly, in particular, do not appear to receive measurable value from the flu shot, which is the same conclusion reached by several previous studies. Trivalent inactivated influenza vaccines also didn't offer much protection to children over the age of seven. The study differs from other meta-analyses in that it assessed efficacy and effectiveness of licensed influenza vaccines in the US by including only those studies that used sensitive and highly specific diagnostic tests to confirm cases of influenza. Eligible articles were published between Jan 1, 1967, and Feb 15, 2011, and used RT-PCR or culture for confirmation of influenza. In essence, if you're a senior, you're taking a health risk for a theoretical health benefit that can't be confirmed and, if you're a healthy adult, it's a shot in the dark. According to this analysis, at best you'll have up to 59 percent protection IF the selected type A and B influenza strains included in the vaccine are exactly those you happen to be exposed to. 
If not, you'll have no protection at all. So, again, you're taking a health risk for little or no benefit. Even the study's lead researcher and director of the Center for Infectious Disease Research and Policy, Michael T. Osterholm, is questioning the effectiveness of the vaccine. As he told WFMY News 2:3 "We have overpromoted and overhyped this vaccine, it does not protect as promoted. It's all a sales job: it's all public relations." Powerful Profile of a Vaccine Victim While the efficacy of flu vaccines may be "suboptimal" or missing altogether, the same cannot be said about the potential health risks, so a calm, level-headed risk versus benefit analysis is crucial before you decide to get vaccinated. I urge you to watch the profile of a flu vaccine victim below, and weigh the potential of such an outcome against the potential of having to spend a week in bed with the flu... Remember most deaths attributed to the flu are actually due to bacterial pneumonia. But these days, bacterial pneumonia can be effectively treated with advanced medical care and therapies, like ICUs, respirators and parenteral antibiotics. Early symptoms of GBS (Guillain-Barre syndrome) include sudden muscle weakness, fatigue and tingling sensations in the legs that can take days or weeks to spread to the arms and upper body and can become painful, eventually ending with either partial or total paralysis. When there is total paralysis, GBS becomes life-threatening because it can impair breathing and interfere with the heart rate and cause high or low blood pressure that can lead to serious complications, such as heart attack and stroke. It is important to recognize the early symptoms of GBS, whether you have been vaccinated or not, and seek immediate medical care. All Vaccines Cause Inflammation, Can Alter Immune Response You also need to understand that vaccines can be immune suppressive – that is, they can suppress your immune system, which may not return to normal for weeks to months. 
Here are just some of the other ways vaccines can impair and alter your immune response: • Some components in vaccines are neurotoxic and may cause brain and immune dysfunction, including heavy metals such as mercury preservatives and aluminum adjuvants. • The lab-altered bacteria and viruses in vaccines may also affect your immune response in a negative way. • Vaccines may alter your T-cell function and lead to chronic illness. • Vaccines can trigger allergies, autoimmune or neurological disorders, particularly in individuals genetically predisposed to being unable to resolve inflammation. Vaccines introduce large foreign protein molecules into your body and induce an inflammatory response to stimulate antibodies. However, if your body responds to these foreign particles in a way that causes a type of inflammatory response that does not resolve, you can develop severe allergies, autoimmunity or brain dysfunction. In 2011, the Institute of Medicine acknowledged there is individual susceptibility to adverse responses to vaccination involving unidentified genetic, biological and environmental factors. So check your family history for evidence of allergy, autoimmunity and neurological disorders and carefully evaluate the potential individual risks of vaccination for you and your family. The flu vaccine may also pose an immediate risk to your cardiovascular system due to the fact that vaccination elicits an inflammatory response, a concern raised by one 2007 study published in the Annals of Medicine.5 Novartis Flu Vaccine Now Banned in Several Countries Following Particle Contamination It all began on October 17, when vaccine maker Crucell, a unit of U.S. 
drugmaker Johnson & Johnson, suspended delivery of 2.36 million doses of their seasonal flu vaccine, Inflexal V, destined for Italy and other European countries, after discovering "problems" with two of the 32 lots.6 A week later, on October 24, Italy banned the sale and use of four flu vaccines manufactured by Novartis.7 The Italian Health Ministry issued an advisory stating that use of Agrippal, Fluad, Influpozzi and adjuvanted Influpozzi was suspended until further notice, following the discovery of white particles in the vaccines. The following day, the ban on Novartis' flu vaccines spread to a number of other countries: • Switzerland8 • Spain9 • Germany • Austria • France10 On October 27, Canada also suspended sale and use of Novartis' flu vaccines sold under the names of Fluad and Agriflu, both of which are manufactured in Italy.11 According to a report by the Wall Street Journal:12 "Problems with its flu vaccines represent a new blow for the Swiss drug maker [Novartis], which has struggled with a series of manufacturing problems recently. The Basel-based company is still trying to resume production at its troubled facility in Lincoln, Nebraska, which was shut down in December because of manufacturing flaws, and recently had to recall a birth-control pill because of a packaging error. Novartis Chief Executive Joe Jimenez in a call with journalists sought to reassure that its flu shots are safe, adding that the company is cooperating with health authorities. 'We are confident that the safety of the vaccines is assured. The lot in question had a deviation, it has been identified and put on hold and has not been released to the market,' he said. 'The manufacturing of vaccines is a complex procedure. Italian authorities are free to continue investigating,' but there is evidence that such deviations wouldn't affect safety or efficacy, Mr. Jimenez said." 
Flu Vaccine for Pregnant Women Called into Question In the U.S., trivalent influenza vaccination is universally recommended for all pregnant women, but a study13 published last year calls this practice into question. If you're pregnant, you'd be wise to consider the potential risks involved and resist being bullied into taking the flu vaccine unless you really feel it's worth the gamble. "Trivalent influenza virus vaccination elicits a measurable inflammatory response among pregnant women... There was considerable variability in magnitude of response; coefficients of variation for change at two days post-vaccination ranged from 122 percent to 728 percent, with the greatest variability in IL-6 [cytokine interleukin-6] responses at this timepoint. ...As adverse perinatal health outcomes including preeclampsia and preterm birth have an inflammatory component, a tendency toward greater inflammatory responding to immune triggers may predict risk of adverse outcomes, providing insight into biological mechanisms underlying risk... further research is needed to confirm that the mild inflammatory response elicited by vaccination is benign in pregnancy." There are serious questions about the safety of giving flu shots to pregnant women because stimulating a woman's immune system during midterm and later term pregnancy may significantly increase the risk that her baby will develop autism during childhood and schizophrenia sometime during the teenage years and afterward.14 This risk is not minor. According to Dr. Blaylock, it's a well-accepted fact within neuroscience that eliciting an immune response during pregnancy increases the risk of autism and schizophrenia in her offspring seven- to 14-fold! 
The same point has been made by the authors of one 2007 study in the Journal of Neuroscience.15 More Craziness, ACIP Recommends Tdap Vaccine During Each Pregnancy In light of the uncertainty around the safety of vaccines during pregnancy, the recent decision by the Advisory Committee on Immunization Practices (ACIP) to recommend that a tetanus-diphtheria-acellular pertussis (Tdap) booster vaccine be given to pregnant women during each pregnancy is truly mind-boggling. The previous recommendation was that pregnant women only receive the Tdap vaccine if they had never previously received it. Not only do we lack safety information about the use of vaccinations in general during pregnancy, we also do not have any information about the safety of multiple or consecutive Tdap vaccinations. Infectious Diseases in Children reported on October 24:16 "A lack of safety data about multiple Tdap vaccinations caused some hesitancy among the committee, especially when considering women who have short intervals between pregnancies. There are no data available that address this specific issue, but available data suggest that there is no excess risk of adverse events, Liang said, adding that the CDC working group supports the need for a prospective study to determine the adverse event risk in women with multiple pregnancies. She said data indicate that the average woman has two children, and most have an interval of at least 13 months between pregnancies, meaning that most women would not receive more than two doses of the vaccine." This is ludicrous. Since when is it wise to make widespread recommendations without ANY scientific evidence that it is safe to do so? Yet this is exactly what is happening with vaccine recommendations. We've seen this blasé attitude toward safety again and again. The HPV vaccine is another excellent example of what can happen when you jump the gun and start mandating a vaccine for everyone without proper long-term safety and efficacy studies to back up your recommendation. 
How to Protect Yourself During the Flu Season

• Optimize Your Gut Flora. The best way to do this is to avoid sugars, processed foods and most grains, replacing them with healthy fats, and to eat fermented foods regularly, which can radically improve the function of your immune system.

Protect Your Right to Informed Consent and Vaccine Exemptions
Vaccine Awareness Week
Share Your Story with the Media and People You Know
Internet Resources Where You Can Learn More
Published: 18-03-2010, 06:10 Slaves’ Christmas Slavery in the United States can be traced back to the early seventeenth century. Although colonial-era slaves included some Native Americans and poor Europeans, the vast majority of people subjected to slavery in America were of African descent. Slavery never became as popular in the Northern states as it did in the Southern states. By the 1830s the Northern states had all but eliminated slavery, though it was still legal throughout the South. Slavery in the Southern United States ended with the close of the American Civil War in 1865. Many slaveowners gave their slaves three days off at Christmas time. Some permitted fewer or no days of rest, and others allowed more than three days. On some plantations slaves were authorized to select a YULE LOG to burn in the main fireplace of the manor house. The slaves’ holiday lasted as long as the log burned. Naturally the slave sent to fetch the Yule log from the woods exercised a great deal of care in choosing what he hoped would be a very slow-burning log. In this way the Christmas holiday could be extended to New Year’s Day. Many of the Christmas feasts held during these days off included homemade wine or generous servings of the masters’ own liquor. This policy often resulted in drunkenness, as slaves were not permitted to drink at any other time of the year and thus were unaccustomed to the effects of alcohol. Former slave and abolitionist Frederick Douglass (1817-1895) believed that many slaveowners promoted this drunkenness as a means of discouraging slaves from seeking their own freedom. After the holidays were over slaveowners suggested to the slaves that if freed they would quickly slip into a life of laziness and alcoholic overindulgence. They pointed to the slaves’ recent excesses as evidence for their argument. Christmas was a popular time for slaves to marry. 
The joyous family reunions and rowdy revelry that characterized the “Big Times,” as slaves sometimes referred to the Christmas holiday, inspired an increased number of romantic encounters leading to marriage. Plantation slaves sometimes had to make a formal visit to the “big house” (the manor house) to receive these gifts. Many never entered the mansion during the rest of the year. They arrived dressed in their best clothing to perform the little ritual surrounding Christmas gift giving. Along with his gifts, the master offered Christmas greetings to the slaves, wishing each of them a happy holiday. Sometimes he gave them a glass of EGGNOG and proposed a toast. Upon receiving his or her gift the slave would extend Christmas greetings and good wishes to the master and his family. Sometimes the slaves would collectively present the master or mistress with a token gift, such as a homemade basket or a clutch of eggs. On Christmas Day, custom permitted slaves to ask a Christmas gift of any white person they saw. All they had to do was to approach them and shout out, “Christmas gift!” before the white person could speak to them. Slaveowners who considered themselves good-natured let themselves be bested, and stocked up on coins, sweets, and trinkets to give away in this little GAME. In spite of their poverty, slave parents often gave their children a modest Christmas gift. These gifts consisted of things like home-made baskets, hats, aprons, or strip quilts. In some parts of the South slaves practiced a Christmas masquerade known as JONKONNU. Men dressed in tattered, makeshift costumes and masks. Thus attired they rambled from house to house playing music and dancing. Householders gave them coins or trinkets in exchange for their entertainment. Slaves also sang religious music at Christmas time. In fact, African-American slaves developed their own style of religious songs known as “spirituals.” Some well-known spirituals retell elements of the Christmas story. 
These include “Mary Had a Baby,” “Go Tell It on the Mountain,” “Rise up Shepherd and Follow,” “Sister Mary Had-a But One Child,” and “Behold That Star.” Throughout the South, both white and black children were told that GABRIEL the ANGEL sprinkled stardust on the earth in early winter. It turned into the first frost of the season as it hit the ground. Its sparkling beauty served to remind children of the coming of the Christ Child. Slaves also passed along bits of old European Christmas lore, such as the belief that animals gain the power of human speech on Christmas Eve (see also NATIVITY LEGENDS). If one crept quietly into the barn at just the right moment, one might overhear them murmur praises to God and the baby JESUS. Nevertheless, to do so would bring a mountain of bad luck down on one’s head. Some plantation slaves celebrated New Year’s Day with a cakewalk (see also MUMMERS PARADE). In this competitive dance, couples stepped side by side, moving around the dance floor in the form of a square. Their exaggerated movements amplified and made fun of the formal dances popular among white folk. The couple who exhibited the fanciest moves won a cake. Southern slaves and their masters shared certain New Year’s Day superstitions. Many believed that consuming a dish called hopping John, made from black-eyed peas and ham hocks, brought good luck for the coming year. Other popular beliefs included the notion that to argue on New Year’s Day meant that one would be drawn into arguments throughout the coming year. Many invoked the superstition that to cut one’s hair on New Year’s Day was to divide one’s wealth in two. Others held to the belief that to borrow or lend anything on New Year’s Day would bring bad luck for the rest of the year. The worst luck a slave could encounter on New Year’s Day was to be separated from a close family member through work contracts arranged by the master. Those hired out to work for the year on other plantations left on New Year’s Day.
For this reason slaves sometimes called January 1 “Heartbreak Day.” On January 1, 1863, President Abraham Lincoln (1809-1865) signed the Emancipation Proclamation into law, granting immediate freedom to slaves in the Southern states. This event is still celebrated as EMANCIPATION DAY in some African-American communities. Some masters cynically promoted slave Christmas celebrations, believing that this once-yearly binge relieved just enough suffering and want to prevent slaves from openly rebelling against their inferior status. Others may have been less aware of the possibility that the simple pleasures they afforded their slaves at Christmas time played a role in the preservation of slavery. In spite of all the pressures and deprivations they were subjected to, African-American slaves wrested some degree of holiday happiness out of the foods and freedoms allowed them at this time of year. Rising above their circumstances, they contributed a number of beautiful spirituals to the American repertoire of Christmas Carols. Our Christmas celebrations today are still the richer for them.
Friday, December 30, 2011 A note on ethnic conflict and demographics: the Czech Republic The population of the Czech Republic is similar to that of most European populations in its broad outlines: long life expectancies, below-replacement fertility rates, more-or-less substantial net immigration, all can be found in the Czech Republic. The most notable distinguishing factor of the Czech Republic's demographics lies in its size, now and in the relatively distant past: the Czech Republic is one of the very few countries in the world with a smaller population now than in 1945. The population of the Czech Republic reached a peak of nearly 11.2 million in 1940 but fell to a mere 8.8 million in 1947, not as a consequence of an especially high wartime death rate but rather primarily because of the expulsion of the roughly three million Sudeten Germans. The population has since grown to 10.5 million, this growth the product first of post-war natural increase then--despite a recent partial recovery in fertility rates--because of substantial net immigration. Beginning with post-war internal migrants from Slovakia (Slovaks and Roma alike) to the labour-hungry Czech lands, immigration into the Czech Republic became more globalized, first with a wave of Vietnamese immigrants who made use of Communist Czechoslovakia's recruitment of Vietnamese students and guest workers in the 1970s and 1980s, and later with the Ukrainians who left their country in the 1990s to earn a living in a neighbouring country with a strong labour market and permeable borders. (As an aside, I wonder if the Ukrainian presence in the Czech Republic is at all linked to historical Czech interactions with the Carpathian Ruthenia that was historically almost an eastern extension of Slovakia, specifically with the Zakarpattia Oblast that was actually part of Czechoslovakia from independence until its 1945 cession to the Soviet Union.)
Bulgarians, Chinese, Russians, and Mongolians are some of the newer groups to appear in the Czech Republic. As a high-income European country the Czech Republic would already be an attractive destination, but the cultural links maintained by the Czechs with other Slavic populations in central and eastern Europe and the Communist-era political and military links established with countries such as Vietnam and Mongolia have advantaged the Czech Republic relative to its neighbours. From what I can tell, this immigration has been substantially less politically problematic than in most other European countries. None of this could have been the case without the catastrophic ethnic violence of the 1940s, the Nazis' colonization and brutalization of the Czechs being followed by the expulsion of nearly the entire ethnic German population from the Czech Republic's territory after the Nazi defeat. Ethnic conflict determined the demographics of the Czech Republic. Let's start with immigration. Over at my blog I note that Czechoslovakia came apart so quickly and peacefully because Czechs and Slovaks weren't particularly close, for good and for ill; mild resentments and a certain romantic nostalgia characterized, and characterize, relations between the two largest ethnic groups in the former Czechoslovakia. Czechs and Slovaks were separate groups, and each had its own discrete territory unthreatened by the other. The same wasn't true for Czechs and Germans in the modern Czech Republic. I've argued at my blog that the expulsion of the Sudeten Germans was only possible because of long-standing Czech fears that German influence could be the death of their nation, whether metaphorically through assimilation or actually through genocidal colonization. After the Second World War, the expulsion of the Sudeten Germans was probably inevitable.
In a counterfactual scenario where the expulsion of the Sudeten Germans wasn't nearly so complete, where the border regions of the Czech Republic were bicultural if not outright German-majority, I can imagine immigration being a contentious issue. In Québec, immigration has been controversial because of concerns about the impact of immigrants on the language balance. Would the post-Communist immigration-driven population growth of the Czech Republic have been possible otherwise? The demographic and economic geography of the Czech Republic would also be radically different. Before 1945, the population density of the Czech lands was relatively uniform, with many of the Czech lands' borders--the same borders home to the Sudeten Germans--being superbly industrialized. The expulsions changed this, depopulating the areas once populated by Germans and then repopulating them only partially with migrants from elsewhere in Czechoslovakia, the expulsions additionally undermining once-strong regional economies. The net result was to make population, wealth, and industry concentrate that much more strongly in the centre of the Czech Republic, in Prague and environs, in turn creating peripheries. Some people have noted that these peripheries, in turn, suffered disproportionately from overindustrialization and pollution, as the regions' unsentimental new residents saw their new home as a space where industrial modernity could operate untrammelled by tradition, as a site for mass production and mass consumption regardless of the human and environmental cost. According to a recent study (PDF format), the regions which saw the strongest divergence from Czech and European Union averages (as measured by GDP per capita, not by household income) were the border regions that formed the core of the former German zone of settlement.
This peripheralization, coupled with the only partial repopulation of the Czech Republic's peripheries after 1945, could plausibly encourage the continued concentration of Czechs and their wealth in the geographic centre of their country. Could these border regions of the Czech Republic have evolved very differently if not for the replacement of their populations? In the past at Demography Matters, we've looked at how changing norms of gender, trivial connections formed by flows of guest workers or tourists, political concerns, the different ways in which people form families, similarities of language and culture between different populations, even geographic adjacency have led to demographic change of one kind or another. One thing that I don't think that we've ever before taken a look at is the role of ethnic conflict, culminating in ethnic cleansing and even genocide, in triggering demographic change. Thinking about this, I find it more than a bit disturbing since more than a few of the populations we've taken a look at--in Germany, Poland, East Africa, the former Yugoslavia, of course the Czech Republic--have been very strongly influenced by the long-term consequences of ethnic conflict. This will change in the new year. Tuesday, December 20, 2011 A note on North Korea after Kim Jong-Il Kim Jong-Il may have died, but North Korea is still damned. Korea--the south, the north, the peninsula in toto--has been the subject of more than a few posts here at Demography Matters. Korea matters. Right now, the North and the South are marked by notable imbalances: the south has urbanized, has completed the demographic transition, is in fact at the level of lowest-low fertility, and has begun to receive substantial inflows of immigrants, while the north remains substantially rural and substantially less advanced in the demographic transition and--obviously!--is far more a land of emigration than a land of prospective immigration.
South Korea has reached the level of the First World; North Korea combines the worst of the Second and the Third Worlds, with an inflexible totalitarian economy broadly hostile to non-centrally directed enterprise in the context of terrible general poverty. Notably, the South is rather less xenophobic than the North; I wrote back in November 2010 about how the South responds to its deficit of women by sponsoring the immigration of women across East Asia to marry locals, while the North punished women who engaged in survival sex with Chinese men with--at best--the sorts of abortions that didn't involve being kicked repeatedly in the abdomen by members of the security forces. In the event that the north's border controls weaken sufficiently, as I wrote back in March 2010, sustained mass emigration--to South Korea, an amplification of the existing marriage-driven migration to China, to anywhere--is much the most likely outcome for decades to come. East Germany lost two million people to the West after reunification, and East Germany was--by world standards--a high-income society with relatively advanced consumer industries and a high level of technology. What can North Korea plausibly offer its citizens, especially given the huge improvements in life chances awaiting a North Korean who left and the perhaps (alas) slim likelihood that a new government could trigger quick positive economic transformations? Monday, December 05, 2011 Five notes from Jacques Pepin's The Origins of AIDS 2. The vulnerability of central African populations to external forces 3. The vulnerability of central African populations to disease 5. The potential novelty and superficiality of migration-related links (This principle applies to viruses and human beings alike.)
What is meant by biasing transistor, Electrical Engineering Q. What is meant by biasing a transistor? The purpose of dc biasing of a transistor is to obtain a certain dc collector current at a certain dc collector voltage. These values of current and voltage are expressed in terms of the operating point. To obtain the operating point we make use of circuits called biasing circuits. Fixing a suitable operating point is not sufficient by itself; it must also be ensured that the point remains there. In a transistor the operating point can shift during circuit operation, and such a shift may drive the transistor into an undesirable region. There are two reasons for the shift of the operating point. Firstly, the transistor parameters are temperature dependent. Secondly, the parameters (like beta) change from unit to unit. The requirements of biasing a transistor are: establish the operating point in the active region of the characteristics, so that on applying the input signal the instantaneous operating point does not move into the saturation or cut-off region; stabilize the collector current against temperature variations; and make the operating point independent of transistor parameters, so that it does not shift when the transistor is replaced by another of the same type.
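The stabilization requirements above can be checked numerically. The sketch below is an illustrative example, not from the original text: it assumes a standard voltage-divider (self-bias) circuit with hypothetical component values (Vcc = 12 V, R1 = 39 kΩ, R2 = 10 kΩ, Rc = 2.2 kΩ, Re = 1 kΩ, Vbe ≈ 0.7 V), computes the operating point (Ic, Vce), and then doubles beta to see how much the Q-point moves.

```python
# Q-point of a voltage-divider biased BJT (illustrative values, assumed).

def q_point(vcc, r1, r2, rc, re, beta, vbe=0.7):
    """Return (Ic, Vce) for a voltage-divider biased BJT."""
    # Thevenin equivalent of the R1/R2 base divider
    vth = vcc * r2 / (r1 + r2)
    rth = r1 * r2 / (r1 + r2)
    # Base-emitter loop: Vth = Ib*Rth + Vbe + (beta + 1)*Ib*Re
    ib = (vth - vbe) / (rth + (beta + 1) * re)
    ic = beta * ib
    ie = (beta + 1) * ib
    # Collector-emitter loop: Vcc = Ic*Rc + Vce + Ie*Re
    vce = vcc - ic * rc - ie * re
    return ic, vce

ic100, vce100 = q_point(12, 39e3, 10e3, 2.2e3, 1e3, beta=100)
ic200, vce200 = q_point(12, 39e3, 10e3, 2.2e3, 1e3, beta=200)
print(f"beta=100: Ic = {ic100*1e3:.2f} mA, Vce = {vce100:.2f} V")
print(f"beta=200: Ic = {ic200*1e3:.2f} mA, Vce = {vce200:.2f} V")
```

With these assumed values the operating point sits near the middle of the active region (Vce well away from both saturation and cut-off), and doubling beta from 100 to 200 raises the collector current by only about 4 percent, which illustrates why the emitter resistor makes the operating point largely independent of the transistor parameters.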
DEFINITION of 'Corporate Lien' A claim made against a business for outstanding debt. The debt can be owed to another business, or could be in the form of tax obligations to the government. For example, the federal government may impose a corporate tax lien on a company that fails to pay payroll tax or some other tax obligation. The corporate lien is placed on the company's assets to indicate that the company has financial obligations outstanding. BREAKING DOWN 'Corporate Lien' Companies, like investors, are responsible for the debt that they take on, as well as for other financial responsibilities, such as paying their employees. If a company cannot meet its obligations, investors can step in and purchase the corporate lien, which settles the obligation with the lending party and allows the investors to pursue compensation from the company, along with any penalties it may be subject to. If the company ultimately declares bankruptcy, holders of the corporate lien may also have a higher priority than stockholders.
YNP biologists struggle to maintain wolf research YELLOWSTONE NATIONAL PARK, Wyo. - Yellowstone National Park scientists say wolf hunts are proving harmful to their decades-long research after they lost a record number of the predators to this year's hunting season. Yellowstone wolf biologist Doug Smith tells The Bozeman Daily Chronicle that 12 percent of the park's wolf population was killed by hunters after the predators were stripped of their federal protections in both Wyoming and Montana. Hunters killed several wolves that had been collared, including five key members of the park's packs. Three were wearing expensive GPS collars. The deaths of just a few choice wolves have resulted in the likely loss or collapse of two packs and cut short the history and trend data biologists have built.
ON the moonless night of Dec. 20, most of the passengers on the overcrowded Dona Paz were asleep, either in steel bunks or on outside decks, as the ferry made for Manila. Many of them were going to the Philippine capital for the Christmas holidays. Then the 2,215-ton ferry collided with the 629-ton tanker Victor, with a crew of 13. Fire broke out; some passengers jumped into the sea and escaped; within minutes both ships sank, killing at least 1,500 people. By week's end, only 26 survivors had been found after searches in the area of the collision, between the islands of Luzon, where Manila is, and Mindoro, to the south. The survivors, many badly burned, estimated that as many as 3,000 people, double the normal capacity, might have been on the ferry. A 34-year-old fisherman said he had been on the ferry with his father-in-law, 14-year-old daughter, brother, niece and 14 people he had recruited to work as domestic servants. ''I was still shaken by the noise when I saw my father-in-law jump into the sea,'' he said. ''I saw the ship in flames and I wanted to kill myself.'' ''Our sadness is all the more painful because the tragedy struck with the approach of Christmas,'' said President Corazon C. Aquino. The Philippines is the only predominantly Christian nation in Asia. The greatest loss of life at sea in this century occurred Jan. 30, 1945, when an estimated 7,700 people went down in the Baltic Sea on the transport ship Wilhelm Gustloff, which was carrying Germans, including Nazis, fleeing Poland. The ship was torpedoed by a Soviet submarine. In April 1912, the British liner Titanic struck an iceberg in the North Atlantic and sank, killing 1,503 people. Photo of fisherman towing bodies recovered after collision between a Philippine passenger ferry and an oil tanker (AP)
Video Games Come Alive Parents have been skeptical of video games. Caught between kids' love of them and their own fears about couch potato play, they've mostly fallen into uneasy acceptance. The result: Nearly a third of children age 6 and under have played video games (14 percent of kids age 3 and under and 50 percent of 4- to 6-year-olds). But the way kids play video games is being turned on its head. Rather than simply offering exercise for only the thumb and wrist, and targets for shooting or jumping over, some new games demand actual physical interaction. This transformation comes mostly as a result of new "input devices," or, to put it more simply, alternatives to the joystick. A small camera mounted on your TV can put your child in a video game, which she then plays with her entire body. Dance on a special mat, and your moves are converted to the action on the screen. A drum attached to the video-game console turns a simple game into an exercise in rhythm. Not only parents but educators and doctors are taking notice. "I used to think video games were bad and sedentary," says Ernie Medina, a preventive-care specialist in Redlands, California. "But these new ones can get kids in shape." He's started using some of these games at his clinic to help obese children lose weight. Last fall, he persuaded his local school district to add them to elementary school physical-education classes. While most children's video games are geared to grade-schoolers and up, these new devices and the games that go with them can appeal to preschoolers, too, particularly if parents are there to help. But are these new toys any good, or are they just the same old shoot-'em-up games with a different name? None of these are must-haves; kids are still better off playing real soccer than virtual.
But if you have a compatible system already or are planning to buy one, these four games may be worth a closer look: EyeToy What it is: A tiny digital video camera that plugs into a PlayStation 2, Sony's video-game console. When used with EyeToy-compatible games, it captures a live video image of the player and puts it directly into the action onscreen. Instead of controlling play with a knob or lever or buttons, you control it by waving your arms, body, and legs. The games: EyeToy Play 2 is a compilation of 12 games, some of which will probably appeal to preschool kids because of their simplicity (and silliness), while others are better suited to older children. Popping virtual bubbles, for instance, is easy enough for kids as young as 3, as is playing chef (chopping potatoes, smashing tomatoes) or wielding on-screen power tools (with help from a parent or an older sibling, though some preschoolers won't get it or have the patience to figure it out). Games like virtual table tennis and soccer are better suited to older kids, ages 6 and up. Why it's cool: Though the games require some parental involvement at setup (focusing the camera, navigating the menu to reach the desired game), the action unfolds in a fairly intuitive manner. There are no buttons or controllers to master. Even young kids can get the idea of, say, poking their fingers around in the air to pop bubbles. And although the idea of being inside the TV and controlling the action is a little freaky at first, it quickly becomes fun. The play is genuinely active. Price: $30 for camera; games are $40 or so without the camera. Requires a Sony PlayStation 2 console, which sells for around $150. Buy it!  Dance Dance Revolution What it is: A soft plastic mat you plug into a Nintendo GameCube. You dance on the mat prompted by what's going on onscreen.
The games: Best for young kids (5 and up) is Dance Dance Revolution: Mario Mix; it follows the cartoon character Mario as he tries to retrieve his lost Music Keys, which will restore peace -- and rhythm -- to the people of the Mushroom Kingdom. Kids are challenged to dance in sync with a song as the right moves flash on the screen. There are also head-to-head (well, toe-to-toe) dance-offs against other characters in the game. Other Dance Dance Revolution titles with more of a music-video feel are better for kids 10 and up. Three-year-olds will want to dance on the mat but probably won't have a clue how to make things happen onscreen. At age 5, kids really start to figure it out and don't want to stop. Why it's cool: Kids love to dance, and these games give them a fun excuse to just move and squirm and squiggle. Dance Dance Revolution has spawned a sort of electronic sport of its own, with competitions in arcades around the world. Guaranteed to make you sweat and your children pant. Price: $50, including dance mat. Requires Nintendo GameCube console, which goes for around $100. Buy it! Donkey Konga Bongos What it is: A set of bongos, attached to a Nintendo GameCube, that let kids groove, pound, and control the video-game action by drumming. The game: Donkey Konga 2 is the latest game for the Konga Bongos. Music plays as a cartoon gorilla slugs away at his onscreen drums. The challenge is to hit your drums right along with the multicolored arrows moving along the screen to earn bananas and points. Adept 4-year-olds will like it, but it's easier and more fun for kids 5 and up. Why it's cool: Kids are usually pounding something in the house anyway, and this kind of game is the perfect thing to keep them busy and working on their jams. Not quite a seminar in astrophysics, but harmless fun for little Art Blakeys everywhere. Develops both arm muscles and a sense of rhythm. 
Young kids will probably also like Taiko Drum Master, a similar game (but for Sony PlayStation 2 rather than GameCube) based on a traditional Japanese form of drumming, which uses one large single drum and drumsticks. Price: Konga is $49 with drum; Taiko is $48 with drum. Buy it!  What it is: An interactive game, kind of like Tamagotchi on steroids, that takes raising a virtual pet to a whole new level. It's played on the handheld Nintendo DS. The game: The game isn't physical, but its level of interactivity is new. The basic idea is to raise and train a puppy, which you "buy" from a kennel. Like a real dog, each pooch comes with its own personality -- lovable, shy, feisty, and everything in between. But the secret to the game's appeal is the Nintendo DS interface, which has a touch screen, a microphone, and voice-recognition software. The result: Your child can make up and speak the puppy's name until the pooch responds to her call (but not yours). Tricks like sitting and rolling over can be taught through a combination of voice commands and "petting" the dog via the touch-screen. Though the pets don't age (or die), they do build skills, like playing Frisbee. They also eat, and figuring out how to feed them is a player's first challenge. Children under 5 will really want to play but won't be able to read the screen prompts. Bigger kids will know every in and out of the game long before you do. Why it's cool: Kids love it, girls especially. The dogs are really cute -- they roll on their backs and want you to scratch their bellies, and if you pat their noses they'll sneeze. Some claim games like this teach kids about responsibility. Please. They'll treat their fake pooches like a king and still leave their rooms a mess. But it is charming and captivating (and portable -- can you say "car trip where at least one of the kids forgets to whine?"). Price: $30. Requires the $129 Nintendo DS. Buy it!
Essential Safety Tips for Printmaking in the Art Room Many teachers get cold feet when thinking about doing advanced printmaking in the art room, simply because of the safety issues that can arise. We put our heads together and have come up with your essential list to keep safety in the art room at the forefront, without spoiling the fun of printmaking. 1. Find a material that is soft and easy to carve, preventing slips. We like Blick E-Z Cut Printing Blocks (AOE writer Cassidy uses these in her classroom!) 2. Provide appropriate tools for student use. For example, with these particular cutters, instead of pushing away, students pull towards them. This minimizes the danger of pushing the tool into their hands. The only disadvantage is that they’re made for right-handed people. If you have a left-handed student, you can use the traditional cutters. 3. Skip carving with younger students. Use a foam product like this one instead. It’s my personal favorite to use in the art room with students in 2nd-5th grade. You can get some really nice prints from these. 4. Try a pre-made printing object, such as these Gyotaku Fish. I talked more about this technique in this video, and why my students love them. It gives them the basic concept of the printmaking process, with fabulous results and a cultural connection, too! 5. Be sure to use bench hooks when carving. Bench hooks help hold the blocks in place so students can keep their hands away from the sharp tools. There are many different types out there, we like these as a middle of the road solution for any classroom. 6. Remember, it’s not just about WHAT supplies you have on hand, it’s how you distribute and educate students about them! Remember to count the cutting tools at the beginning and end of class. Cassidy, AOE Writer, gives this smart tip: “As a middle school teacher, I rarely pass out supplies myself.
However, for printmaking, passing out the cutting tools to the whole table and then holding students responsible for all of them at the end of class seems to work well. If a student needs a different size, I have a special sign out sheet. Students can sign out tools and then cross off their names when they return them.” What are other ways you ensure a safe art-making experience with printmaking? Any lessons you learned the hard way?  Jessica Balsley • Janine I have my students wear a glove on their non-cutting hand – it reminds them they have a hand. Sounds silly, but there have not been any accidents since making that change. • Jessica Balsley I was actually thinking about gloves the other day. I seem to remember seeing some in a catalog that were meant just for this purpose. Has anyone tried them? • Heidi you mentioned you have students draw 4 bugs to start with. Are they looking at a model, or just drawing from memory, or a photo? • Bob If you have some basic woodworking skills (cutting, shooting screws) you can easily make a class set of bench hooks. Also, for gyotaku, I suggest using real fish when possible. The school I teach in includes a 3-year-olds class up through 12th grade. The 3-year-olds teacher does simple gyotaku with her class using real fish, and I do it in middle school, also with real fish. This gives the kids a chance to explore biology, which is endlessly fascinating to the little ones. Some of the coolest prints we get, both in the early childhood class and in middle school, are from squid and octopus.
New liquid crystal promises faster LCDs As predicted by IBM, apparently Researchers have observed a new type of liquid crystal - long theorised, but not observed until now - that they say promises faster and cheaper liquid crystal displays. The team of Dr. Satyendra Kumar of Kent State University, Dr. Andrew Primak of Pacific Northwest National Lab, and Dr. Bharat R. Acharya of Platypus Technologies used a small-angle X-ray diffraction technique to observe the crystal phase, called biaxial nematic liquid crystal. In very simple terms, LCD displays function because electrical current can control the opacity of the crystals. The displays can be either passive or active: the active displays have transistors at each pixel point, so less current is needed to control its brightness. The rate at which the current can be switched on and off (and still get a response from the crystal) determines the screen refresh time, and so the quality of the image - how a cursor tracks with mouse movement, for example. This new crystal phase has the potential to speed the refresh rate a further ten times, the researchers say, as the crystals reorient more quickly in response to a voltage. Acharya commented: "There was no evidence of the existence of biaxial nematic liquid crystals made of single molecules until recently." In 2000, Kent State researchers presented initial findings at the March meeting of the American Physical Society, but these were more complex micellar, or aggregated, biaxial liquid crystals, Acharya explains, and do not have the right optical properties for use in displays. A paper describing their work appeared in the April 9 issue of Physical Review Letters. These latest findings will be presented by Kumar at the International Liquid Crystal Conference in Slovenia on July 6, 2004.
Small Samples and Non-Normality: The sample size needed for the distribution of sample means to approximate normality depends on the degree of non-normality of the population. For an approximately normal population, you won't need as large a sample as for a very non-normal one. Consider a large population from which you could take many different samples of a particular size. (In a particular study, you generally collect just one of these samples.) The t-test assumes that the means of the different samples are normally distributed; it does not assume that the population itself is normally distributed. By the central limit theorem, means of samples from a population approach a normal distribution regardless of the distribution of the population. Rules of thumb say that the sample means are essentially normally distributed as long as the sample size is at least 20 or 30. For a t-test to be valid on a smaller sample, the population distribution would have to be approximately normal.
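The central limit theorem claim is easy to check empirically. The sketch below is a minimal illustration using only the Python standard library; the exponential population and the particular sample sizes are illustrative choices, not part of the original text. It draws many samples from a strongly right-skewed population and shows the skewness of the sample means shrinking toward zero (i.e., toward normality) as the sample size grows:

```python
import random
import statistics

random.seed(42)

def sample_mean_distribution(pop_sampler, n, num_samples=2000):
    """Draw num_samples samples of size n; return the list of their means."""
    return [statistics.mean(pop_sampler() for _ in range(n))
            for _ in range(num_samples)]

def skewness(xs):
    """Rough skewness estimate: mean of cubed standardized values."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return statistics.mean(((x - m) / s) ** 3 for x in xs)

# Strongly right-skewed population: exponential with mean 1 (skewness 2).
skewed = lambda: random.expovariate(1.0)

# For means of samples of size n, skewness falls off roughly as 2 / sqrt(n).
for n in (5, 30, 100):
    means = sample_mean_distribution(skewed, n)
    print(f"n = {n:3d}: skewness of sample means = {skewness(means):+.2f}")
```

With n around 30 the skewness of the sample means is already small, matching the 20-or-30 rule of thumb above; for populations with heavier tails, a larger n would be needed.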
A Philosopher's Blog
Burkini Ban
Posted in Ethics, Law, Philosophy, Religion by Michael LaBossiere on August 29, 2016
In response to terrorist attacks, some French politicians sprang into action and imposed ordinances aimed at banning the burkini. For those who are not theological fashionistas, a burkini is essentially a more fashionable wet suit intended primarily for Moslem women who want to swim in public while remaining modestly dressed. The burkini is in some ways reminiscent of women’s swimwear of the early 1900s, but far less likely to result in death by drowning. The burkini is also popular with women who want to swim but would prefer to lower their chances of getting skin cancer. To be a bit more specific about the ban, the ordinances did not name the burkini, but rather forbade bathing attire that is not “appropriate,” that fails to be “respectful of good morals and of secularism,” and does not follow “hygiene and security rules.” There is a certain irony in the fact that being scantily clad on the beach was once considered in the West to be inappropriate and disrespectful of good morals. Now it is claimed that being well covered is not respectful of good morals. While I am not a legal scholar, the specifications seem rather odd. I would think that appropriate attire that is “respectful of good morals” would be one that covers up the naughty bits—assuming that covering the bits is the right thing to do. While not an expert on hygiene and security, I do not see how a burkini would be any more a threat to hygiene or security than other common swimming attire such as bikinis, speedos, and wet suits. After all, the typical burkini is effectively a wet suit. There is also the fact that Christian nuns who dress conservatively for the beach are not targeted; presumably their attire is in accord with both hygiene and security.
As with France’s 2011 burqa ban, these ordinances seem aimed at creating the impression that a leader is doing something, to distract the masses from real problems and to appeal to religious intolerance and xenophobia. Since women going to swim in a burkini are unlikely to present a threat to public safety, there seems to be no legitimate basis for these ordinances in regards to preventing harm to the public. And this is the only rational moral justification for laws that forbid people from dressing or acting certain ways. It could be countered that the ordinances are actually intended to protect the women from oppression; that they aim to prevent women from being forced to cover up if they do not wish to do so. While many Westerners probably assume that Moslem women are all forced to cover up, this is not the case. Some women apparently do this by choice and regard the right to do so as protected by the Western notion of freedom. While some might be skeptical about how free the choice is, it is reasonable to think that some women would, in fact, freely decide to cover up in this way. After all, if some women are willing to show lots of skin in public, then it hardly seems unusual that some women would rather show far less. There are certainly women who prefer modest attire and women who willingly embrace religious traditions. For example, some nuns who visit beaches dress very modestly; but they seem to do so by choice. Presumably the same can be true of Moslem women. Some might argue that women who cover up too much and those that cover up too little are all victims of male oppression and are not really making free choices. While it is reasonable to believe that social and cultural factors impact dressing behavior, it seems unreasonable to claim that all these women are incapable of choice and are mere victims of the patriarchy.
In any case, to force someone to dress or not dress a certain way because of some ideology about the patriarchy would also be oppressive. It might also be argued that just as there are laws against being naked in public, there should also be laws against being improperly over-covered on the beach. After all, a woman would (probably) get in trouble for walking the streets of France with only her face, feet and hands covered, so why should a woman be allowed to go to the beach with only her face, hands and feet exposed? Both, it could be argued, create public distractions and violate the general sense of appropriate dress. While this might have some appeal, such ordinances would need to be applied in a consistent manner. As such, if a Christian woman were spotted walking the beach in jeans and a shirt, she would have to be removed from the beach or forced to strip. The obvious counter is that the ordinances are not used to target anyone but Moslem women in burkinis, although the secular part of the ordinances would allow targeting any attire with a non-secular connection. This would, obviously, ban nuns from the beach if they wore religiously linked attire, such as modest swimsuits. This sort of ban would be a clear attack on religious freedom, which is problematic. While I am not particularly religious, I do recognize the importance of the freedom of faith and its expression. While there can be legitimate grounds for limiting such expressions (like banning human sacrifices), when a practice does not create harm, then there seems to be no real ground for banning it. As such, the ban in France seems to be completely unjustified and also an infringement of both the freedom of choice and the freedom of religion. While some might point out that some Muslim countries do not allow such freedoms, my easy and obvious reply is that these countries are in the wrong and we should certainly not want to be like them. Two wrongs do not, obviously, make a right.
Lastly, it could be argued that the burkini is a very serious matter—the burkini is a rejection of French culture and an explicit statement in support of Islam against France. The challenge is, of course, to provide evidence that this is the intention behind wearing the burkini. While attire can be used to make a statement, thinking that wearing a burkini must be an attack on France is on par with thinking that a person who eats a Big Mac or hummus in public in France is also attacking France. Even if a person is wearing the burkini as a statement, then it would seem to fall under freedom of expression. While it might offend some, offense is not grounds for imposing on this freedom. While there is some appeal to the idea that people should assimilate into the culture, there is the obvious question of why one view of the culture should be granted hegemony over everything. That is, why the burkini cannot be as accepted as the bikini, why Islam cannot be as accepted as Methodism. Going back to the food analogy, it would be unreasonable to require French citizens to only eat food that is regarded as properly French and to see people who eat other food as a threat. In closing, the burkini bans are unwarranted and morally unacceptable.
My Amazon Author Page My Paizo Page My DriveThru RPG Page Follow Me on Twitter

Do We Want Rapists, Robbers and Murderers Voting?
Posted in Ethics, Philosophy, Politics by Michael LaBossiere on August 26, 2016
My essay on felons and voting received an interesting comment from A.J. McDonald, Jr. He raised a concern about having rapists, robbers and murderers voting. One initial reply is that there are many other types of felonies, a significant number of which are non-violent felonies. As such, any discussion of felons and voting needs to consider not just the worst felonies, but all the felonies on the books. And, in the United States, there are many on the books.
That said, I will address the specific concern about felons convicted of rape, robbery and murder. On the face of it, it is natural to have an immediate emotional reaction to the idea of rapists, robbers and murderers voting. After all, these are presumably very bad people and it is offensive to think of them exercising the same fundamental right as other citizens. While this reaction is natural, it is generally unwise to try to settle complex moral questions by appealing to an immediate emotional reaction—although calm deliberation might end up in the same place as fiery emotion. I will begin by considering arguments for disenfranchising such felons. The most plausible argument, given my view that voting rights are foundational rights in a democratic state, is that such crimes warrant removing or at least suspending a person’s status as a citizen. After all, when a person is justly convicted of rape, murder or robbery they are justly punished by suspension of their liberty. In some cases, they are punished by death. As such, it seems reasonable to accept that if the right to liberty (and even life) can be suspended, then the right to vote can be suspended as well. I certainly see the appeal here. However, I think there is a counter to this reasoning. Punishment by imprisonment is generally aimed at three goals. The first is to protect the public from the criminal by removing him from society and to serve as a deterrent to others. This could be used to justify taking away the right to vote by arguing that felons are likely to vote in ways that would harm society. The easy and obvious reply is that there seems to be little reason to think that felons could do harm through voting. Or any more harm than non-felon voters.
For felons to do real harm through voting, there would need to be harmful choices and these would need to be choices that felons would pick because they are felons and they would need to be able to win that vote. It could be claimed that, for example, there might be a vote on reducing prison sentences and the felons would vote in their interest to the detriment of others. While this is possible, it seems unlikely that the felons would be able to win the vote on their own. There is also the obvious counter that non-felons are likely to vote in harmful ways as well—as the history of voting shows. As such, denying felons the vote to protect the public from harm is not a reasonable justification. If there are things being voted for that could do serious harm, then the danger lies with those who got such things on the ballot and not with felons who might vote for it. The second is the actual punishment, which is typically justified in terms of retribution. This does have some appeal as a justification, assuming that the felon wants to vote and regards being denied the vote as a harm. However, most Americans do not vote—so it is not much of a punishment. There is also the question of whether the denial of the right to vote is a suitable punishment for a crime. Punishments should not simply be tossed onto a crime—they should fit. While paying restitution would fit for a robbery, being denied the right to vote would not seem to fit. The third is rehabilitation; the prisoner is supposed to be reformed so he can be returned to society (assuming the sentence is not death or life). Denying voting rights would seem to have the opposite effect—the person would be even more disconnected from society. As such, this would not justify removal of the voting rights. Because of these considerations, even rapists, murderers and robbers should not lose their right to vote.
I do agree, as argued in my previous essay, that crimes that are effectively rejections of the criminal’s citizenship (like rebellion and treason) would warrant stripping a person of citizenship and the right to vote. Other crimes, even awful ones, would not suffice to strip away citizenship. Another approach is to make the case that rapists, murderers and robbers are morally bad or bad decision makers and should be denied the right to vote on moral grounds. While it is true that rapists, murderers and robbers are generally very bad people, the right to vote is not grounded in being a good person (or even just not being bad) or making good (or at least not bad) decisions. While it might seem appealing to have moral and competency tests for voting, there is the obvious problem that many voters would fail such tests. Many politicians would also fail the tests as well. It could be countered that the only test that would be used is the legal test of whether or not a person is convicted of a felony. While obviously imperfect, it could be argued that those convicted are probably guilty and probably bad people and thus should not be voting. While it is true that some innocent people will be convicted and denied the right to vote and also true that many bad people will be able to avoid convictions, this is acceptable. A reply to this is to inquire as to why such a moral standard should be used in regards to the right to vote. After all, the right to vote (as I have argued before) is not predicated on moral goodness or competence. It is based on being a citizen, good or bad. As such, any crime that does not justly remove a citizen’s status as a citizen would not warrant removing the right to vote. Yes, this does entail that rapists, murderers and robbers should retain the right to vote. This might strike some as offensive or disgusting, but these people remain citizens.
If this is too offensive, then such crimes would need to be recast as acts of treason that strip away citizenship. This seems excessive. And there is the fact that there are always awful people voting—they just have not been caught or got away with their awfulness or are clever and connected enough to ensure that the awful things they do are not considered felonies or even crimes. I am just as comfortable allowing a robber to vote as I am to allow Trump and Hillary to vote in their own election.
My Amazon Author Page My Paizo Page My DriveThru RPG Page Follow Me on Twitter

Simulated Living
Posted in Metaphysics, Philosophy by Michael LaBossiere on August 22, 2016
One of the oldest problems in philosophy is that of the external world. It presents an epistemic challenge forged by the skeptics: how do I know that what I seem to be experiencing as the external world is really real for real? Early skeptics often claimed that what seems real might be just a dream. Descartes upgraded the problem through his evil genius/demon which used either psionic or supernatural powers to befuddle its victim. As technology progressed, philosophers presented the brain-in-a-vat scenarios and then moved on to more impressive virtual reality scenarios. One recent variation on this problem has been made famous by Elon Musk: the idea that we are characters within a video game and merely think we are in a real world. This is, of course, a variation on the idea that this apparent reality is just a simulation. There is, interestingly enough, a logically strong inductive argument for the claim that this is a virtual world. One stock argument for the simulation world is built in the form of the inductive argument generally known as a statistical syllogism. It is statistical because it deals with statistics. It is a syllogism by definition: it has two premises and one conclusion.
Generically, a statistical syllogism looks like this:
Premise 1: X% of As are Bs.
Premise 2: This is an A.
Conclusion: This is a B.
The quality (or strength, to use the proper term) of this argument depends on the percentage of As that are Bs. The higher the percentage, the stronger the argument. This makes good sense: the more As that are Bs, the more reasonable it is that a specific A is a B. Now, to the simulation argument.
Premise 1: Most worlds are simulated worlds.
Premise 2: This is a world.
Conclusion: This is a simulated world.
While “most” is a vague term, the argument is strong rather than weak in that if its premises are true, then the conclusion is logically more likely to be true than not. Before embracing your virtuality, it is worth considering a rather similar argument:
Premise 1: Most organisms are bacteria.
Premise 2: You are an organism.
Conclusion: You are a bacterium.
Like the previous argument, the truth of the premises makes the conclusion more likely to be true than false. However, you are almost certainly not a bacterium. This does not show that the argument itself is flawed. After all, the reasoning is quite good and any organism selected truly at random would most likely be a bacterium. Rather, it indicates that when considering the truth of a conclusion, one must consider the total evidence. That is, information about the specific A must be considered when deciding whether or not it is actually a B. In the bacteria example, there are obviously facts about you that would count against the claim that you are a bacterium—such as the fact that you are a multicellular organism. Turning back to the simulation argument, the same consideration is in play. If it is true that most worlds are simulations, then any random world is more likely to be a simulation than not.
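The bacteria example is at bottom a base-rate point, and it can be made precise with Bayes' theorem: specific evidence about the individual can swamp even a lopsided base rate. A minimal sketch follows; every number in it is a purely illustrative assumption, not a claim from the text:

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Premise 1 as a base rate: suppose 99% of organisms are bacteria.
prior_bacterium = 0.99
# Total evidence: you are multicellular. Bacteria essentially never are;
# assume half of non-bacterial organisms are (the exact value hardly matters).
p_multi_given_bacterium = 1e-6
p_multi_given_other = 0.5

p = posterior(prior_bacterium, p_multi_given_bacterium, p_multi_given_other)
print(f"P(bacterium | multicellular) = {p:.6f}")  # far below the 99% base rate
```

The same move applies to the simulation argument: even granting premise 1, the conclusion only stands until some specific evidence about this particular world is weighed in.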
However, the claim that this specific world is a simulation would require due consideration of the total evidence: what evidence is there that this specific world is a simulation rather than real? This reverses the usual challenge of proving that the world is real to trying to prove it is not real. At this point, there seems to be little in the way of evidence that this is a simulation. Using the usual fiction examples, we do not seem to find glitches that would be best explained as programming bugs, we do not seem to encounter outsiders from reality, and we do not run into some sort of exit system (like the Star Trek holodeck). Naturally, this is all consistent with this being a simulation—it might be well programmed, the outsider might never be spotted (or never go into the system) and there might be no way out. At this point, the most reasonable position is that the simulation claim is at best on par with the claim that the world is real—all the evidence is consistent with both accounts. There is, however, still the matter of the truth of the premises in the simulation argument. The second premise seems true—whatever this is, it seems to be a world. It seems fine to simply grant this premise. As such, the first premise is the key—while the logic of the argument is good, if the premise is not plausible then it is not a good argument overall. The first premise is usually supported by its own stock argument. The reasoning includes the points that the real universe contains large numbers of civilizations, that many of these civilizations are advanced and that enough of these advanced civilizations create incredibly complex simulations of worlds. Alternatively, it could be claimed that there are only a few (or just one) advanced civilizations but that they create vast numbers of complex simulated worlds.
The easy and obvious problem with this sort of reasoning is that it requires making claims about an external real world in order to try to prove that this world is not real. If this world is taken to not be real, there is no reason to think that what seems true of this world (that we are developing simulations) would be true of the real world (that they developed super simulations, one of which is our world). Drawing inferences from what we think is a simulation to a greater reality would be like the intelligent inhabitants of a Pac Man world trying to draw inferences from their game to our world. This would be rather problematic. There is also the fact that it seems simpler to accept that this world is real rather than making claims about a real world beyond this one. After all, the simulation hypothesis requires accepting a real world on top of our simulated world—why not just have this be the real world?
My Amazon Author Page My Paizo Page My DriveThru RPG Page Follow Me on Twitter

Dating III: Age is Not Just a Number
Posted in Philosophy, Relationships/Dating by Michael LaBossiere on August 19, 2016
Being a philosopher and single again, I have been overthinking the whole dating thing. I suspect that those who give it little or no thought do much better; but I am what I am and therefore I must overthink. An interesting adventure in interaction provided me with something new, or rather old, to think about: age and dating. In this scenario I was talking with a woman and actually had no intention of making any overtures or moves (smooth or otherwise). With some storytelling license in play, we join the story in progress.
Her: Flirt. Flirt. Flirt.
Her: “So, what do you do for work?” Flirt.
Me: “I’m a philosophy professor.”
Her: “At FSU?” Flirt.
Me: “No, literally across the tracks at FAMU.”
Her: “When did you start?” Flirt.
Me: “1993.”
Her: “1993…how old are you?”
Me: “Fifty.”
At this point, she dropped out of flirt mode so hard that it damaged the space-time continuum. Windows cracked. Tiny fires broke out in her hair. Car alarms went off. Pokémon died. Squirrels were driven mad and fled in terror, crying out to their dark rodent gods for salvation. As my friend Julie commented, I had “instantly gone from sexable to invisible.” Here is how the conversation ended:
Her: “Um, I bet my mother would like you. Oh, look at the time…I have to go now.”
Me: “Bye.”
While some might have found such an experience ego-damaging, my friends know I have an adamantine ego. Also, I am always glad to get a good story that provides an opportunity for some philosophical analysis. What struck me most about this episode is that the radical change in her behavior was due entirely to her learning my age—I can only infer that she had incorrectly estimated I was younger than fifty. Perhaps she had forgotten to put in her contacts. So, on to the matter of age and dating. While some might claim that age is just a number, that is not true. Age is rather more than that. At the very least, it is clearly a major factor in how people select or reject potential dates. On the face of it, the use of age as a judging factor should be seen as perfectly fine and is no doubt grounded in evolution. The reason is, of course, that dating is largely a matter of attraction and this is strongly influenced by preferences. One person might desire the feeble hug of a needy nerd, while another might crave the crushing embrace of a jock dumb as a rock. Some might swoon for eyes so blue, while others might have nothing to do with a man unless he rows crew. Likewise, people have clear preferences about age. In general, people prefer those close to them in age, unless there are other factors in play. Men, so the stereotype goes, have a marked preference for younger and younger women the older and older they get.
Women, so the stereotype goes, will tolerate a wrinkly old coot provided that he has sufficient stacks of the fattest loot. Preferences in dating are, I would say, analogous to preferences about food. One cannot be wrong about these and there are no grounds for condemning or praising such preferences. If Sally likes steak and tall guys, she just does. If Sam likes veggie burgers and winsome blondes, he just does. As such, if a person prefers a specific age range, that is completely and obviously their right. As with food preferences, there is little point in trying to argue—people like what they like and dislike what they dislike. That said, there are some things that might seem to go beyond mere preferences. To illustrate, I will offer some examples. There are white people who would never date a black person. There are black people who would never date anyone but another black person. There are people who would never date a Jew. There are others for whom only a Jew will do. Depending on the cause of these preferences, they might be better categorized as biases or even prejudices. But, it is worth considering that these might be benign preferences. It might be, for example, that a white person has no racial bias; they just prefer light skins to dark skins for the same sort of reason one might prefer brunettes to blondes. Then again, they might not be so benign. People are chock full of biases and prejudices and it should come as no surprise that they influence dating behavior. On the one hand, it is tempting to simply accept these prejudices in this context on the grounds that dating is entirely a matter of personal choice. On the other hand, it could be argued that such prejudices are problematic even in the context of dating. This is not to claim that people should be subject to some sort of compelled diversity dating, just that perhaps they should be criticized.
When it comes to apparent prejudices, it is worth considering that the apparent prejudice might be a matter of innocent ignorance. That is, the person merely lacks correct information. Assuming the person is not willfully and actively ignorant, this is not to be condemned as a moral flaw since it can be easily fixed by the truth. To go back to the food analogy, imagine that Jane prefers Big Macs because she thinks they are healthy and refuses to eat avocadoes because she thinks they are unhealthy. Given what she thinks, it is reasonable for her to eat Big Macs and avoid avocadoes. If she knew the truth, she would change her eating habits since she wants to eat healthy—she is merely ignorant. Likewise, if Jane believed that black men are all uneducated thugs, then it would seem reasonable for her not to want to date a black man given what she thinks she knows. If she knew the truth, her view would change. As such, she is not prejudiced—just ignorant. It is also worth considering that an apparent prejudice is a real prejudice—that the person would either refuse to accept facts or would still maintain the same behavior in the face of the facts. As an example, suppose that Sam thinks that white people are all complete racists and thus refuses to even consider dating a white person on this basis. While it is often claimed that everyone is racist, it is clear that not all white people are complete racists. As such, if Sam persisted in his belief or behavior in the face of the facts, then it would be reasonable to condemn him for his prejudices. Finally, it might even be the case that the alleged prejudice is actually rational and well founded. To use a food analogy, a person who will not eat raw steak because she knows the health risks is not prejudiced but quite reasonable. Likewise, a person who will not date a person who is a known cheater is not prejudiced but quite rational. The question at this point is where does age fit in regard to the above considerations.
The easy and obvious answer is that it can fall into all three. If a person’s dating decisions are based on incorrect information about age, then they have made an error of ignorance. If a person’s decisions are based on mere prejudice, then they have made a moral error. But, if the decision regarding age and dating is rational and well founded, then the person would have made a good decision. As should be suspected, the specifics of the situation are what matter. That said, there are some general categories relating to age that are worth considering. Being fifty, I am considering these matters from the perspective of someone old. Honesty compels me to admit that I am influenced by my own biases here and, as my friend Julie has pointed out, older men are full of delusions about age. However, I will endeavor to be objective and will lay out my reasoning for your assessment. The first is the matter of health. In general, as people get older, their health declines. For example, older people are more likely to have colon cancer—hence people who are not at risk do not get colonoscopies until fifty. Because of this, it is quite reasonable for a younger person to be concerned about dating someone older—that person is more likely to get ill. That said, an older person can be far healthier than a younger person. As such, it might come down to whether or not a person looks at dating options broadly in terms of categories of people (such as age or ethnicity) or is more willing to consider individuals who might differ from the stereotypes of said categories. Using categories does help speed up decisions, although doing so might result in missed opportunities. But, there are billions of humans—so categories could be just fine if one wants to narrow their focus. While an older person might not be sick, age does weaken the body. For example, I remember being bitterly disappointed by a shameful 16:28 5K in my youth. Now I have to struggle to maintain that pace for a half mile.
Back then I could easily do 90-100 miles a week; now I do 50-60. Time is cruel. For those who are concerned about a person’s activity levels, age is clearly a relevant factor, and it provides a basis for not dating an older (or younger) person, a basis that is neither an error nor a prejudice. However, an older person can be far more fit and active than a younger person—so that is worth considering before rejecting an entire category of people. Life expectancy is also part of the health concerns. A younger person interested in a long term relationship would need to consider how long that long term might be and this would be quite rational. To use an obvious analogy, when buying a car, one should consider the miles on it. Women also live longer than men, so that is a consideration as well. Since I am a fifty-year-old American living in Florida, the statistics say I have about 26 years left. Death sets a clear limit to how long term a relationship can be. But, life expectancy and quality of life are influenced by many factors and they might be worth considering. Or not. Because, you know, death. The second broad category is that of interests and culture. Each person is born into a specific temporal culture and that shapes her interests. For example, musical taste is typically set in this way and older folks famously differ in their music from younger folks. What was once rebellious rock becomes a golden oldie. Fashion is also very much a matter of time, although styles have a weird way of cycling back into vogue, like those damn bell bottoms. Thus people who differ in age are people from different cultures and that presents a real challenge. An old person who tries to act young typically only succeeds in appearing absurd. One who does not try will presumably not fit in with a younger person. So, either way is a path to failure. Epic failure. There is also the fact that interests change as a person gets older.
To use some stereotypes, older folks are supposed to love shuffleboard and bingo while the youth are now into extreme things that would presumably kill or baffle old people, like virtual reality and Snapchat. Party behavior also differs. Young folks go to parties to drink, talk about their jobs and get laid. Older folks go to parties to drink, talk about their jobs and get laid. These are radical differences that cannot be overcome. It could be countered that there can be shared interests between people of different ages and that a lack of shared interests is obviously not limited to those who differ in age. The response is that perhaps the age difference would generally result in too much of a difference in interests, thus making avoiding dating people who differ enough in age rational and reasonable. The third broad category is concerns about disparities in power. An older adult will typically have a power advantage over a younger adult and this raises moral concerns regarding exploitation (there is also a reverse concern: that a younger person will exploit an older person). Because of this, a younger adult should be rightly concerned about being at a disadvantage relative to an older person. Of course, this concern is not just limited to age. If the concern about power disparity is important, then it would also apply to disparities in education, income, abilities and intelligence between people in the same age group. That said, the disparities would tend to be increased with an age difference. As such, it is reasonable to be concerned about this factor. 
The fourth broad category is what could be called the “ick factor.” While there is considerable social tolerance for rich old men having hot young partners, people dating or attempting to date outside of their socially defined age categories are often condemned because it is seen as “icky” or “gross.” When I was in graduate school, I remember people commenting on how gross it was for old faculty to hook up with young graduate students. Laying aside the exploitation and unprofessionalism, it did seem rather gross. As such, the ick argument has considerable appeal. But, there is the question of whether the perceived grossness is founded or not. On the one hand, it can be argued that grossness is in the eye of the beholder or that grossness is set by social norms and these serve as proper foundations. On the other hand, it could be contended that the perception of grossness is a mere unfounded prejudice. On the third hand, the grossness could be cashed out in terms of the above categories. For example, it is icky for an unhealthy and weak rich man to date a hot, healthy young woman with whom he has no real common interests (beyond money, of course). Fortunately, this is a problem with a clear solution: if you do not die early, you get old. Then you die. Problem solved.

My Amazon Author Page My Paizo Page My DriveThru RPG Page Follow Me on Twitter

The Erosion of the Media
Posted in Ethics, Philosophy, Politics by Michael LaBossiere on August 17, 2016

A free and independent press is rightly considered essential to a healthy democracy. Ideally, the press functions like Socrates’ gadfly—it serves to reproach the state and stir it to life. Also like Socrates, the press is supposed to question those who hold power and reveal what lies they might tell. Socrates was, of course, put to death for troubling the elites of Athens. While some countries do the same with their journalists, a different approach has been taken in the United States.
To be specific, there has been a concerted effort to erode and degrade the free press. While the myth of the noble press is just that, the United States has long had a tradition of journalistic ethics and there have been times when journalists were trusted and respected. Edward R. Murrow and Walter Cronkite are two examples of such trusted and well-respected journalists. Since their time, trust in the media has eroded dramatically.

Some of this erosion is self-inflicted. While the news is supposed to be objective, there has been an ever-increasing blend of opinion and fact as well as clear partisan bias on the part of some major news agencies. Fox News, for example, serves to openly advance a right-leaning political agenda and shows shamefully little concern for objective journalism. Its counterpart on the left, MSNBC, serves to advance its own agenda. Such partisanship serves to rightly erode trust in these networks, although this erosion tends to be one sided. That is, partisans often put great trust in their own network while dismissing the rival network. Critics of the media can make an argument by example through piling up example after example of bias and untrue claims on the part of specific networks, and it is natural for the distrust to spread broadly. Except, of course, to news sources that feed and fatten one’s own beliefs. A rather useful exercise for people would be to apply to the claims made by news sources they like the same level of skepticism and criticism they apply to the claims made by the news sources they dislike. If, for example, those who favor Fox News greeted its claims with the same skepticism they apply to the media of the left, they would become much better critical thinkers and be closer to the truth.

While the news has always been a business, it is now primarily a business that needs to make money. This has had an eroding effect in many ways. One impact is that budget cuts have reduced real investigative journalism down to a mere skeleton.
This means that many things remain in the shadows and that the news agencies have to rely on being given the news from sources that are often biased. Another impact is that the news has to attract viewership in order to get advertising. This means that the news has to appeal to the audience and avoid conflicts with the advertisers. This serves to bias the news. The public plays a clear role in this erosion by preferring a certain sort of “news” over actual serious journalism. We can help solve this problem by supporting serious journalism and rewarding news sources that do real reporting.

Much of the erosion of journalism comes from the outside and is due to a concerted war on the press and truth. As a matter of historical fact, this attack has come from the political right. The modern efforts to create distrust of the media by claiming it has a liberal bias go back at least to the Nixon administration and continue to this day. Sarah Palin seems to have come up with the mocking label of “lamestream media” as part of her attacks on the media for having the temerity to report things that she actually said and to indicate when she said things that were not true. It is not surprising that she has defended Donald Trump from the media’s efforts to inform the public when Trump says things that are untrue.

Given this long history of fighting the press, it is not surprising that the right has developed a set of weapons for battling the press. One approach, exemplified by Sarah Palin’s “lamestream media” approach, is to simply engage in ad hominems and the genetic fallacy. In the case of ad hominems, individual journalists are attacked and this is taken as refuting their criticisms. Such attacks, obviously, do nothing to refute the claims made by journalists (or anyone). In the case of the genetic fallacy, the tactic is to simply attack the media in general for an alleged bias and conclude, fallaciously, that the claims made have been thus refuted.
This is not to say that there cannot be legitimate challenges to credibility, but this is rather a different matter from what is actually done. For example, someone spinning for Trump might simply say the media is liberally biased and favors Hillary and thus they are wrong when they claim that Trump seems to have suggested someone assassinate Hillary Clinton. While it would be reasonable to consider the possibility of bias, merely bashing the media does nothing to disprove specific claims.

Another standard tactic is to claim that the media never criticizes liberals—that is, the media is unfair. For example, when Trump is called out for saying untrue things or criticized for claiming that Obama founded Isis, his defenders rush to claim that the media does not criticize Hillary for her remarks or point out when she is lying. While an appeal for fair play is legitimate, even such an appeal does not serve to refute the criticisms or prove that what Trump said is true. There is also the fact that the press does criticize the left and does call out Hillary when she says untrue things. Politifact has a page devoted to Trump, but also one for Hillary Clinton. While Hillary does say untrue things, she gets accused of this less than Trump on the very reasonable grounds that he says far more untrue things. To use an analogy, to cry foul regarding Trump’s treatment would be like a student who cheats relentlessly in class complaining that another student, who cheats far less, does not get in as much trouble. The obvious reply is that if one cheats more, one gets in more trouble. If one says more untrue things, then one gets called on it more.

Not surprisingly, those who loathe Hillary or like Trump will make the claim that fact checkers like Politifact are biased because they are part of the liberal media. This creates a rather serious problem: any source used to show that the “liberal media” has the facts right will be dismissed as being part of the liberal media.
Likewise, any support for criticisms made by this “liberal media” will also be rejected by claiming the source is also part of the liberal media. Bizarrely, even when there is unedited video evidence of, for example, something Trump said, this defense will still be used. While presented as satire by Andy Borowitz (clearly a minion of the liberal media), the fact is that Trump regards the media as unfair because it actually reports what he actually says.

While the erosion of the media yields short-term advantages for specific politicians, the long-term consequences for the United States are dire. One impact of the corrosion of truth is that politicians are ever more able to operate free of facts and criticism—thus making politics almost entirely a matter of feelings unanchored in reality. Since reality always has its way eventually, this is disastrous.

What is being done to the media can be seen as analogous to the poisoning of the village watchdogs by a villager who wishes to engage in some sneaky misdeeds at night and needs the dogs to be silent. While this initially works out well for the poisoner, the village will be left unguarded. Likewise, poisoning the press will allow very bad people to slip by and do very bad things to the public. While, for example, Trump’s spinning minions might see the advantage in attacking the press for the short-term advantage of their candidate, they also clear a path for whatever else wishes to avoid the light of truth. Those on the left who go after the media also deserve criticism to the degree they contribute to the erosion. The spurning of truth is thus something we should be very worried about. Merlin, in Excalibur, put it very well: “when a man lies, he murders some part of the world.” And without a healthy press, people will get away with murder.
Trump & Racist Remarks
Posted in Ethics, Law, Philosophy by Michael LaBossiere on August 12, 2016

When Trump began his bid for the Presidency in 2015, it was largely dismissed as a joke. He then trounced his Republican opponents. So as to not let them forget their shame, Trump still occasionally takes shots at his fallen rivals. As this is being written, Trump has a very real chance of winning the election, sending Hillary Clinton’s dream of being the first female president into the flaming dumpster of history. Trump’s success was a shock to the elites of many realms, from the top pundits to the Republican leadership. Liberal intellectuals, who once mocked Trump with witty remarks between sips of their gluten-free lattes, are now mopping the sweat from their fevered brows with woven hemp handkerchiefs. Sane commentators predicted, with each horrific spew from Trump’s word port, that Trump would be brought down with a huge and luxurious self-inflicted wound. Now the sane commentators have gazed into the mouth of madness and have accepted that there seems to be nothing that Trump can say that would derail the onslaught of the Trumpernaut. Trump’s run, win or lose, will be a treasure trove for many dissertations in psychology, political science and other fields as thinking people try to analyze this phenomenon from the perspective of history.

There is, of course, considerable speculation about the foundation for Trump’s success. Or, more accurately, his lack of failure. As someone who teaches critical thinking, one of the most striking things about Trump’s success is that many of the reasons Trump supporters give for supporting Trump are objectively unfounded in reality.
One of the main mantras of Trump backers is that Trump “tells it like it is.” The usual meaning of these words is that a person is saying what is true. After all, “like it is” is supposed to refer to what the world in fact is and not what is not. As a matter of objective fact, Trump rarely “tells it like it is.” The proof of this can be found on Trump’s Politifact page. 4% of Trump’s claims have been evaluated as true and 11% as mostly true. This is hardly telling it like it is. Yet, Trump supporters persist in claiming that he tells it like it is, despite the fact that he does not.

One possible explanation is that his supporters believe his claims. If so, they would certainly think that he tells it like it is. This would require either never making an inquiry into the truth of Trump’s claims or refusing to accept the inquiries that have been made. Trump has, of course, availed himself of a sword forged and often wielded by other Republicans, which is the attack on the “liberal media” as biased. This allows any assessment of Trump’s claims to be dismissed. Another possibility is that their use of the phrase is meaningless, a mere parroting of Trump’s talking point. This would be analogous to the repetition of other empty advertising slogans, like “it gets clothes brighter than bright” or, for those more cynical than I, “hope and change.” If someone is asked why they back Trump, they typically feel the need to present a reason, and this empty saying no doubt pops into the mind.

His supporters also claim that they back him because of his great business success. While it is true that the Trump brand is known worldwide, it is not clear that he has been a great success in business. Newsweek, which was once a success itself, has done a rundown of Trump’s many business failures. While it is true that Trump’s people have skillfully used the bankruptcy laws and threats of lawsuits, this seems to be rather different from the sort of business success that people attribute to him.
Some critics have speculated that Trump is refusing to release his tax forms (which he can release—the IRS does not forbid people who are being audited from releasing their forms) because they would show he is not as wealthy as he claims. This is, of course, speculation, and Trump could have other good reasons for not releasing the forms. Of course, some might make use of the classic cry of “what is he hiding?” Trump can, obviously, claim to be something of a success: he is world famous and clearly has his name on many things.

Trump supporters also use the talking point that Trump is not politically correct. This is true—Trump relentlessly says things that horrify and terrify the guardians of political correctness. To those who are tired of the political correctness enforcers, this is very appealing. However, Trump goes far beyond not being politically correct and, some would claim, he heads into racism and sexism. This has suggested to some critics that Trump’s backers are racists and sexists who like what he has to say. He also routinely crosses boundaries of decency that, until Trump, most Americans thought no candidate (or decent human being) would cross. The latest example is his battle with the Khan family, whose son was an Army captain killed in Iraq. Normally a savage attack on a Gold Star family would be a death blow to a candidate. However, while Trump’s backers often condemn his remarks, they stick with him. One possibility is that although they condemn his remarks in public, they secretly agree with these claims. Another possibility is that the offenses are condemned but are not regarded as serious enough to break the deal. This would, of course, require that there be other motives to support Trump.

For many, the best reason to back Trump is that he is not Hillary Clinton. As pundits like to point out, Trump and Hillary have record high unfavorable ratings.
There are also people who are party loyalists (or at least party pragmatists) who support Trump because he is the Republican candidate. Interestingly, Trump is also attracting support from voters who have traditionally backed the Democrats—that is, working class whites.

A final talking point used by Trump supporters is that he is against the elites. This is amazing in its irony: Trump was born into wealth and has always been among the moneyed elites. That said, Trump does have a persona that some would regard as crude and non-elite. Trump is tapping into a very real sense of anger and desperation among Americans who believe, with complete correctness, that they have largely been abandoned by the elites.

I certainly get this. I am from Old Town, Maine—a very small town that relied on the paper mill for employment and tax revenue. After ownership of the mill shifted a few times, the last owner shut down operations, presumably going overseas. When I was a kid, the mill smelled bad—which my dad called the “smell of money.” That smell is now gone, and my hometown is struggling. My dad said that there are about fifty abandoned houses in town, and on my runs I saw many empty houses—including the house I grew up in. Meanwhile, we get to see app billionaires on the Late Show with Stephen Colbert talk about their billions. Those who dig into the numbers see that the elites have consistently gotten their way at the expense of the rest of us, that the economic success at the top has not trickled down, and that we will be worse off than our predecessors. Our elites have failed us and we have failed by making them our elites.

Trump, the elite billionaire who got his start with a “little loan” of a million dollars from his father, is able to somehow tap into this anger, most likely because Hillary is clearly identified with the elites that have failed us so badly. That is, Trump is seen as the only viable option, the only voice for the non-elite.
This itself is a sign of the failure of our elites—that so many people regard Trump as their only hope. Or perhaps they see him as someone who will burn it all in an act of vengeance against the elites. While I do understand the rage against the failures of the elite and get that Hillary is the elitist of the elite, Trump is not the savior of America. Voting for Hillary is essentially voting for more of the same. But voting for Trump is to vote for disaster.

Dating II: Are Relationships Worth It?
Posted in Ethics, Philosophy, Relationships/Dating by Michael LaBossiere on August 10, 2016

My long-term, long-distance relationship recently came to an amicable end, thus tossing me back into the world of dating. Philosophers, of course, have two standard responses to problems: thinking or drinking. Since I am not much for drinking, I have been thinking about relationships. Since starting and maintaining a relationship is a great deal of work (and if it is not, you are either lucky or doing it wrong), I think it is important to consider whether relationships are worth it.

One obvious consideration is the fact that the vast majority of romantic relationships end well before death. Even marriage, which is supposed to be the most solid of relationships, often ends in divorce. While there are many ways to look at the ending of a relationship, I think there are two main approaches. One is to consider the end of the relationship a failure. One obvious analogy is to writing a book and not finishing: all that work poured into it, yet it remains incomplete. Another obvious analogy is with running a marathon that one does not finish—great effort expended, but in the end just failure. Another approach is to consider the ending more positively: the relationship ended, but was completed. Going back to the analogies, it is like completing that book you are writing or finishing that marathon.
True, it has ended—but it is supposed to end. When my relationship ended, I initially looked at it as a failure—all that effort invested and it just came to an end one day because, despite two years of trying, we could not land academic jobs in the same geographical area. However, I am endeavoring to look at it in a more positive light—although I would have preferred that it did not end, it was a very positive relationship, rich with wonderful experiences, and it helped me to become better as a human being.

There still, of course, remains the question of whether or not it is worth being in another relationship. One approach to addressing this is the ever-popular context of biology and evolution. Humans are animals that need food, water and air to survive. As such, there is no real question about whether food, water and air are worth it—one is simply driven to possess them. Likewise, humans are driven by their biology to reproduce and natural selection seems to have selected for genes that mold brains to engage in relationships. As such, there is no real question of whether they are worth it; humans merely do have relationships.

This answer is, of course, rather unsatisfying since a person can, it would seem, make the choice to be in a relationship or not. There is also the question of whether relationships are, in fact, worth it—this is a question of value, and science is not the realm where such answers lie. Value questions belong to such areas as moral philosophy and aesthetics. So, on to value.

The question of whether relationships are worth it or not is rather like asking whether technology is worth it or not: the question is extremely broad. While some might endeavor to give sweeping answers to these broad questions, such an approach would seem problematic and unsatisfying. Just as it makes sense to be more specific about technology (such as asking if nuclear power is worth the risk), it makes more sense to consider whether a specific relationship is worth it.
That is, there seems to be no general answer to the question of whether relationships are worth it or not; it is a question of whether a specific relationship would be worth it.

It could be countered that there is, in fact, a legitimate general question. A person might regard any likely relationship as not worth it. For example, I know several professionals who have devoted their lives to their careers and have no interest in relationships—they do not consider a romantic involvement with another human being to have much, if any, value. A person might also regard a relationship as a necessary part of their well-being. While this might be due to social conditioning or biology, there are certainly people who consider almost any relationship worth it.

These counters are quite reasonable, but it can be argued that the general question is best answered by considering specific relationships. If no specific possible (or likely) relationship for a person would be worth it, then relationships in general would not be worth it. So, if a person honestly considered all the relationships she might have and rejected all of them because their value is not sufficient, then relationships would not be worth it to her. As noted above, some people take this view. If at least some possible (or likely) relationships would be worth it to a person, then relationships would thus be worth it. This leads to what is an obvious point: the worth of a relationship depends on that specific relationship, so it comes down to weighing the negative and positive aspects. If there is a sufficient surplus of positive over the negative, then the relationship would be worth it.

As should be expected, there are many serious epistemic problems here. How does a person know what would be positive or negative? How does a person know that a relationship with a specific person would be more positive or more negative? How does a person know what they should do to make the relationship more positive than negative?
How does a person know how much the positive needs to outweigh the negative to make the relationship worth it? And, of course, many more concerns. Given the challenge of answering these questions, it is no wonder that so many relationships fail. There is also the fact that each person has a different answer to many of these questions, so getting answers from others will tend to be of little real value and could lead to problems. As such, I am reluctant to answer them for others, especially since I cannot yet answer them for myself.

Dating I: Spotting Fake Profiles
Posted in Philosophy, Relationships/Dating by Michael LaBossiere on August 8, 2016

After my long-term, long-distance relationship came to an amicable (albeit unexpected) end, I was thrown back into the dumpster fire that is dating. Since this is the 21st century, I signed up for Match.com. This was against my usual good judgment, but breakups are like politics: they make people stupid. As I expected, the process of online dating is largely a matter of avoiding scams. These range from attempts to lure people to porn sites to more elaborate dating scams. Simple scammers rarely email; they try to lure people with the free winks, free likes and by making you a favorite. For this essay, I’ll focus on the simplest of scamming techniques, the fake profile. While I will not cover all the ways to spot one, I will offer what I hope will be some useful advice from the perspective of philosophy.

I’ll begin at the top of the profile. Match and other sites have users create a profile name, such as Lovecatsmorethanmen88 (which might be a real profile, if so I apologize). While fake profiles can have names that are indistinguishable from the real ones, there are two main giveaways. The first is a name that is a phone number, such as txtme86753089. The second is a name that tries to give an email address, such as scam_gmal.
While some real users might try to save a few bucks this way, that is presumably very rare.

The photos also serve as a good indicator for scams. If a person has a single photo of a beautiful person, there is a good chance it is a fake. After all, everyone has a smart phone and can take unlimited pictures. Loading many photos takes time and scammers presumably need to crank out fake profiles. That said, there are real users who have just one picture—so the one-photo clue is not decisive. Unusually provocative photos are also an indicator that the profile is a fake, but this is not a guarantee—presumably real users are not averse to using some raw sex appeal.

A rather obvious indicator is the use of stock photos taken from the web. In some cases, the faker makes it easy by leaving the “watermarks” in place. For less obvious cases, you can right-click in Chrome and do a Google image search. While this does not work all the time, it can reveal some obvious fakes. This can also help with photos stolen from people—a common practice on dating sites. An extremely obvious indicator is a photo with text saying something like “text me 8675309” or “email me at scammster@scam.com.” As with the profile name, some real users might do this; but it is most likely a scam.

Photos of an extremely beautiful person might indicate a scam—scammers do not use ugly photos as their bait. However, there are presumably some real profiles of people who are really beautiful. While it might hurt your ego, it is worth matching up the beauty of the person who has winked at you with your own appearance (and income, of course). It is also smart to look for inconsistencies between the picture and the profile: check to see if the age, body type and so on match up. For example, a photo of a hot 20-something on a profile for a 40-year-old is likely to be a scam. That said, some people look awesome for their age…and people often post photos that are 5-10 years old (which is another form of deceit).
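The giveaways described above (phone numbers or email fragments smuggled into a profile name or photo caption) are mechanical enough to sketch as a rough heuristic in code. This is only a minimal illustration; the patterns, the function name, and the example inputs are my own assumptions, not anything Match.com or any other site actually uses.

```python
import re

# Hypothetical heuristics for the two profile-name giveaways discussed above:
# (1) a long run of digits that looks like a phone number, and
# (2) fragments that look like a smuggled email address.
PHONE_PATTERN = re.compile(r"\d{7,}")  # seven or more digits in a row
EMAIL_PATTERN = re.compile(r"(gmail|gmal|yahoo|hotmail|@)", re.IGNORECASE)

def looks_suspicious(profile_name: str) -> bool:
    """Flag profile names that appear to smuggle in contact details."""
    # Strip non-digits first, so "txt.me.867.530.9" style names are caught too.
    digits_only = re.sub(r"\D", "", profile_name)
    if PHONE_PATTERN.search(digits_only):
        return True
    if EMAIL_PATTERN.search(profile_name):
        return True
    return False

print(looks_suspicious("txtme86753089"))         # phone number embedded in the name
print(looks_suspicious("scam_gmal"))             # email-provider fragment
print(looks_suspicious("Lovecatsmorethanmen88")) # ordinary profile name
```

A real filter would need far more signals (stock-photo matching, profile completeness, text quality), but even this toy version shows why scammers who embed contact details are the easiest to catch.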
The text of a profile is also a good indicator of whether it is a scam or not. The scammers creating fake profiles are not going to spend a long time crafting a profile—they will only have a little text. The text also tends to be full of spelling and grammatical errors. They also often include an email address. For example, here is the text from what is almost certainly a fake profile: “I am looking for man who is serious in relations and reliable, words from his lips are materialized and his acts are saying more about his attitude to life. I can give my shoulder in rainy day and it’s normal for me. write please my e mail Remeda1997 gma. Mutual support, sharing bad and funny moments and looking on one page – the best what can hold both love birds ever! Age difference is not matter for me!”

Short profile texts are also common in legitimate profiles, as are bad spelling and poor grammar. However, legitimate profiles will tend to be less obviously awkward in the use of the language. Scam profiles often have a certain feel to them—for example, they tend to promise (in awkward wording) all sorts of wonderful things (like “looking on one page”). They also tend to be a bit too accepting (“Age difference is not matter for me!”). More sophisticated scammers probably copy and paste from real profiles, which makes them harder to spot.

Another indicator is a profile that has not been completed. As noted above, simple scammers favor quantity over quality and spending too much time completing a profile is not an effective use of their time (or, more likely, the time of their minions). This is, however, not decisive: real users sometimes leave their profiles incomplete. There has also been some analysis of how scammers complete profiles: 83% claim to be Catholic, 63% claim to be widowers, 37% claim graduate degrees, 54% claim doctorates, and 36% claim to be Native Americans. 25% claim to work as engineers and 23% claim to be self-employed.
These are, of course, not decisive—but they do provide some interesting insight into the approach to scamming. It is also important to note that this analysis was done by a specific site—there are bound to be differences between sites. As such, you should not assume that a Mohawk Catholic widower with a PhD in electrical engineering is a faker. But it is worth considering if there are other signs.

If you get a wink or like, or have your profile favorited, by a suspicious profile, the easiest and smartest response is to not respond or, at the very least, wait a while. Fake profiles are sometimes removed by the service (I have seen this happen many times myself). An actual person who is interested will probably email. While it should be needless to say, you should never send a text or email to a profile that tries to sneak in a phone number or address—those are almost certainly fake profiles. If you do get an email that immediately asks you to send a text or email outside the service, then it is likely a scam. Yes, online dating is awful and probably best avoided.
Children's hospital

A children's hospital is a hospital which offers its services exclusively to children and adolescents. Most children's hospitals can serve children from birth up to the age of 18, or in some instances, children's hospitals' doctors may treat children until they finish high school. The number of children's hospitals proliferated in the 20th century, as pediatric medical and surgical specialties separated from internal medicine and adult surgical specialties. Children's hospitals are characterized by greater attention to the psychosocial support of children and their families. Some children and young people have to spend relatively long periods in hospital, so having access to play and teaching staff can also be an important part of their care.[1] With local partnerships this can include trips to local botanical gardens, zoos, and public libraries, for instance.[2]

In addition to psychosocial support, children's hospitals have the added benefit of being staffed by professionals who are trained in treating children. In New Zealand, vocational training in paediatrics is undertaken through the Royal Australasian College of Physicians (RACP). Once RACP training is completed, the doctor is awarded the Fellowship of the RACP (FRACP) in paediatrics.[3] While many general hospitals can treat children adequately, pediatric specialists may be a better choice when it comes to treating rare afflictions that may prove fatal or severely detrimental to young children, in some cases before birth. Also, many children's hospitals will continue to see children with rare illnesses into adulthood, allowing for a continuity of care.

Christian influence and church-based care

Similar to the creation of hospitals, the creation of children's hospitals was largely carried out by Christians and churches. Many Christians and churches saw it as part of their faith and duty to help the poor and sick.
It was their duty not only to help the sick and poor, but also to help the members of society who were most vulnerable. Children were deemed vulnerable and victims of their situation largely because they could do very little to alter their economic circumstances. Prior to the creation of children's hospitals, factions of the church and their members helped establish dispensaries to provide care for those in need. The dispensary movement and the push to cure the sick and poor in urban areas was also backed by Christian organizations.[4] Urban areas were the main focus of early reformers because of their tendency to be overcrowded and their high mortality rates. Orphanages, another form of early child saving, were often run by churches, priests, nuns, and other members of the religious community. The founder of the first recognized foundling hospital, Thomas Coram, was a devout Christian. As science advanced, many Christians were credited with making important medical discoveries. Early children's hospitals were usually founded by physicians along with Christian women, who were mainly in charge of fundraising and of caring for children alongside the physicians. Early Voluntary Care Prior to the 19th-century hospital reforms, the well-being of the child was thought to be in the hands of the mother; therefore, there was little discussion of children's medicine and, as a result, next to no widespread formal institutions focused on healing children. There were, however, centres which focused on helping abandoned children and offering care in hopes that these children might survive into adulthood. Some examples include orphanages, dispensaries, and foundling hospitals. Florence's Hospital of the Innocents (Ospedale degli Innocenti) was originally a charity-based orphanage which opened in 1445; its aim was to nurse sick and abandoned infants back to health.
A later and better-established institution whose goal was to help rehabilitate infants was the Foundling Hospital founded by Thomas Coram in 1741. Foundling hospitals were set up to receive abandoned infants, nurse them back to health, teach them a trade or skill, and integrate them back into society. Coram's foundling hospital was revolutionary because it was one of the United Kingdom's first children's charities.[5] Moreover, it was largely made successful by the powerful people who donated money to the hospital.[6] Coram's hospital would eventually be faced with the fact that the number of infants needing care outweighed the hospital's capacity. In order to accommodate the number of children in need, there were attempts to set up similar hospitals throughout the UK; they ultimately were unsuccessful due to lack of funding.[6] Simultaneously, dispensaries, which were also funded by donations, were being opened to provide medicine and medical attention to those who could not afford private care. Dispensaries and foundling hospitals were the earliest forms of what would later become children's hospitals. The establishment of the Foundling Hospital by Thomas Coram was a direct response to the high infant mortality rate in London, England. Although foundling hospitals acknowledged the high infant mortality rate, infant mortality would not be addressed in a widespread way until the early 19th century, when children's hospitals began to open in Vienna, Moscow, Prague, Berlin, and various other major cities throughout Europe.[7] 19th-century models In America, by the mid-19th century, middle-class women and physicians became increasingly concerned about the well-being of children in poor living conditions. Although infant mortality had begun to decline, it remained a prominent issue.
Social reformers blamed the emergence of industrial society and poor parents for not properly caring for their children.[8] In response, reformers and physicians founded children's hospitals across the country. Early children's hospitals were set up in converted houses, not only to help children transition from leaving their home to being in a hospital, but also because such houses were often the only space available.[8] Early children's hospitals focused more on short-term care and treating mild illnesses than on long-term intensive care. Treating serious diseases and illnesses in early children's hospitals could result in the disease spreading throughout the hospital, draining already limited resources. A serious disease outbreak in a children's hospital would result in more deaths than lives saved and would therefore reinforce the prevailing notion that people often died while in the hospital.[8] Like those found in the United States, children's hospitals in the United Kingdom in the 19th century often resembled middle-class homes. British children's hospitals introduced rules to which patients and their families were expected to adhere; these rules carefully outlined middle-class values and expectations.[7] British children's hospitals, like their American and Canadian counterparts, relied heavily on donations from the rich. Donations came in the form of money, food, toys, and clothes for the children. The United Kingdom's children's hospitals were soon faced with the reality that their small and vulnerable patients would soon outnumber their resources.[7] To cover the cost of running these new hospitals throughout the United Kingdom, the upper classes needed to market their hospitals as centres for reform.
In order to brand themselves as reformers, they had to contrast themselves against the parents; this meant portraying the poor parents as incompetent.[7] Despite their mission to save children, British hospitals, such as Glasgow's, rarely admitted children under the age of two; such children were deemed costly and in need of constant attention.[7] Similar to the American hospitals, those located in Europe were also hesitant to admit children who required long-term care, fearing that those lives would be lost or that long-term care would block beds for those in immediate need.[7] The intention of the hospitals built in Europe was to provide care for those who could not afford it. Care was primarily provided to those who met the age requirements and were willing to adhere to the hospital's rules. Since early children's hospitals relied on donations, they were often underfunded, overcrowded, and lacking medical resources. The first formally recognized paediatric hospital was the Hôpital des Enfants Malades in Paris, France, which opened in 1802. The United Kingdom was slower to follow, establishing The Great Ormond Street Hospital in London, England, in 1852, the first British children's hospital.[9] The United States followed with The Children's Hospital of Philadelphia in Pennsylvania in 1855.[10] Canada established its first children's hospital, The Hospital for Sick Children in Toronto, Ontario, in 1875; all of these hospitals remain open today.[11] By the end of the 19th century and during the first two decades of the 20th, the number of children's hospitals tripled in both Canada and the United States.[8] The first children's hospital in Scotland opened in 1860 in Edinburgh.[12] Professionalization of Children's Hospitals In the 19th century, there was a societal shift in how children were viewed.
This shift took away some of the parents' control and placed it in the hands of medical professionals.[13] By the early 20th century, a child's health became increasingly tied to physicians and hospitals. Unlike nursing, the medical field professionalized at a greater speed.[14] This was a result of licensing acts, the formation of medical associations, and new fields of medicine being introduced across countries.[14] These new areas of medicine offered physicians the chance to build their careers by "overseeing the medical needs of private patients, caring for and trying new therapies on the sick poor, and teaching medical students."[14] To raise their status further, physicians began organizing children's hospitals; doing so also brought attention and importance to their speciality in the modern health care system.[8] This idea brought about the creation of children's hospitals in Philadelphia, Boston, Washington, D.C., and San Francisco, all of which emphasized children as their focus.[8] Along with specialized physicians, the 20th century brought the replacement of voluntary or religiously associated female care with professionally trained nurses.[15] In addition to separate institutions for children and a professional staff, both medical and technological advancement helped solidify children's hospitals as centres of physical healing. The discovery of vaccines, anaesthetics, and surgical improvements made children's hospitals more reliable and more effective in the treatment of childhood disease and illness. Using hospital discharge data from 2003–2011, the Agency for Healthcare Research and Quality (AHRQ) studied trends in aggregate hospital costs, average hospital costs, and hospital utilization. The Agency found that for children aged 0–17, aggregate costs rose rapidly for surgical hospitalizations and decreased for injury hospitalizations.
Further, average hospital costs, or cost per discharge, increased at least 2% for all hospitalizations and were expected to grow by at least 4% through 2013. The exception was mental health hospitalizations, which saw a smaller increase of 1.2% and were projected to increase only 0.9% through 2013. Despite the rising aggregate costs and costs per discharge, hospitalizations (except for mental health hospitalizations) for children aged 0–17 decreased over the same period and were projected to continue decreasing.[16] In 2006–2011, the rate of emergency department (ED) use in the United States was highest for patients aged under one year but lowest for patients aged 1–17 years. The rate of ED use for patients aged under one year declined over the same period; this was the only age group to see a decline.[17] Between 2008 and 2012, growth in mean hospital costs per stay in the United States was highest for patients aged 17 and younger.[18] In 2012 there were nearly 5.9 million hospital stays for children in the United States, of which 3.9 million were neonatal stays and 104,700 were maternal stays for pregnant teens.[19] Every year US News & World Report ranks the top children's hospitals and pediatric specialties in the United States. For the year 2010–2011, eight hospitals ranked in all 10 pediatric specialties. The ranking system used by US News & World Report depends on a variety of factors. In past years (2007 was the 18th year of pediatric rankings), ranking of hospitals was done solely on the basis of reputation, gauged by random sampling and surveying of pediatricians and pediatric specialists throughout the country. The ranking system is currently under review.[20] 1. ^ McLeish, Jean (4 September 2009). "Where special treatment is just what the doctor ordered". TES Scotland. Retrieved 14 July 2014. 2. ^ 3. ^ 4. ^ Beal-Preston, Rosie (Spring 2000). "The Christian Contribution to Medicine". 5. ^ "About us | Coram". 6.
^ a b Golding, A.M.B. "The life and times of Thomas Coram 1668–1751". Public Health. 121 (2): 154–156. doi:10.1016/j.puhe.2006.09.019. 7. ^ a b c d e f Abreu, Laurinda (2013). Hospital Life: Theory and Practice From the Medieval to the Modern. Peter Lang AG. pp. 209, 210, 213, 214, 220, 222. 8. ^ a b c d e f Sloane, David (October 2005). "Not Designed Merely to Heal: Women Reformers and the Emergence of Children's Hospitals". The Journal of the Gilded Age and Progressive Era. doi:10.1017/S1537781400002747. 9. ^ "Great Ormond Street Hospital Charity". 10. ^ "About the Children's Hospital of Philadelphia". The Children's Hospital of Philadelphia. 11. ^ "History and milestones". 13. ^ Duffin, Jacalyn (2010). History of Medicine: A Scandalously Short Introduction. Toronto: University of Toronto Press. p. 341. ISBN 978-0-8020-9556-5. 14. ^ a b c Connolly, Cindy. "Growth and Development of a Specialty: The Professionalization of Child Health Care." Pediatric Nursing 31, no. 3 (May 2005): 211–215. Academic Search Premier, EBSCOhost. 15. ^ Fass, Paula. Encyclopedia of Children and Childhood: In History and Society. Macmillan Reference USA. p. 176. 20. ^ "Birth of a New Methodology". Avery Comarow. US News and World Report, August 26, 2007. Accessed October 10, 2007.
Rose-breasted grosbeak

Scientific classification: Kingdom Animalia; Phylum Chordata; Class Aves; Order Passeriformes; Family Cardinalidae; Genus Pheucticus; Species P. ludovicianus. Binomial name: Pheucticus ludovicianus (Linnaeus, 1766); synonym Zamelodia ludoviciana. Its North American range comprises a breeding range, a migration-only range, and a wintering range.

The rose-breasted grosbeak (Pheucticus ludovicianus) is a large seed-eating grosbeak in the cardinal family (Cardinalidae). It is primarily a foliage gleaner. It breeds in cool-temperate North America, migrating to tropical America in winter. The genus name Pheucticus is from Ancient Greek pheuktiko, "shy", from pheugo, "to flee", and the specific epithet ludovicianus is from New Latin and refers to Louisiana.[2] Adult birds are 18–22 cm (7.1–8.7 in) long, span 29–33 cm (11–13 in) across the wings and weigh 35–65 g (1.2–2.3 oz).[3][4] Grosbeaks measured during migration in the West Indies averaged 43 g (1.5 oz), while those banded in Pennsylvania averaged about 45 g (1.6 oz).[5][6] There is very little sexual dimorphism in size: females were found to be marginally smaller in standard measurements but, in some seasons, marginally heavier than males when banded in Pennsylvania.[6][7][8] At all ages and in both sexes, the beak is dusky horn-colored, and the feet and eyes are dark.[9] The adult male in breeding plumage has a black head, wings, back and tail, and a bright rose-red patch on its breast; the wings have two white patches and rose-red linings. Its underside and rump are white. Males in nonbreeding plumage have largely white underparts, supercilium and cheeks. The upperside feathers have brown fringes, most wing feathers white ones, giving a scaly appearance.
The bases of the primary remiges are also white. This coloration renders the adult male rose-breasted grosbeak (even while wintering) unmistakable if seen well. The adult female has dark grey-brown upperparts (darker on wings and tail), a white supercilium, a buff stripe along the top of the head, and black-streaked white underparts, which, except in the center of the belly, have a buff tinge. The wing linings are yellowish, and on the upperwing there are two white patches as in the summer male. Immatures are similar, but with pink wing-linings, less prominent streaks, and usually a pinkish-buff hue on the throat and breast. At one year of age, in their first breeding season, males are scaly above like fully adult males in winter plumage, and still retain the immature's browner wings. Unlike males, females can easily be confused with the black-headed grosbeak (Pheucticus melanocephalus) where their ranges overlap in the central United States and south-central Canada. The female rose-breasted grosbeak has slightly darker brown markings on the underside, paler, rather yellowish streaking on both the head and wings, and a paler, pinkish (rather than bi-colored) bill compared with the female black-headed grosbeak.[10] A potential confusion species is also the female purple finch (Haemorhous purpureus), but that species is noticeably smaller, with a less robust bill and a notched tail.[11] The song is a subdued mellow warbling, resembling a more refined, sweeter version of the American robin's (Turdus migratorius). Males start singing early, occasionally even while still in winter quarters. The call is a sharp pink or pick, somewhat reminiscent of a woodpecker call. Range and ecology The rose-breasted grosbeak forages in shrubs or trees for insects, seeds and berries, also catching insects in flight and occasionally eating nectar. It usually keeps to the treetops, and only rarely can be seen on the ground.
In the winter quarters, they can be attracted into parks, gardens, and possibly even to bird feeders by fruit such as Trophis racemosa. Other notable winter foods include Jacaranda seeds and the fruits of the introduced busy Lizzie (Impatiens walleriana).[14] In grosbeaks from the north-central United States and southern Canada, 52% of stomach contents consisted of invertebrates, predominantly beetles; 19.3% of wild fruits; 15.7% of weed seeds; 6.5% of cultivated fruits and plants, including peas, corn (Zea mays), oats (Avena sativa) and wheat (Triticum vulgare); and the remaining 6.5% of other plant material, including tree buds and flowers.[15] Reproductive biology The rose-breasted grosbeak was the only one of 70 migratory songbird species in the eastern United States in which males were shown to have produced sperm while still far south of their breeding location.[16] Male grosbeaks tend to arrive a few days to a week before the females, and pair formation apparently occurs on the breeding grounds.[17] Nest building begins from as early as early May in Tennessee to as late as early June further north in Saskatchewan.[18][19] Egg laying may occur anytime from mid-May to mid-July, as has been recorded in Quebec.[20] Usually only a single brood is raised by these grosbeaks each summer, but second broods are suspected in Canada and confirmed in semi-captivity.[21][22] Both the male and the female apparently participate in selecting and building the nest, which is placed on a tree branch, over vines, or in any elevated woody vegetation.[23] Nests have been recorded at 0.8 to 16.7 m (2.6 to 54.8 ft) off the ground, averaging 6 m (20 ft) high, almost always in the vicinity of openings in woodlands.[24] Nests are typical of many passerines in construction, material and size, made from leaves, twigs, rootlets or hair.[25] Clutches are from 1 to 5 eggs, normally 3–4, pale blue to green with purplish to brownish-red spotting.[26] Males do a third of
the incubation, roughly, with the female doing the rest; incubation lasts 11–14 days.[22] Nestlings weigh 5 g (0.18 oz) at hatching and, after 3–6 days of age, gain at least 3 g (0.11 oz) each day.[21] The young grosbeaks typically fledge at 9–13 days of age and are independent of their parents after approximately 3 weeks.[21][24] Longevity and mortality The maximum lifespan recorded for a wild rose-breasted grosbeak is 12 years, 11 months.[27] Captive grosbeaks have been recorded living up to 24 years of age, making this quite a long-lived passerine once the pressures of surviving in the wild are removed.[28] Although frequently targeted by the brood-parasitic brown-headed cowbird (Molothrus ater), the rose-breasted grosbeak is apparently able to recognize cowbird eggs and has been seen to aggressively displace cowbirds near the nest.[29] Typically, fewer than 7% of grosbeak nests have cowbird eggs per study.[30] Per the U.S. Bird Banding Laboratory, as of 1997, rose-breasted grosbeaks recovered dead had largely collided with objects, including buildings and cars (17.2%), or had been shot (10%; mostly before 1960); 3.6% of the fatalities were caught by cats and 0.8% by dogs. Mortality due to natural causes, including disease, natural predators and inclement weather, goes largely unreported.[31] The main known cause of nesting failure is predation. Natural predators of eggs and nestlings include blue jays (Cyanocitta cristata), common grackles (Quiscalus quiscula), raccoons (Procyon lotor), and gray (Sciurus carolinensis) and red (Tamiasciurus hudsonicus) squirrels.[31][32] Confirmed predators of adults include both Cooper's (Accipiter cooperii)[33] and sharp-shinned hawks (Accipiter striatus)[34] as well as northern harriers (Circus cyaneus),[35] eastern screech-owls (Megascops asio)[36] and short-eared owls (Asio flammeus).[37] Status and comparative ecology Fires are necessary to maintain many kinds of grassland (see Fire ecology).
Fire suppression in the late 20th century allowed forests to spread on the Great Plains into areas where recurring fire would otherwise have maintained grassland. This allowed hybridization with the black-headed grosbeak subspecies P. melanocephalus papago.[38] Range expansions also seem to have occurred elsewhere, for example in northern Ohio, where it bred rarely if at all in the 1900s (decade) but is by no means an uncommon breeder today. In general, though it requires mature woodland to breed and is occasionally caught as a cage bird, the rose-breasted grosbeak is not at all rare and is not considered a threatened species by the IUCN.[1][39] Its average maximum lifespan in the wild is 7.3 years.[40] 2. ^ Jobling, James A. (2010). The Helm Dictionary of Scientific Bird Names. London, United Kingdom: Christopher Helm. pp. 232, 302. ISBN 978-1-4081-2501-4. 3. ^ Rose-breasted Grosbeak, All About Birds 5. ^ Faaborg, J. R. and J. W. Terborgh. 1980. Patterns of migration in the West Indies. In Migrant birds in the Neotropics: ecological, behavior, distribution and conservation. (Keast, A. and E. S. Morton, Eds.) Smithson. Inst. Press, Washington, D.C. 6. ^ a b Clench, M. H. and R. C. Leberman. 1978. Weights of 151 species of Pennsylvania birds analyzed by month, age, and sex. Bull. Carnegie Mus. Nat. Hist. 5. 7. ^ Godfrey, W. E. 1986. The birds of Canada. Rev. ed. Natl. Mus. Nat. Sci. Ottawa, ON. 8. ^ Pyle, P. 1997. Identification guide to North American birds. Pt. 1: Columbidae to Ploceidae. Slate Creek Press, Bolinas, CA. 9. ^ Olson et al. (1981) 10. ^ Morlan, J. 1991. Identification of female Rose-breasted and Black-headed grosbeaks. Birding 23:220-223. 11. ^ "Rose-breasted Grosbeak – Identification". All About Birds – The Cornell Lab of Ornithology. Retrieved 2015-05-15. 13. ^ Henninger (1906), OOS (2004) 14. ^ Foster (2007) 15. ^ Mcatee, W. L. 1908. Food habits of the grosbeaks. U.S. Dep. Agric. Biol. Surv. Bull. 32. 16. ^ Quay, W. B. 1985.
Cloacal sperm in spring migrants: occurrence and interpretation. Condor 87:273-280. 17. ^ Dunham, D. W. 1966. Territorial and sexual behavior in the Rose-breasted Grosbeak, Pheucticus ludovicianus. Z. Tierpsychol. 23:438-451. 18. ^ Nicholson, C. P. 1997. Rose-breasted Grosbeak. Pages 325-327 in Atlas of the breeding birds of Tennessee. (Nicholson, C. P., Ed.) Univ. of Tennessee Press, Knoxville. 19. ^ Macoun, J. and J. M. Macoun. 1909. Catalogue of Canadian birds. Dep. Mines, Geol. Surv. Branch, Ottawa, ON. 20. ^ Pelletier, R. and D. Dauphin. 1996. Rose-breasted Grosbeak. Pages 954-957 in The breeding birds of Quebec: atlas of the breeding birds of southern Québec. (Gauthier, J. and Y. Aubry, Eds.) Assoc. québecoise des groupes d'ornithologues, Prov. of Quebec Soc. for the protection of birds, Can. Wildl. Serv., Environ. Canada, Québec Region, Montréal. 21. ^ a b c Watts, G. E. 1935. Life history of the Rose-breasted Grosbeak (Hedymeles ludoviciana). Master's Thesis. Cornell Univ. Ithaca, NY. 22. ^ a b Peck, G. K. and R. D. James. 1998. Breeding birds of Ontario: nidiology and distribution: passerines (1st rev.-pt. C: tanagers to Old World sparrows). Ont. Birds 16:111-127. 23. ^ Roberts, T. S. 1932. The birds of Minnesota. Univ. of Minnesota Press, Minneapolis. 24. ^ a b Scott, D. M. 1998. Laying hours and other nesting data of Rose-breasted Grosbeaks. Ont. Birds 16:88-93. 25. ^ Baicich, P. J. and C. J. O. Harrison. 1997. A guide to the nests, eggs, and nestlings of North American birds. 2nd ed. Academic Press, San Diego, CA. 26. ^ Best, L. B. and D. F. Stauffer. 1980. Factors affecting nesting success in riparian bird communities. Condor 82:149-157. 27. ^ Klimkiewicz, M. K. and A. G. Futcher. 1987. Longevity records of North American birds: Coerbinae through Estrildidae. J. Field Ornithol. 58:318-333. 28. ^ Bent, A. C. 1968. 
Life histories of North American cardinals, grosbeaks, buntings, towhees, finches, sparrows and allies: Order Passeriformes, Family Fringillidae. U.S. Nat. Mus. Bull. 237. 29. ^ Friesen, L. E., M. D. Cadman, and R. J. Mackay. 1999. Nesting success of Neotropical migrant songbirds in a highly fragmented landscape. Conserv. Biol. 13:338-346. 30. ^ Terrill, L. M. 1961. Cowbird hosts in southern Quebec. Can. Field-Nat. 75:2-11. 31. ^ a b Wyatt, Valerie E. and Charles M. Francis. 2002. Rose-breasted Grosbeak (Pheucticus ludovicianus), The Birds of North America Online (A. Poole, Ed.). Ithaca: Cornell Lab of Ornithology; Retrieved from the Birds of North America Online: [1]. 32. ^ Baird, J. 1964. Hostile displays of Rose-breasted Grosbeak towards a red squirrel. Wilson Bull. 76:286-289. 33. ^ Meng, H. (1959). Food habits of nesting Cooper's Hawks and Goshawks in New York and Pennsylvania. The Wilson Bulletin, 169-174. 34. ^ Ivor, H. R. 1944. Bird study and semi-captive birds: the Rose-breasted Grosbeak. The Wilson Bulletin 56:91-104. 35. ^ Barnard, P., MacWhirter, B., Simmons, R., Hansen, G. L., & Smith, P. C. (1987). Timing of breeding and the seasonal importance of passerine prey to northern Harriers (Circus cyaneus). Canadian Journal of Zoology, 65(8), 1942-1946. 36. ^ VanCamp, L. F., & Henny, C. J. (1975). The screech owl: its life history and population ecology in northern Ohio. North American Fauna, 1-65. 37. ^ Holt, D. W. (1993). Breeding season diet of Short-eared Owls in Massachusetts. The Wilson Bulletin, 490-496. 38. ^ palpago is a lapsus in Rhymer & Simberloff (1996).
Regulations (Standards - 29 CFR) • Part Number: 1910 • Part Title: Occupational Safety and Health Standards • Subpart: Z • Subpart Title: Toxic and Hazardous Substances • Standard Number: 1910.1001 App B • Title: Detailed procedure for asbestos sampling and analysis - Non-Mandatory • GPO Source: e-CFR Appendix B to §1910.1001 -- Detailed Procedures for Asbestos Sampling and Analysis -- Non-Mandatory OSHA Permissible Exposure Limits: Time Weighted Average......................... 0.1 fiber/cc Excursion Level (30 minutes).................. 1.0 fiber/cc Collection Procedure: A known volume of air is drawn through a 25-mm diameter cassette containing a mixed-cellulose ester filter. The cassette must be equipped with an electrically conductive 50-mm extension cowl. The sampling time and rate are chosen to give a fiber density of between 100 and 1,300 fibers/mm(2) on the filter. Recommended Sampling Rate....................... 0.5 to 5.0 liters/minute (L/min) Recommended Air Volumes: 1. Introduction This method describes the collection of airborne asbestos fibers using calibrated sampling pumps with mixed-cellulose ester (MCE) filters and analysis by phase contrast microscopy (PCM). Some terms used are unique to this method and are defined below: Asbestos: A term for naturally occurring fibrous minerals. Asbestos includes chrysotile, crocidolite, amosite (cummingtonite-grunerite asbestos), tremolite asbestos, actinolite asbestos, anthophyllite asbestos, and any of these minerals that have been chemically treated and/or altered. The precise chemical formulation of each species will vary with the location from which it was mined. Nominal compositions are listed: Chrysotile............ Mg(3)Si(2)O(5)(OH)(4) Crocidolite........... Na(2)Fe(3)(2)(+)Fe(2)(3)(+)Si(8)O(22)(OH)(2) Amosite............... (Mg,Fe)(7)Si(8)O(22)(OH)(2) Tremolite-actinolite.. 
Ca(2)(Mg,Fe)(5)Si(8)O(22)(OH)(2) Asbestos Fiber: A fiber of asbestos which meets the criteria specified below for a fiber. Aspect Ratio: The ratio of the length of a fiber to its diameter (e.g., 3:1, 5:1 aspect ratios). Cleavage Fragments: Mineral particles formed by comminution of minerals, especially those characterized by parallel sides and a moderate aspect ratio (usually less than 20:1). Detection Limit: The number of fibers necessary to be 95% certain that the result is greater than zero. Differential Counting: The term applied to the practice of excluding certain kinds of fibers from the fiber count because they do not appear to be asbestos. Fiber: A particle that is 5 um or longer, with a length-to-width ratio of 3 to 1 or longer. Field: The area within the graticule circle that is superimposed on the microscope image. Set: The samples which are taken, submitted to the laboratory, analyzed, and for which interim or final result reports are generated. Tremolite, Anthophyllite, and Actinolite: The non-asbestos forms of these minerals which meet the definition of a fiber. This includes any of these minerals that have been chemically treated and/or altered. Walton-Beckett Graticule: An eyepiece graticule specifically designed for asbestos fiber counting. It consists of a circle with a projected diameter of 100 + or - 2 um (area of about 0.00785 mm(2)) with a crosshair having tic-marks at 3-um intervals in one direction and 5-um in the orthogonal direction. There are marks around the periphery of the circle to demonstrate the proper sizes and shapes of fibers. This design is reproduced in Figure 1. The disk is placed in one of the microscope eyepieces so that the design is superimposed on the field of view. 1.1. History Early surveys to determine asbestos exposures were conducted using impinger counts of total dust, with the counts expressed as million particles per cubic foot. The British Asbestos Research Council recommended filter membrane counting in 1969. 
In July 1969, the Bureau of Occupational Safety and Health published a filter membrane method for counting asbestos fibers in the United States. This method was refined by NIOSH and published as P & CAM 239. On May 29, 1971, OSHA specified filter membrane sampling with phase contrast counting for evaluation of asbestos exposures at work sites in the United States. The use of this technique was again required by OSHA in 1986. Phase contrast microscopy has continued to be the method of choice for the measurement of occupational exposure to asbestos. 1.2. Principle Air is drawn through a MCE filter to capture airborne asbestos fibers. A wedge shaped portion of the filter is removed, placed on a glass microscope slide and made transparent. A measured area (field) is viewed by PCM. All the fibers meeting defined criteria for asbestos are counted and considered a measure of the airborne asbestos concentration. 1.3. Advantages and Disadvantages There are four main advantages of PCM over other methods: (1) The technique is specific for fibers. Phase contrast is a fiber counting technique which excludes non-fibrous particles from the analysis. (2) The technique is inexpensive and does not require specialized knowledge to carry out the analysis for total fiber counts. (3) The analysis is quick and can be performed on-site for rapid determination of air concentrations of asbestos fibers. (4) The technique has continuity with historical epidemiological studies so that estimates of expected disease can be inferred from long-term determinations of asbestos exposures. The main disadvantage of PCM is that it does not positively identify asbestos fibers. Other fibers which are not asbestos may be included in the count unless differential counting is performed. This requires a great deal of experience to adequately differentiate asbestos from non-asbestos fibers. Positive identification of asbestos must be performed by polarized light or electron microscopy techniques. 
A further disadvantage of PCM is that the smallest visible fibers are about 0.2 um in diameter, while the finest asbestos fibers may be as small as 0.02 um in diameter. For some exposures, substantially more fibers may be present than are actually counted. 1.4. Workplace Exposure Asbestos is used by the construction industry in such products as shingles, floor tiles, asbestos cement, roofing felts, insulation and acoustical products. Non-construction uses include brakes, clutch facings, paper, paints, plastics, and fabrics. One of the most significant exposures in the workplace is the removal and encapsulation of asbestos in schools, public buildings, and homes. Many workers have the potential to be exposed to asbestos during these operations. About 95% of the asbestos in commercial use in the United States is chrysotile. Crocidolite and amosite make up most of the remainder. Anthophyllite and tremolite or actinolite are likely to be encountered as contaminants in various industrial products. 1.5. Physical Properties Asbestos fiber possesses a high tensile strength along its axis, is chemically inert, non-combustible, and heat resistant. It has a high electrical resistance and good sound-absorbing properties. It can be woven into cables, fabrics or other textiles, and also matted into asbestos papers, felts, or mats. 2. Range and Detection Limit 2.1. The ideal counting range on the filter is 100 to 1,300 fibers/mm(2). With a Walton-Beckett graticule this range is equivalent to 0.8 to 10 fibers/field. Using NIOSH counting statistics, a count of 0.8 fibers/field would give an approximate coefficient of variation (CV) of 0.13. 2.2. The detection limit for this method is 4.0 fibers per 100 fields or 5.5 fibers/mm(2). This was determined using an equation to estimate the maximum CV possible at a specific concentration (95% confidence) and a Lower Control Limit of zero. 
The CV value was then used to determine a corresponding concentration from historical CV vs fiber relationships. As an example: Lower Control Limit (95% Confidence) = AC - 1.645(CV)(AC) AC = Estimate of the airborne fiber concentration (fibers/cc) Setting the Lower Control Limit = 0 and solving for CV: 0 = AC - 1.645(CV)(AC) CV = 0.61 This value was compared with CV vs. count curves. The count at which CV = 0.61 for Leidel-Busch counting statistics or for an OSHA Salt Lake Technical Center (OSHA-SLTC) CV curve (see Appendix A for further information) was 4.4 fibers or 3.9 fibers per 100 fields, respectively. Although a lower detection limit of 4 fibers per 100 fields is supported by the OSHA-SLTC data, both data sets support the 4.5 fibers per 100 fields value. 3. Method Performance -- Precision and Accuracy Precision is dependent upon the total number of fibers counted and the uniformity of the fiber distribution on the filter. A general rule is to count at least 20 and not more than 100 fields. The count is discontinued when 100 fibers are counted, provided that 20 fields have already been counted. Counting more than 100 fibers results in only a small gain in precision. As the total count drops below 10 fibers, an accelerated loss of precision is noted. At this time, there is no known method to determine the absolute accuracy of the asbestos analysis. Results of samples prepared through the Proficiency Analytical Testing (PAT) Program and analyzed by the OSHA-SLTC showed no significant bias when compared to PAT reference values. The PAT samples were analyzed from 1987 to 1989 (N = 36) and the concentration range was from 120 to 1,300 fibers/mm(2). 4. Interferences Fibrous substances, if present, may interfere with asbestos analysis. 
Some common fibers are: plant fibers, perlite veins, some synthetic fibers, membrane structures, and sponge spicules. Electron microscopy or optical tests such as polarized light and dispersion staining may be used to differentiate these materials from asbestos when necessary. 5. Sampling 5.1. Equipment 5.1.1. Sample assembly (The assembly is shown in Figure 3). Conductive filter holder consisting of a 25-mm diameter, 3-piece cassette having a 50-mm long electrically conductive extension cowl. Backup pad, 25-mm, cellulose. Membrane filter, mixed-cellulose ester (MCE), 25-mm, plain, white, 0.4 to 1.2-um pore size. 1. Do not re-use cassettes. 2. Fully conductive cassettes are required to reduce fiber loss to the sides of the cassette due to electrostatic attraction. 3. Purchase filters which have been selected by the manufacturer for asbestos counting or analyze representative filters for fiber background before use. Discard the filter lot if more than 4 fibers/100 fields are found. 4. To decrease the possibility of contamination, the sampling system (filter-backup pad-cassette) for asbestos is usually preassembled by the manufacturer. 5. Other cassettes, such as the Bell-mouth, may be used within the limits of their validation. 5.1.2. Gel bands for sealing cassettes. 5.1.3. Sampling pump. Each pump must be a battery-operated, self-contained unit small enough to be placed on the monitored employee and not interfere with the work being performed. The pump must be capable of sampling at the collection rate for the required sampling time. 5.1.4. Flexible tubing, 6-mm bore. 5.1.5. Pump calibration. Stopwatch and bubble tube/burette or electronic meter. 5.2. Sampling Procedure 5.2.1. Seal the point where the base and cowl of each cassette meet with a gel band or tape. 5.2.2. Charge the pumps completely before beginning. 5.2.3. Connect each pump to a calibration cassette with an appropriate length of 6-mm bore plastic tubing.
Do not use luer connectors -- the type of cassette specified above has built-in adapters. 5.2.4. Select an appropriate flow rate for the situation being monitored. The sampling flow rate must be between 0.5 and 5.0 L/min for personal sampling and is commonly set between 1 and 2 L/min. Always choose a flow rate that will not produce overloaded filters. 5.2.5. Calibrate each sampling pump before and after sampling with a calibration cassette in-line (Note: This calibration cassette should be from the same lot of cassettes used for sampling). Use a primary standard (e.g. bubble burette) to calibrate each pump. If possible, calibrate at the sampling site. Note: If sampling site calibration is not possible, environmental influences may affect the flow rate. The extent is dependent on the type of pump used. Consult with the pump manufacturer to determine dependence on environmental influences. If the pump is affected by temperature and pressure changes, correct the flow rate using the formula shown in the section "Sampling Pump Flow Rate Corrections" at the end of this appendix. 5.2.6. Connect each pump to the base of each sampling cassette with flexible tubing. Remove the end cap of each cassette and take each air sample open face. Assure that each sample cassette is held open side down in the employee's breathing zone during sampling. The distance from the nose/mouth of the employee to the cassette should be about 10 cm. Secure the cassette on the collar or lapel of the employee using spring clips or other similar devices. 5.2.7. A suggested minimum air volume when sampling to determine TWA compliance is 25 L. For Excursion Limit (30 min sampling time) evaluations, a minimum air volume of 48 L is recommended. 5.2.8. The most significant problem when sampling for asbestos is overloading the filter with non-asbestos dust. Suggested maximum air sample volumes for specific environments are:

Environment | Air vol. (L)
Asbestos removal operations (visible dust) | 100
Asbestos removal operations (little dust) | 240

Caution: Do not overload the filter with dust. High levels of non-fibrous dust particles may obscure fibers on the filter and lower the count or make counting impossible. If more than about 25 to 30% of the field area is obscured with dust, the result may be biased low. Smaller air volumes may be necessary when there is excessive non-asbestos dust in the air. While sampling, observe the filter with a small flashlight. If there is a visible layer of dust on the filter, stop sampling, remove and seal the cassette, and replace with a new sampling assembly. The total dust loading should not exceed 1 mg. 5.2.9. Blank samples are used to determine if any contamination has occurred during sample handling. Prepare two blanks for the first 1 to 20 samples. For sets containing greater than 20 samples, prepare blanks as 10% of the samples. Handle blank samples in the same manner as air samples with one exception: Do not draw any air through the blank samples. Open the blank cassette in the place where the sample cassettes are mounted on the employee. Hold it open for about 30 seconds. Close and seal the cassette appropriately. Store blanks for shipment with the sample cassettes. 5.2.10. Immediately after sampling, close and seal each cassette with the base and plastic plugs. Do not touch or puncture the filter membrane as this will invalidate the analysis. 5.2.11. Attach and secure a sample seal around each sample cassette in such a way as to assure that the end cap and base plugs cannot be removed without destroying the seal. Tape the ends of the seal together since the seal is not long enough to be wrapped end-to-end. Also wrap tape around the cassette at each joint to keep the seal secure. 5.3. Sample Shipment 5.3.2. Secure and handle the samples in such a manner that they will not rattle during shipment nor be exposed to static electricity.
Do not ship samples in expanded polystyrene peanuts, vermiculite, paper shreds, or excelsior. Tape sample cassettes to sheet bubbles and place in a container that will cushion the samples in such a manner that they will not rattle. 5.3.3. To avoid the possibility of sample contamination, always ship bulk samples in separate mailing containers. 6. Analysis 6.1. Safety Precautions 6.1.1. Acetone is extremely flammable and precautions must be taken not to ignite it. Avoid using large containers or quantities of acetone. Transfer the solvent in a ventilated laboratory hood. Do not use acetone near any open flame. For generation of acetone vapor, use a spark-free heat source. 6.1.2. Any asbestos spills should be cleaned up immediately to prevent dispersal of fibers. Prudence should be exercised to avoid contamination of laboratory facilities or exposure of personnel to asbestos. Asbestos spills should be cleaned up with wet methods and/or a High Efficiency Particulate-Air (HEPA) filtered vacuum. Caution: Do not use a vacuum without a HEPA filter -- it will disperse fine asbestos fibers in the air. 6.2. Equipment 6.2.1. Phase contrast microscope with binocular or trinocular head. 6.2.2. Widefield or Huygenian 10X eyepieces (Note: The eyepiece containing the graticule must be a focusing eyepiece. Use a 40X phase objective with a numerical aperture of 0.65 to 0.75). 6.2.3. Kohler illumination (if possible) with green or blue filter. 6.2.4. Walton-Beckett Graticule, type G-22 with 100 plus or minus 2 um projected diameter. 6.2.5. Mechanical stage. A rotating mechanical stage is convenient for use with polarized light. 6.2.6. Phase telescope. 6.2.7. Stage micrometer with 0.01-mm subdivisions. 6.2.8. Phase-shift test slide, mark II (Available from PTR Optics Ltd., and also McCrone). 6.2.9. Precleaned glass slides, 25 mm X 75 mm. One end can be frosted for convenience in writing sample numbers, etc., or paste-on labels can be used. 6.2.10. Cover glass #1 1/2. 6.2.11.
Scalpel (#10, curved blade). 6.2.12. Fine tipped forceps. 6.2.13. Aluminum block for clearing filter (see Appendix D and Figure 4). 6.2.14. Automatic adjustable pipette, 100- to 500-uL. 6.2.15. Micropipette, 5 uL. 6.3. Reagents 6.3.1. Acetone (HPLC grade). 6.3.2. Triacetin (glycerol triacetate). 6.3.3. Lacquer or nail polish. 6.4. Standard Preparation A way to prepare standard asbestos samples of known concentration has not been developed. It is possible to prepare replicate samples of nearly equal concentration. This has been performed through the PAT program. These asbestos samples are distributed by the AIHA to participating laboratories. Since only about one-fourth of a 25-mm sample membrane is required for an asbestos count, any PAT sample can serve as a "standard" for replicate counting. 6.5. Sample Mounting Note: See Safety Precautions in Section 6.1. before proceeding. The objective is to produce samples with a smooth (non-grainy) background in a medium with a refractive index of approximately 1.46. The technique below collapses the filter for easier focusing and produces permanent mounts which are useful for quality control and interlaboratory comparison. An aluminum block or similar device is required for sample preparation. 6.5.1. Heat the aluminum block to about 70 deg. C. The hot block should not be used on any surface that can be damaged by either the heat or from exposure to acetone. 6.5.2. Ensure that the glass slides and cover glasses are free of dust and fibers. 6.5.3. Remove the top plug to prevent a vacuum when the cassette is opened. Clean the outside of the cassette if necessary. Cut the seal and/or tape on the cassette with a razor blade. Very carefully separate the base from the extension cowl, leaving the filter and backup pad in the base. 6.5.4. With a rocking motion cut a triangular wedge from the filter using the scalpel. This wedge should be one-sixth to one-fourth of the filter. 
Grasp the filter wedge with the forceps on the perimeter of the filter which was clamped between the cassette pieces. DO NOT TOUCH the filter with your finger. Place the filter on the glass slide sample side up. Static electricity will usually keep the filter on the slide until it is cleared. 6.5.5. Place the tip of the micropipette containing about 200 uL acetone into the aluminum block. Insert the glass slide into the receiving slot in the aluminum block. Inject the acetone into the block with slow, steady pressure on the plunger while holding the pipette firmly in place. Wait 3 to 5 seconds for the filter to clear, then remove the pipette and slide from the aluminum block. 6.5.6. Immediately (less than 30 seconds) place 2.5 to 3.5 uL of triacetin on the filter (Note: Waiting longer than 30 seconds will result in increased index of refraction and decreased contrast between the fibers and the preparation. This may also lead to separation of the cover slip from the slide). 6.5.7. Lower a cover slip gently onto the filter at a slight angle to reduce the possibility of forming air bubbles. If more than 30 seconds have elapsed between acetone exposure and triacetin application, glue the edges of the cover slip to the slide with lacquer or nail polish. 6.5.8. If clearing is slow, warm the slide for 15 min on a hot plate having a surface temperature of about 50 deg. C to hasten clearing. The top of the hot block can be used if the slide is not heated too long. 6.5.9. Counting may proceed immediately after clearing and mounting are completed. 6.6. Sample Analysis Completely align the microscope according to the manufacturer's instructions. Then, align the microscope using the following general alignment routine at the beginning of every counting session and more often if necessary. 6.6.1. Alignment (1) Clean all optical surfaces. Even a small amount of dirt can significantly degrade the image. (2) Rough focus the objective on a sample. 
(3) Close down the field iris so that it is visible in the field of view. Focus the image of the iris with the condenser focus. Center the image of the iris in the field of view. (4) Install the phase telescope and focus on the phase rings. Critically center the rings. Misalignment of the rings results in astigmatism which will degrade the image. (5) Place the phase-shift test slide on the microscope stage and focus on the lines. The analyst must see line set 3 and should see at least parts of 4 and 5 but not see line set 6 or 7. A microscope/microscopist combination which does not pass this test may not be used. 6.6.2. Counting Fibers (2) Start counting from one end of the wedge and progress along a radial line to the other end (count in either direction from perimeter to wedge tip). Select fields randomly, without looking into the eyepieces, by slightly advancing the slide in one direction with the mechanical stage control. (3) Continually scan over a range of focal planes (generally the upper 10 to 15 um of the filter surface) with the fine focus control during each field count. Spend at least 5 to 15 seconds per field. (4) Most samples will contain asbestos fibers with fiber diameters less than 1 um. Look carefully for faint fiber images. The small diameter fibers will be very hard to see. However, they are an important contribution to the total count. (5) Count only fibers equal to or longer than 5 um. Measure the length of curved fibers along the curve. (6) Count fibers which have a length to width ratio of 3:1 or greater. (7) Count all the fibers in at least 20 fields. Continue counting until either 100 fibers are counted or 100 fields have been viewed; whichever occurs first. Count all the fibers in the final field. (8) Fibers lying entirely within the boundary of the Walton-Beckett graticule field shall receive a count of 1. Fibers crossing the boundary once, having one end within the circle, shall receive a count of 1/2.
Do not count any fiber that crosses the graticule boundary more than once. Reject and do not count any other fibers even though they may be visible outside the graticule area. If a fiber touches the circle, it is considered to cross the line. (9) Count bundles of fibers as one fiber unless individual fibers can be clearly identified and each individual fiber is clearly not connected to another counted fiber. See Figure 1 for counting conventions. (10) Record the number of fibers in each field in a consistent way such that filter non-uniformity can be assessed. (11) Regularly check phase ring alignment. (12) When an agglomerate (mass of material) covers more than 25% of the field of view, reject the field and select another. Do not include it in the number of fields counted. (13) Perform a "blind recount" of 1 in every 10 filter wedges (slides). Re-label the slides using a person other than the original counter. 6.7. Fiber Identification As previously mentioned in Section 1.3., PCM does not provide positive confirmation of asbestos fibers. Alternate differential counting techniques should be used if discrimination is desirable. Differential counting may include primary discrimination based on morphology, polarized light analysis of fibers, or modification of PCM data by Scanning Electron or Transmission Electron Microscopy. A great deal of experience is required to routinely and correctly perform differential counting. It is discouraged unless it is legally necessary. Then, only if a fiber is obviously not asbestos should it be excluded from the count. Further discussion of this technique can be found in reference 8.10. If there is a question whether a fiber is asbestos or not, follow the rule: when in doubt, count it. 6.8. Analytical Recommendations -- Quality Control System 6.8.1. All individuals performing asbestos analysis must have taken the NIOSH course for sampling and evaluating airborne asbestos or an equivalent course. 6.8.2.
Each laboratory engaged in asbestos counting shall set up a slide trading arrangement with at least two other laboratories in order to compare performance and eliminate inbreeding of error. The slide exchange occurs at least semiannually. The round robin results shall be posted where all analysts can view individual analyst's results. 6.8.3. Each laboratory engaged in asbestos counting shall participate in the Proficiency Analytical Testing Program, the Asbestos Analyst Registry or equivalent. 6.8.4. Each analyst shall select and count prepared slides from a "slide bank". These are quality assurance counts. The slide bank shall be prepared using uniformly distributed samples taken from the workload. Fiber densities should cover the entire range routinely analyzed by the laboratory. These slides are counted blind by all counters to establish an original standard deviation. This historical distribution is compared with the quality assurance counts. A counter must have 95% of all quality control samples counted within three standard deviations of the historical mean. This count is then integrated into a new historical mean and standard deviation for the slide. The analyses done by the counters to establish the slide bank may be used for an interim quality control program if the data are treated in a proper statistical fashion. 7. Calculations 7.1. Calculate the estimated airborne asbestos fiber concentration on the filter sample using the following formula:

AC = [(FB/FL) - (BFB/BFL)] x ECA / (FR x T x 1,000 x MFA)

where: AC = Airborne fiber concentration (fibers/cc) FB = Total number of fibers greater than 5 um counted FL = Total number of fields counted on the filter BFB = Total number of fibers greater than 5 um counted in the blank BFL = Total number of fields counted on the blank ECA = Effective collecting area of filter (385 mm(2) nominal for a 25-mm filter.) FR = Pump flow rate (L/min) MFA = Microscope count field area (mm(2)). This is 0.00785 mm(2) for a Walton-Beckett Graticule.
T = Sample collection time (min) 1,000 = Conversion of L to cc Note: The collection area of a filter is seldom equal to 385 mm(2). It is appropriate for laboratories to routinely monitor the exact diameter using an inside micrometer. The collection area is calculated according to the formula: Area = pi(d/2)(2) 7.2. Short-cut Calculation Since a given analyst always has the same interpupillary distance, the number of fields per filter for a particular analyst will remain constant for a given size filter. The field size for that analyst is constant (i.e. the analyst is using an assigned microscope and is not changing the reticle). For example, if the exposed area of the filter is always 385 mm(2) and the size of the field is always 0.00785 mm(2), the number of fields per filter will always be 49,000. In addition it is necessary to convert liters of air to cc. These three constants can then be combined such that ECA/(1,000 X MFA) = 49. The previous equation simplifies to:

AC = [(FB/FL) - (BFB/BFL)] x 49 / (FR x T)

7.3. Recount Calculations As mentioned in step 13 of Section 6.6.2., a "blind recount" of 10% of the slides is performed. In all cases, differences will be observed between the first and second counts of the same filter wedge. Most of these differences will be due to chance alone, that is, due to the random variability (precision) of the count method. Statistical recount criteria enable one to decide whether observed differences can be explained by chance alone or are probably due to systematic differences between analysts, microscopes, or other biasing factors. Reject a pair of counts if:

AC2 - AC1 > 2.77 x ACavg x CV(FB)

where: AC1 = lower estimated airborne fiber concentration AC2 = higher estimated airborne fiber concentration ACavg = average of the two concentration estimates CV(FB) = CV for the average of the two concentration estimates If a pair of counts is rejected by this criterion, then recount the rest of the filters in the submitted set.
Apply the test and reject any other pairs failing the test. Rejection shall include a memo to the industrial hygienist stating that the sample failed a statistical test for homogeneity and the true air concentration may be significantly different from the reported value. 7.4. Reporting Results Report results to the industrial hygienist as fibers/cc. Use two significant figures. If multiple analyses are performed on a sample, an average of the results is to be reported unless any of the results can be rejected for cause. 8. References 8.1. Dreesen, W.C., et al., U.S. Public Health Service: A Study of Asbestosis in the Asbestos Textile Industry (Public Health Bulletin No. 241), US Treasury Dept., Washington, DC, 1938. 8.2. Asbestos Research Council: The Measurement of Airborne Asbestos Dust by the Membrane Filter Method (Technical Note), Asbestos Research Council, Rochdale, Lancashire, Great Britain, 1969. 8.3. Bayer, S.G., Zumwalde, R.D., Brown, T.A., Equipment and Procedure for Mounting Millipore Filters and Counting Asbestos Fibers by Phase Contrast Microscopy, Bureau of Occupational Health, U.S. Dept. of Health, Education and Welfare, Cincinnati, OH, 1969. 8.4. NIOSH Manual of Analytical Methods, 2nd ed., Vol. 1 (DHEW/NIOSH Pub. No. 77-157-A), National Institute for Occupational Safety and Health, Cincinnati, OH, 1977. pp. 239-1-239-21. 8.5. Asbestos, Code of Federal Regulations 29 CFR 1910.1001. 1971. 8.6. Occupational Exposure to Asbestos, Tremolite, Anthophyllite, and Actinolite. Final Rule, Federal Register 51:119 (20 June 1986). pp. 22612-22790. 8.7. Asbestos, Tremolite, Anthophyllite, and Actinolite, Code of Federal Regulations 1910.1001. 1988. pp. 711-752. 8.8. Criteria for a Recommended Standard -- Occupational Exposure to Asbestos (DHEW/NIOSH Pub. No. HSM 72-10267), National Institute for Occupational Safety and Health, Cincinnati, OH, 1972. pp. III-1-III-24. 8.9.
Leidel, N.A., Bayer, S.G., Zumwalde, R.D., Busch, K.A., USPHS/NIOSH Membrane Filter Method for Evaluating Airborne Asbestos Fibers (DHEW/NIOSH Pub. No. 79-127), National Institute for Occupational Safety and Health, Cincinnati, OH, 1979. 8.10. Dixon, W.C., Applications of Optical Microscopy in Analysis of Asbestos and Quartz, Analytical Techniques in Occupational Health Chemistry, edited by D.D. Dollberg and A.W. Verstuyft, Washington, DC: American Chemical Society (ACS Symposium Series 120), 1980. pp. 13-41. Quality Control The OSHA asbestos regulations require each laboratory to establish a quality control program. The following is presented as an example of how the OSHA-SLTC constructed its internal CV curve as part of meeting this requirement. Data is from 395 samples collected during OSHA compliance inspections and analyzed from October 1980 through April 1986. Each sample was counted by 2 to 5 different counters independently of one another. The standard deviation and the CV statistic were calculated for each sample. This data was then plotted on a graph of CV vs. fibers/mm(2). A least squares regression was performed using the following equation: CV = antilog(10)[A(log(10)(x))(2)+B(log(10)(x))+C] x = the number of fibers/mm(2) Application of least squares gave: A = 0.182205 B = -0.973343 C = 0.327499 Using these values, the equation becomes: CV = antilog(10)[0.182205(log(10)(x))(2)-0.973343(log(10)(x))+0.327499] Sampling Pump Flow Rate Corrections This correction is used if a difference greater than 5% in ambient temperature and/or pressure is noted between calibration and sampling sites and the pump does not compensate for the differences.
Q(act) = Q(cal) x sqrt[(P(cal) x T(act)) / (P(act) x T(cal))]

Q(act) = actual flow rate Q(cal) = calibrated flow rate (if a rotameter was used, the rotameter reading) P(cal) = uncorrected air pressure at calibration P(act) = uncorrected air pressure at sampling site T(act) = temperature at sampling site (K) T(cal) = temperature at calibration (K) Walton-Beckett Graticule When ordering the Graticule for asbestos counting, specify the exact disc diameter needed to fit the ocular of the microscope and the diameter (mm) of the circular counting area. Instructions for measuring the dimensions necessary are listed: (1) Insert any available graticule into the focusing eyepiece and focus so that the graticule lines are sharp and clear. (2) Align the microscope. (3) Place a stage micrometer on the microscope object stage and focus the microscope on the graduated lines. (4) Measure the magnified grid length, PL (um), using the stage micrometer. (5) Remove the graticule from the microscope and measure its actual grid length, AL (mm). This can be accomplished by using a mechanical stage fitted with verniers, or a jeweler's loupe with a direct reading scale. (6) Let D = 100 um. Calculate the circle diameter, d(c) (mm), for the Walton-Beckett graticule and specify the diameter when making a purchase: d(c) = (AL x D)/PL Example: If PL = 108 um, AL = 2.93 mm and D = 100 um, then d(c) = (2.93 x 100)/108 = 2.71 mm. (7) Each eyepiece-objective-reticle combination on the microscope must be calibrated. Should any of the three be changed (by zoom adjustment, disassembly, replacement, etc.), the combination must be recalibrated. Calibration may change if interpupillary distance is changed. Measure the field diameter, D (acceptable range: 100 plus or minus 2 um) with a stage micrometer upon receipt of the graticule from the manufacturer. Determine the field area (mm(2)).
Field Area = pi(D/2)(2) If D = 100 um = 0.1 mm, then Field Area = pi(0.1 mm/2)(2) = 0.00785 mm(2) The Graticule is available from: Graticules Ltd., Morley Road, Tonbridge TN9 1RN, Kent, England (Telephone 011-44-732-359061). Also available from PTR Optics Ltd., 145 Newton Street, Waltham, MA 02154 [telephone (617) 891-6000] or McCrone Accessories and Components, 2506 S. Michigan Ave., Chicago, IL 60616 [phone (312)-842-7100]. The graticule is custom made for each microscope. Counts for the Fibers in the Figure

Structure No. | Count | Explanation
1 to 6 | 1 | Single fibers all contained within the circle.
7 | 1/2 | Fiber crosses circle once.
8 | 0 | Fiber too short.
9 | 2 | Two crossing fibers.
10 | 0 | Fiber outside graticule.
11 | 0 | Fiber crosses graticule twice.
12 | 1/2 | Although split, fiber only crosses once.

(Figure 1: Walton-Beckett Graticule counting conventions)
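As an illustrative aside (Python code is not part of the method), the concentration formula of section 7.1, the OSHA-SLTC CV curve from the Quality Control discussion, and the recount rejection test of section 7.3 (in its standard 2.77 x ACavg x CV form) can be sketched as follows. The function names and example numbers are the author's own; the constants (385 mm(2), 0.00785 mm(2), and the regression coefficients A, B, C) come from the text above.

```python
import math

# Nominal constants from the method; a laboratory should measure its own
# effective collecting area (section 7.1 note) and field area.
ECA = 385.0      # effective collecting area of a 25-mm filter, mm^2
MFA = 0.00785    # Walton-Beckett graticule field area, mm^2


def fiber_concentration(fb, fl, bfb, bfl, fr, t, eca=ECA, mfa=MFA):
    """Section 7.1: airborne fiber concentration AC in fibers/cc.

    fb/fl   -- fibers and fields counted on the sample filter
    bfb/bfl -- fibers and fields counted on the blank
    fr      -- pump flow rate (L/min); t -- collection time (min)
    The factor 1,000 converts liters of air to cc.
    """
    return ((fb / fl) - (bfb / bfl)) * eca / (fr * t * 1000.0 * mfa)


def cv_from_curve(fibers_per_mm2):
    """OSHA-SLTC historical CV vs. fiber-density curve (Quality Control)."""
    a, b, c = 0.182205, -0.973343, 0.327499
    lx = math.log10(fibers_per_mm2)
    return 10.0 ** (a * lx ** 2 + b * lx + c)


def reject_recount(ac1, ac2, cv_fb):
    """Section 7.3: reject a pair of counts when the higher estimate
    exceeds the lower by more than 2.77 x ACavg x CV(FB)."""
    lo, hi = sorted((ac1, ac2))
    return (hi - lo) > 2.77 * ((lo + hi) / 2.0) * cv_fb


# Example: a 100-fiber, 100-field count with a clean blank, sampled at
# 2 L/min for 480 min. The section 7.2 short-cut,
# 49 x (FB/FL - BFB/BFL) / (FR x T), gives essentially the same answer.
ac = fiber_concentration(100, 100, 0, 100, fr=2.0, t=480)
```

As a consistency check, cv_from_curve(100) returns roughly 0.13, matching the statement in section 2.1 that a count of about 0.8 fibers/field (roughly 100 fibers/mm(2)) corresponds to a CV of approximately 0.13.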
Our modern world would collapse if it weren't for space exploration Even though we don't have colonies on Mars or orbital Holiday Inns, we live in the Space Age, says io9's Annalee Newitz. Our modern world is so deeply dependent on our space ventures that our economy, scientific progress and intellectual growth as a species would collapse without them. Just try to stop for a few minutes and think about all the industries and daily human chores that are completely tied to space infrastructure: global location to control air, sea and land traffic, communications to link everyone and everything everywhere, constant weather tracking, agricultural and environmental control, watching the sun to prevent communications and power knockouts... the list is truly endless and staggering. Just thinking of these things failing makes me sick with vertigo. And if you add the knowledge of the Universe provided by human space travel and interplanetary spacecraft—which has fundamentally affected the idea of our very own existence as a species—the thought that we are not living in the Space Age right now is simply stupid. Stop pretending we aren't living in the Space Age I am sick of hearing people say that the Space Age is "over" because we haven't sent humans back to the Moon. Seriously? That's your complaint? You people need to shut the hell up, and this gorgeous picture of Saturn taken by Cassini is just one reason why. Let's begin by talking about what the "Space Age" is, shall we? The term got bandied around a lot in the 1950s because it was the first time in human history that we sent anything into space. During the "Space Race," which was really just another aspect of the Cold War, we started the glorious journey to the stars by sending remote-controlled probes into the upper atmosphere and eventually into orbit. Later, in the 1960s, we started sending people into space and eventually a few of them landed on the Moon.
We learned a lot from those heady early days of the Space Age, and one of the lessons was that building a city on the Moon would be a lot more expensive and difficult than sending a few guys over there in a can. Despite this dismaying discovery, we managed to launch several semi-permanent space habitats into orbit. First there was Skylab in the 1970s, followed by Mir. Now we've got the ongoing project known as the International Space Station, or ISS. All of these space stations had permanent or semi-permanent human crews living in them. That's right. A few members of our species have been living in space since the early 1970s. What difference does it make if they aren't on the Moon? They are in space. This is not the end of the Space Age, people. This is not a failure. This is the very definition of success. Not only are we actually visiting every damn nook and cranny in our solar system — and sending back some of the most awe-inspiring images and data you've ever seen — but we are not doing it like idiots. We are exploring before we shoot our fragile little bodies out there into the radiation-saturated unknown. That is what a smart species does. Back pats for all the Homo sapiens who decided to send a robot to Mars before sending astronauts. And the fact that we have sent our robot minions out to surf the rings of Saturn, drive the rocky canyons of Mars, and pop their heads above the magnetic envelope around the solar system? That is a fucking win. Those things are all evidence that we are in the Space Age and are here to stay. There's another thing to keep in mind when you start getting the urge to whine about how you aren't going to get to visit Jupiter next week. Humans have been analyzing the cosmos for thousands of years. Our dreams of space began when people began naming the constellations, and astronomy itself is a centuries-long project that once revolutionized the world by suggesting that we lived on a rocky ball orbiting a fiery blob.
We've been at this for a long time. Copernicus, Galileo and their cohort didn't stop exploring space with their instruments and telescopes just because some damn guy in a funny hat threatened them with imprisonment. (Well, OK, Galileo pretended to stop, but the heliocentrism was out of the bag.) And we're certainly not going to stop now, when our high-tech creations are already out there, flying through space and rolling across the surface of Mars. Humanity has already made a long-term commitment to space. My point is that colonizing other worlds is not something that takes ten years, or even a hundred. It might take much longer than that before humans are living on Mars, or in orbit around Saturn. But we are undeniably on the path toward a future where humans live in space. Our ancestors, who dared to learn from the planets and stars, led us onto this path. And now we are actually seeing those planets up close, for the first time in the history of our species. This is what it feels like to be living through the dawning of the Space Age. It's not like riding on a rocket — it's a slow, difficult climb. Enjoy this small but incredible slice of time that you get to live through, and remember that Galileo would be weeping with envy and relief to know we made it this far. Just because it takes centuries doesn't mean we aren't making progress. We're riding a slow, powerful wave that will bear future generations to the stars. All images via NASA
Thursday, 19 December 2013

Object Storage - unsung hero of Cloud and Big Data

This sounds simple, but the implications are enormous. Is object storage right for you? Massive growth in unstructured data is straining traditional storage systems. Unclear about SAN and NAS - check out

Tuesday, 10 September 2013

Fancy a career in IT Sales?

To get started - check out

Tuesday, 23 July 2013

What is a server?

Servers are computers which typically support multiple users. As with all computers, they contain processors, memory, networking and data storage capabilities. Servers come in various shapes and sizes. This bit you probably already knew; however, there are three major categories of server, and you may have wondered what the difference is between, say, a "mainframe" and a "Unix server" - and how these differ from standard servers. Read on to find out more.

1 Mainframe (proprietary)

Often dedicated to specific, enterprise-wide tasks, mainframes are the grand-daddy of servers. Based on a monolithic architecture, mainframes consist of hardware and software which are often provided by a single vendor - or perhaps by two vendors who are almost entirely reliant on each other. The lock-in between hardware and software is the very reason why mainframes still exist today and less costly alternatives have not ousted them. The sheer cost of developing alternatives, and the cost for customers of changing away from systems which are tightly embedded in their business processes, means they are here for some time yet. Mainframes today represent a very small part of the total server market. They are categorised as "proprietary" because of the strong tie-in between software and hardware.

2 Unix (open systems)

In order to break the stranglehold of mainframes and to lower the cost of computing, Unix-based systems were born back in the 1970s. Unix refers to the open standard operating system (software) that runs on these computers.
Originally developed by AT&T and later enhanced by university researchers and students, Unix broke the link between hardware and software, increasing competitiveness and reducing costs. However, as Unix evolved, new companies were set up to commercialise the new systems and provide business solutions based on combinations of hardware and software. This process led to proprietary versions of Unix evolving. Today, Unix systems represent well under half of the server market in terms of revenue but are seen as costly compared to the third category of server, which evolved from the first PCs in the 1980s. Many organisations continue to migrate away from Unix but, interestingly, there is a trend to incorporate some Unix-like technologies into industry standard servers (for example, server virtualisation is derived from "partitioning" technology).

3 Industry Standard (open systems)

The first PC was produced by IBM in 1981. It consisted of an Intel processor and chipset, with storage controlled by a disk operating system called MS-DOS (how original). The MS stood for Microsoft - Bill Gates' most significant software release ever. PC stands for Personal Computer - the exact opposite of a mainframe, in that for the first time users could have their own personal computing environment on their desk. IBM made the decision to open up the fundamental design of the PC so other manufacturers would help establish it as the industry standard. As PCs evolved, it quickly became apparent that connecting them together could be useful. Data could now be shared. The natural progression of this was to have perhaps one PC with a big hard drive - to hold data for others to share. And so the first industry standard server was born. Incidentally, there are some versions of Unix which can run on industry standard servers, which confuses things slightly, and there is a fully open operating system called Linux too. Visit to find out more.
Thursday, 23 May 2013

Why organisations spend on IT

Unless investments in IT are providing business value, they will not be justified. Understanding this is vital for salespeople - unless wasting time chasing unqualified opportunities is the goal. Why do organisations spend money on IT? Spend on IT will link to one or more of the four "Business Imperatives". Initiatives which support one or more of these provide increased business VALUE to the organisation. VALUE is defined as activity or investment which directly supports the goals of the organisation. Strategic Value is gained from two Business Imperatives - to increase revenue and to reduce costs.

Reduce Costs

The costs associated with IT include not only the capital purchase of new equipment, but also the ongoing costs through its lifetime. The operational costs can be up to three or four times the purchase cost. Total Cost of Ownership, or TCO, is a term often used to describe this. TCO is becoming a little dated as a term because IT ownership is in fact optional. Enabled primarily by virtualisation technologies, software and hardware are becoming decoupled, and increased mobility means organisations do not need to own infrastructure - they can simply pay for "services" to be provided, for some or all of their needs. Total Cost of Operations may be a more fitting meaning for TCO.

Increase Revenue

Commercial organisations are typically goaled to drive profitability. Simply put, Revenue minus Costs equals Profit. The way to achieve this is to provide the best possible levels of service for customers so they buy more products and/or services. For public sector organisations, the focus on service provision is in fact the primary goal. Better service provision should drive improved funding, which is used to cover the organisation's costs. The end result is similar to commercial organisations - but the focus is on service provision rather than profit. Organisations also need to drive Operational Value.
The two Business Imperatives here are to increase agility and to reduce risk.

Increase Agility

Organisations need to respond ever more quickly to market demands. New applications must be rolled out in shorter time frames, test and development cycles must be slashed, and the costs associated with new projects need to be curbed. In summary, as IT evolves, it must become more agile - in line with the business itself.

Reduce Risk

As businesses grow, become more globalised with 24/7 operations and become more regulated - whether by internal or industry governance - the pressures to avoid business risk mount. With IT so central to business operations, Risk Management has become a critical operational imperative. Legal compliance, information management, business continuity, backup and disaster recovery are all initiatives which fall under this category and provide necessary value to the organisation. Unless an IT purchase supports at least one of the Business Imperatives and VALUE is increased, it will not be justified. Conversely, if you can demonstrate how an investment in a particular technology supports one or more business imperatives, this will strengthen your case to get budget approved.

Monday, 20 May 2013

What is selling anyway?

Have you ever been in a conversation and at the end thought "I don't think she listened to a word I said"? Or conversely "I don't have a clue what he was talking about"? Especially over the phone, it can be difficult sometimes to get a point across. It can be even harder to gain real commitment from someone you may not have spoken to before, or at least have not met in person. And yet, that is the purpose of you calling your customers/prospects. It may seem like quite an achievement just to get through to your customer. To have an engaged conversation can be harder again, and your well-written call plan will certainly help with this.
But all this can be for nothing if you and the conversation are not remembered - if you have not had a personal influence on your customer. Why is personal influence so important? Well, it boils down to the fundamental reality of what selling is. I often tell delegates in my training sessions that they cannot sell anything. I am not trying to be disparaging when I say this; it is not a personal slight on their abilities. It is a rather controversial statement to be making in a room of sales people and does tend to get the conversation started. Some will be intrigued, some will disagree, and someone will say, "What do you mean by saying that?" "Well", I continue, "who in here reckons they can sell?" One way or another, someone volunteers (or is volunteered). "OK", I challenge, "sell me something". This tends to flummox the volunteer. If she is smart, she will say "well, what do you want?" And that is my point. You can only sell something to someone who wants it, who sees some value to be had and is willing to part with money in exchange for it. You cannot sell unless someone wishes to buy. Of course, there are some unscrupulous people out there who find ways to cajole, manipulate or con their "customers" out of money by convincing them that they want something - but this is not legitimate selling. There is no sale without a willingness to buy. So selling is actually the process of communicating with people, ideally in a targeted way, with the purpose of firstly discovering a willingness to buy a product or service that you can fulfil. Then, it is personally influencing that person (or perhaps multiple decision makers) to buy from you. This final process is personal because the buyer is either buying for himself or putting his personal reputation on the line if buying on behalf of his company. Your ability to influence the purchasing decision will be governed by a number of factors such as price, product fit for purpose, your compliance with any purchasing rules, etc.
But the old adage, people buy from people, is as true today as it ever was because, ultimately, a person needs to raise that purchase order, and we are all driven to do what we perceive to be right by ourselves, our employer and our peers (probably in that order). In order for you to build influence with your customers, you need to spend time with them. This is partly so you can build rapport and trust, and partly so you can get to understand their personal drivers. Doing "right by ourselves, employer and peers" is subjectively and personally defined. So unless you get to know the person, your ability to influence will be dramatically reduced. Win over the person, and your ability to win the sale will skyrocket.

Monday, 7 January 2013

IT's Cloudy where the Sun don't shine

For an explanation of Cloud and other things - visit
Jacchus

(Science: zoology) The common marmoset (Hapale vulgaris). Formerly, the name was also applied to other species of the same genus. Origin: NL, fr. L. Jacchus, a mystic name of Bacchus, Gr.
26 Fairmount Avenue Quiz | Four Week Quiz B

Name: _________________________ Period: ___________________

Multiple Choice Questions

1. What is the last step of the backyard project? (a) Smoothing the ground. (b) Building the fence. (c) Planting seeds. (d) Plowing the earth.

2. How many people live with Tomie at the beginning of the book? (a) 3. (b) 6. (c) 4. (d) 5.

3. What hurts Tomie during the backyard project? (a) His eyes. (b) His hands. (c) His back. (d) His legs.

4. Who is home when Tomie gets home from school on his first day? (a) His brother. (b) His father. (c) His mother. (d) Nobody.

5. What mistake is made at the beginning of the day when the weeds are gotten rid of? (a) There are not enough tools. (b) The fire is too low. (c) The water is not hooked up. (d) There are too many people.

Short Answer Questions

1. Where does Tomie find what he thinks is candy when he can't find any in the regular spot?

2. When does Tomie's father want to work on the back yard?

3. How close is the school to Tomie's dwelling?

4. Who is Mrs. Crane?

5. What does Tomie say to his teacher at the end of his first day of school?

26 Fairmount Avenue from BookRags. (c)2016 BookRags, Inc. All rights reserved.
Daily Press

Eliminating standing water helps get rid of mosquitoes

A company sprayed my neighbor's yard for mosquitoes, but when I called the company it admitted that its chemical killed all kinds of insects, including pollinators and beneficials. So I don't want to do that, but we are getting eaten alive by mosquitoes all day! Help!

The Asian tiger mosquito, which bites ferociously during the day, has forced people to reexamine and get smarter about mosquito strategies. We agree that preserving beneficial and pollinating insects must be a priority, and some of those beneficials even eat mosquitoes! The first line of attack is eliminating breeding places — it takes only a tablespoon of standing water for these mosquitoes to breed. Especially check for water in plastic tarp depressions, tree holes and the ridges of black corrugated tubing (cover the end with a screen). The Maryland Department of Agriculture lists unusual breeding places and offers help on its website.

I may be crazy, but some big holes are appearing in one of my trees and it looks to me like they are roughly squarish. What is doing this? How do I stop it?

Those holes are the work of one of our largest and most majestic forest birds — the pileated woodpecker. With a vibrant red topknot, zebra-striped head and neck, and coal black wings, the pileated woodpecker makes a striking sight. The pileated woodpecker pecks on dead trees (snags) looking for carpenter ants, beetle larvae and termites. The male pileated fashions a very big hole for its family nest. The holes they create are later used for nesting by other birds, such as wrens. If your tree has a lot of holes, it may have undiagnosed insect problems. The International Society of Arboriculture has a list of certified arborists by county on its website. Those associated with tree service companies will inspect the tree for no charge. Snags are an important food source for a wide range of animal life and should be left in place where possible.
Plant of the week

Delphinium grandiflorum

Gardeners can be a romantic, optimistic bunch who fall for and buy plants that, though beautiful, are difficult. This is true with delphiniums. Though they come in over 300 species of annuals, biennials and perennials, we are drawn to the brightly colored tall hybrids, stars of the traditional English cottage garden. While delphiniums do well in the Pacific Northwest, they are short-lived in Maryland's hot, humid summers. A better choice is Delphinium grandiflorum, not as formal looking as other species but happier in our climate. Though short-lived, they don't need staking or wind protection. They're primarily available in the famous delphinium blue shades. Delphiniums prefer full sun and well-drained, basic soil. (Apply lime if your soil test shows it is acidic.) They are heavy feeders and enjoy a dressing of well-rotted manure or compost as well as a balanced fertilizer. Removing spent flowers encourages more blooms.

—Christine Pfister McComas

Copyright © 2016, Daily Press
Tracing Letters: G

This tracing letters worksheet introduces kids to the letter G! Kids completing this worksheet practice writing the letter G and identifying pictures that have names beginning with G. See the rest of this tracing letters series here.
What is the 'Real Effective Exchange Rate - REER'

The real effective exchange rate (REER) is the weighted average of a country's currency relative to an index or basket of other major currencies, adjusted for the effects of inflation. The weights are determined by comparing the relative trade balance of a country's currency against each country within the index. This exchange rate is used to determine an individual country's currency value relative to the other major currencies in the index, such as the U.S. dollar, Japanese yen and the euro.

BREAKING DOWN 'Real Effective Exchange Rate - REER'

The REER is used to measure the value of a specific currency in relation to an average group of major currencies. The REER takes into account any changes in relative prices and shows what can actually be purchased with a currency. This means that the REER is normally trade-weighted. The REER is derived by taking a country's nominal effective exchange rate (NEER) and adjusting it to include price indices and other trends. The REER, then, is essentially a country's NEER after removing price inflation or labor cost inflation. The REER represents the value that an individual consumer pays for an imported good at the consumer level. This rate includes any tariffs and transaction costs associated with importing the good. A country's REER can also be derived by taking the average of the bilateral real exchange rates (RER) between itself and its trading partners, weighting each by that partner's share of trade. Because the REER is an average, a currency can be in overall equilibrium even while it is overvalued relative to one trading partner and undervalued relative to another.

Benefits of Analyzing and Using the REER

A country's REER is an important measure when assessing its trade capabilities and current import/export situation.
The REER can be used to measure the equilibrium value of a country's currency, identify the underlying factors of a country's trade flow, look at any changes in international price or cost competition, and allocate incentives between tradable and non-tradable sectors. A country can positively affect its REER through rapid productivity growth. When this happens, the country realizes lower costs and can reduce prices, thus making the REER more advantageous for the country. Understanding a country's REER is extremely important when conducting economic analysis and policymaking. Therefore, the World Bank, Eurostat, the Bank for International Settlements (BIS) and others all publish various REER indicators. Together, these institutions provide the public with REER analysis covering 113 countries around the globe.
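The "average of bilateral real exchange rates, weighted by trade share" construction described above can be sketched in a few lines of code. This is an illustrative sketch only, not any institution's official methodology: the partner data below is made up, and real REER indices (e.g. from the BIS) use chained indices and more refined weights. A geometric (rather than arithmetic) average is used here, which is the common convention for effective exchange rate indices.

```python
# Sketch: REER as a trade-weighted geometric average of bilateral
# real exchange rates. All figures are hypothetical example data.
from math import prod


def real_rate(nominal_rate, domestic_cpi, foreign_cpi):
    """Bilateral real exchange rate: the nominal rate adjusted for
    relative price levels between home and the partner country."""
    return nominal_rate * (domestic_cpi / foreign_cpi)


def reer(partners):
    """Geometric average of bilateral real rates, each weighted by the
    partner's share of trade. Trade shares must sum to 1."""
    assert abs(sum(p["trade_share"] for p in partners) - 1.0) < 1e-9
    return prod(
        real_rate(p["nominal_rate"], p["domestic_cpi"], p["foreign_cpi"])
        ** p["trade_share"]
        for p in partners
    )


# Hypothetical home country trading with three partners
# (say, the US, Japan and the euro area).
partners = [
    {"nominal_rate": 1.10, "domestic_cpi": 105.0, "foreign_cpi": 102.0, "trade_share": 0.5},
    {"nominal_rate": 0.95, "domestic_cpi": 105.0, "foreign_cpi": 100.0, "trade_share": 0.3},
    {"nominal_rate": 1.02, "domestic_cpi": 105.0, "foreign_cpi": 108.0, "trade_share": 0.2},
]
print(round(reer(partners), 4))
```

Because the result is a weighted average, it always falls between the strongest and weakest bilateral real rates, which is exactly why a currency can look balanced overall while being overvalued against one partner and undervalued against another.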
How IPaC works: • Enter project location Provide the geographic location for the project Entering the project location is the first step in defining your project in IPaC. Draw the location on a map, or upload a shapefile if you have one. While doing so, you can interact with map layers that show the distribution of important biological resources, such as critical habitat, wetlands, GAP land cover, and so forth. • Get a resource list See listed species and other FWS trust resources in this location and get an official species list Getting a resource list lets you determine whether any threatened and endangered species, designated or proposed critical habitat, Migratory Birds of Conservation Concern, or other natural resources of concern may be affected by your project. You can also request an official species list if your project requires this regulatory document. • Analyze impacts Provide project details to get recommended conservation measures Analyzing impacts involves providing details about your project to learn of its potential impacts on species and the resources they rely upon. IPaC helps you determine what the impacts are likely to be and provides design recommendations (i.e., conservation measures) for addressing them. By obtaining this information early in the project development process, you can more easily incorporate it into your planning, thus saving time and money, and avoiding potential project delays. • Begin consultation Federal agencies can submit project data to the U.S. Fish & Wildlife Service to begin a consultation The trust resource report, official species list, and recommended conservation measures report are a starting point to building a biological assessment or habitat conservation plan. In the future, IPaC will allow you to provide even more details about your project, generate a draft biological assessment, and if your project has a federal nexus, collaborate with Service biologists in an interactive, online consultation.
transitional cell cancer Pronunciation: (tran-ZIH-shuh-nul sel KAN-ser) Cancer that forms in transitional cells in the lining of the bladder, ureter, or renal pelvis (the part of the kidney that collects, holds, and drains urine). Transitional cells are cells that can change shape and stretch without breaking apart. Source: NCI Dictionary of Cancer Terms Date last modified: 2007-01-10
Sanitary Sewer Operations: Blockage Removal

Blockages can be temporary events that stop flow for a short time and allow the "weeping" of water to filter through solids, or they can cause a sewer spill when water backs up and relieves at the lowest point in the system via manholes, cleanouts, or plumbing systems inside structures.

Blockages & Safety

Blockages fall into five primary categories:

Blocked flow with manhole(s) holding water but not spilling;
Spilling but contained/ponding on its own;
Spilling and relieving into waterways;
Spilling inside a structure; and
Large diameter pipe/forcemain spilling requiring by-pass.

Outside of the local regulatory agency reporting criteria, there are elements of tactical response and remediation that should be observed. From a safety standpoint, blockages should be relieved from the downstream, dry manhole. Many serious accidents have occurred from using high velocity flushing machines from the full manhole. If relieving the blockage from the full manhole is the only option, extreme caution should be used when operating flushing equipment in this situation.

Relieving blockages

Relieving blockages can be quick and easy, or difficult and slow, depending on circumstances. The key is patience in working the high velocity jet nozzle. Selecting the proper tool is critical; nozzles specifically designed for this task, such as penetrator and/or chisel point, should be utilized. They will provide the thrust required to break through and break up the blockage. Blockage removal is typically achieved by working the nozzle into or over the blockage. The key is to keep the nozzle moving at all times. In order to avoid and/or minimize damage to the pipe, do not leave the nozzle in one position for extended periods. Once the nozzle passes through the blockage, a steady hose rewind rate of less than one (1) foot per second should be maintained to ensure effective cleaning of the blockage material.
Debris removed from the blockage should be examined and evaluated to determine the possible cause. Blockage removal commonly calls for "two up/two down" cleaning once the blockage has been removed. This refers to the number of pipe sections above and below the stoppage point and any intersecting sewer lines that may be impacted. CCTV inspection should be a primary follow-up tool to assess cause/cure elements of mechanical problems, root intrusion, or fats, oils & grease (FOG) that can be controlled. In reviewing the types of blockages, weeping blockages are typically found during routine maintenance or from customer odor complaints caused by bio-mass accumulation in the system. It is not unusual for a long-term weeping blockage to leave more than two pipe sections holding materials that will need cleaning. This usually requires a brief investigation of manholes upstream from the clear manhole to assess the pipe sections that will need cleaning once the blockage is removed. Blockages are a very visible failure of the system and should be managed in a professional and expeditious manner to maintain public confidence.

Stephen Tilson is a nationally known collection systems operations consultant, providing successful SSO reduction programs and equipment operation education. He can be reached by email at
Briefs: Environmental Nutrition

It seems that every major health organization recommends that we eat more fish. It's a fantastic source of lean protein, packed with vitamins and minerals, and extremely low in fat. The fish that are a bit higher in fat, such as salmon and mackerel, contain huge amounts of health-boosting omega-3 fatty acids, however. For that reason, the American Heart Association suggests you put fish on your menu at least twice a week. While it's easy enough to buy a pound or two of fresh fish from the supermarket fish counter or your local fishmonger and cook it up, once in a while having someone else do the work is a much-appreciated convenience. The cost of fresh fish can be steep, and prepackaged fish can often fit into your grocery budget a bit better. But does your health pay a price for all the prep work? We put on our mittens and scoured the frozen fish aisles of the supermarket to find out. Pre-prepped frozen fish is a fantastic base for a quick and healthy meal, as long as you keep these tips in mind:

1. All fish portions are not created equal. The nutrition information on some frozen fish packages may look too good to be true. That's because the portion size they refer to may not be much more than a few bites. When you're comparing fish to fish, be sure you're comparing adequate portion sizes--roughly three to four ounces (85 to 113 grams).

2. Check the salt. The benefit of having someone else prepare your fish may come at the cost of a higher sodium content. Read the Nutrition Facts Panel to be sure the fish will fit into your daily sodium limits (shoot for below 2,300 milligrams per day; 1,500 if you're at risk for hypertension).

3. Batter beware. In most cases, batters and breadings add calories, fat and sodium to your fish.

4. Look for omega-3 rich choices.
Many types of seafood contain small amounts of omega-3s that can add up, but try to include omega-3 superstars more often, such as mackerel, lake trout, herring, tuna, salmon, sardines, anchovies, and whitefish. By the Editors of Environmental Nutrition If you consume soy-containing products--ubiquitous on supermarket shelves--chances are you've eaten foods that contain genetically modified organisms (GMOs), also known as genetically engineered (GE). According to the Grocery Manufacturers Association, about 80 percent of foods in the U.S. contain GMOs, such as corn and soy ingredients. Currently, there's no way you can know due to lack of required food labeling. However, consumers increasingly feel they have a right to know what is in the food they eat--particularly when it comes to GMOs; in a 2012 survey by the consumer research company The Mellman Group, 91 percent favored labeling. GMOs are created when the genetic material of an organism is modified in a way that does not occur in nature. It may be modified to create disease- or herbicide-resistant crops, or to include vitamins normally not found in the crop. Introduced commercially in 1996, the most common GMOs in the food supply are soy and corn. The U.S. Food and Drug Administration reports no concern over the safety of GE foods and has taken a stance against labeling, alleging that the foods are basically the same as other foods and don't pose any health risk. However, some health advocacy groups are concerned about potential health and environmental risks of GMOs. 
The consumer health advocacy organization Environmental Working Group, along with more than 500 other organizations representing healthcare, consumer advocates, farmers, businesses, environmentalists and more, is pressuring the FDA to label GMOs via the Just Label It Campaign. GMO labeling is required in over 40 countries, but not in the U.S., though individual states, such as Connecticut and Vermont, have introduced legislation calling for labeling. For example, California initiated grassroots efforts to advocate for labeling through a coalition called The Committee for the Right to Know, which submitted the California Right to Know Genetically Engineered Food Act in November 2011. While we're waiting for mandatory labeling, there are many ways you can avoid GMOs if you're so inclined. You can limit them by avoiding processed foods--most GE crops are used for processed ingredients such as soybean oil and high fructose corn syrup--and purchasing organic products, which are not allowed to contain GMOs. And now, you can look for the Non-GMO Project-Verified seal on food labels, which indicates the products are GMO-free. The Non-GMO Project, a nonprofit collaboration of manufacturers, retailers, distributors and farmers that promotes informed choice, provides a third-party verification and labeling program for food products that do not contain any GMOs.
Nearly Half of Physicians Report Burnout

Compared to other working adults, physicians are far more likely to be dissatisfied with their work-life balance or to have symptoms of burnout, according to a national study in the Archives of Internal Medicine. Almost half (46%) of U.S. physicians reported burnout symptoms, such as feelings of cynicism or "depersonalization" toward patients. In addition to causing medical errors, the high rate of burnout in the health care industry is one of the leading causes of the physician shortage, as doctors leave the industry to find other jobs, according to a survey from November 2011. In addition to evaluating burnout rates among physicians by specialty, the study compared physicians to other employed adults. More than 7,200 physicians answered surveys, and a modified version of their questionnaire was given to a probability-based sample of nearly 3,500 working adults. Physicians were more likely to have symptoms of burnout than other working adults (38% vs. 28%). Individuals with an MD or DO degree had an increased risk for burnout, even compared to people with master's degrees, professional degrees or other doctoral degrees; burnout risk varied with the highest level of education completed. Physicians at the front line of care access were at the greatest risk of burnout: family medicine, general internal medicine and emergency medicine. Preventive care specialists were among the least affected. In fact, preventive care specialists, along with those in occupational medicine and environmental medicine, reported being the most satisfied with the time they have for personal or family life. General surgeons were the least satisfied. Here are the specialties reporting the largest and smallest percentages of burnout:

Most burnout
5. Otolaryngology
4. Family medicine
3. Neurology
2. General internal medicine
1. Emergency medicine

Least burnout
5. Radiation oncology
4. Pathology
3. General pediatrics
2. Dermatology
1. Preventive medicine, occupational medicine or environmental medicine
Electrode-defined quantum dots provide a scalable architecture for quantum information processing by trapping electrons and controlling their spin state, either ‘up’ (red) or ‘down’ (blue). Image: 2014 Matthieu Delbecq and Shinichi Amaha, RIKEN Center for Emergent Matter Science

A single electron trapped in a semiconductor nanostructure can form the most basic of building blocks for a quantum computer. Before practical quantum computers can be realized, however, scientists need to develop a scalable architecture that allows full control over individual electrons in computational arrays. Matthieu Delbecq and colleagues from the RIKEN Center for Emergent Matter Science, in collaboration with researchers from Purdue University in the United States, have now demonstrated the scalability of quantum dot architectures by trapping and controlling four electrons in a single device. Electrons have a property known as spin that can be either ‘up’ or ‘down’. This is the same binary coding as used in conventional computing, but electrons can also be linked at the quantum level to form quantum bits, or ‘qubits’, that can have many more usable states, providing dramatic improvements in computational performance. “The number of manipulated electrons is increased only by one with respect to previous structures,” explains Delbecq, “but even a small increase in the number of electrons significantly increases the complexity of device manipulation.” Each of the dots in the device created by Delbecq’s team was formed by three nanoscale metallic electrodes on a semiconductor substrate. The capacitance between each dot couples the electron in one dot to that in the next, and the researchers could tune the strength of this coupling by adjusting the voltages applied to the electrodes. All this was achieved at extremely low temperatures, just a fraction above absolute zero.
The researchers demonstrated a scheme for both controlling the electrons in the four quantum dots and measuring or ‘reading out’ the spin state of the electrons. “The next step is to form four spin qubits with this architecture and use them to actually perform computations,” says Delbecq. The results demonstrate that quantum dot architecture has the potential to be scaled up to the number of qubits needed to realize a fully functional quantum computer. Source: RIKEN
Recent American History

Course Number: HIS 222W
Lab Hours: 0
Lecture Hours: 45

Course Description: Prerequisites: READING LEVEL 3 and WRITING LEVEL 3. Surveys modern America from 1850. Examines topics such as: transportation, activism, politics, labor, industrialism, growth of government and regulation, war, economics, social diversity, civil rights, legalism, constitutionalism, Cold War ideology. (45-0)

Outcomes and Objectives

Compose an effective narrative that describes and analyzes the history of the United States in response to an analytical question.
2. Select from a range of media best suited to communicating a particular argument, narrative, or set of ideas.

Describe and analyze various types of historical sources appropriate to the study of the United States.
1. Describe the differences between primary and secondary sources.

Describe, analyze, and evaluate conflicting historical interpretations within the context of the United States.
1. Identify and describe conflicting historical interpretations.
2. Analyze the evidence supporting conflicting historical interpretations.
3. Evaluate the rhetorical effectiveness of conflicting historical interpretations.

Analyze and evaluate the ways in which the history of the United States informs the current political, cultural, and social issues of the United States and its relationship to global culture.
1. Compare, contrast, and contextualize the political, cultural, and social history of the United States and the present.

Analyze global paradigms relevant to the traditional narrative of Later American History.

Use writing tasks to promote learning.
1. Analyze course content in written form.
2. Demonstrate knowledge of subject matter.
3. Document attainment of skills learned.
4. Explain the subject matter in a coherent writing style.
RIP H.R. Giger, father of the chupacabra

The reasoning goes like this: Giger designed the monster, Sil, featured in the 1995 science-fiction film "Species". Soon after Species came out, a Puerto Rican woman named Madelyne Tolentino claimed she saw a creature near her house. She described it as having large eyes, walking on two legs, having no ears or nose, and a row of spikes down its spine. Tolentino's description strongly resembled the creature in Species. So much so that it was probably inspired by it. And Tolentino's account was the most influential early description of a chupacabra. Radford explains: The Sil creature and the chupacabra creature that Tolentino described are remarkably similar... As I discuss in my book "Tracking the Chupacabra: The Vampire Beast in Fact, Fiction and Folklore," both Sil and the chupacabra also have the same origin stories. The two best-known explanations for the chupacabra are that it is either an alien life form or the result of secret U.S. government genetics experiments gone wrong — a sort of Frankenstein scenario. These are also the origin explanations for the creature in "Species." Sil is both an extraterrestrial alien and the result of secret U.S. government genetics experiments gone wrong. The similarities between Sil and the chupacabra — both physically and in the stories told about them — are unmistakable. Thus the original and most influential chupacabra eyewitness in history described a monster she'd seen in a movie as a mysterious beast she encountered in real life. Over time the chupacabra has changed form, and most modern reports are not of the original chupacabra that Giger designed (and Tolentino described) but instead resemble mangy dogs, coyotes and even raccoons. It's unlikely that Tolentino intentionally created a hoax that spawned a famous monster. Instead she simply confused a real-life memory with something she experienced in a film.
This is a common — and harmless — phenomenon known in psychology as confabulation. So Giger designed Sil. Tolentino borrowed details from Sil when she described the creature outside her house. And Tolentino's account spawned the chupacabra legend. In that way, Giger became the father of the chupacabra.

Giger's Sil from Species, vs. an image of the Chupacabra based on Tolentino's description

Posted on Wed May 14, 2014

Not sure how convinced I am by this theory. Tolentino sees an entity one night and instead of noting its particular features her head conjures up a creature from a movie she saw? And she never thinks to herself, "Wow! That weird entity looks just like that thing I saw in that movie"? The human mind's mechanisms are subtle, and such things no doubt happen, but I don't think we can assume such a thing has happened in a specific case. R.I.P. H.R. Giger!

Posted by Pete Byrdie in UK on Thu May 15, 2014 at 12:09 AM
card verification value (CVV) On a typical credit card, there are two components to the CVV. The first code is recorded by the card issuer in a magnetic stripe that runs lengthwise along the back of the card. This stripe resembles magnetic tape and can contain a large amount of data. The code is recovered by sliding the card through a magnetic stripe reader that picks up the binary data in a manner similar to the way a tape drive works. The second code is a multi-digit numeral printed flat on the card, separate from the longer, embossed account numeral. On a VISA, MasterCard or Discover Card, the printed CVV contains three digits and is located on the back near the signature area. On an American Express card, it contains four digits and is located on the front near the embossed account numeral. When properly used, the CVV is highly effective against some forms of fraud. For example, if the data in the magnetic stripe is changed, a stripe reader will indicate a "damaged card" error. The flat-printed CVV is (or should be) routinely required for telephone or Internet-based purchases because it implies that the person placing the order has physical possession of the card. Some merchants check the flat-printed CVV even when transactions are conducted in person. CVV technology cannot protect against all forms of fraud. If a card is stolen or the legitimate user is tricked into divulging vital account information to a fraudulent merchant, unauthorized charges against the account can result. A common method of stealing credit card data is phishing, in which a criminal sends out legitimate-looking email in an attempt to gather personal and financial information from recipients. Once the criminal has possession of the CVV in addition to personal data from a victim, widespread fraud against that victim, including identity theft, can occur. 
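The length rules described above are simple enough to sketch in code. The following Python function is a hypothetical helper (the name and structure are ours, not from any payment library), and it checks only that a supplied code has the right number of digits for the card network; it cannot verify the code itself, which only the card issuer can do:

```python
# Illustrative sketch only: a merchant-side sanity check on CVV *format*.
# Real verification happens at the card issuer; this merely confirms the
# code has the expected number of digits for the stated card network.

def cvv_format_ok(network: str, cvv: str) -> bool:
    """Return True if `cvv` is all digits and the right length for `network`."""
    expected_length = {
        "visa": 3,
        "mastercard": 3,
        "discover": 3,       # 3-digit code printed on the back
        "amex": 4,           # American Express prints a 4-digit code on the front
    }.get(network.lower())
    if expected_length is None:
        return False         # unknown network: cannot check
    return cvv.isdigit() and len(cvv) == expected_length

# Examples:
# cvv_format_ok("visa", "123")   -> True
# cvv_format_ok("amex", "123")   -> False (Amex codes have 4 digits)
```

A check like this catches only typos at order time; as the article notes, the CVV's real value is that quoting it implies physical possession of the card.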
This was last updated in February 2008.
On Liberty Test | Final Test - Hard

Buy the On Liberty Lesson Plans
Name: _________________________ Period: ___________________

Short Answer Questions

1. Do Mormons face persecution in Mill's society?
2. Is it unethical to drive people down into being more slavish and less independent of will and thought?
3. What does the author imply exists to make the best of the citizenry?
4. What does the author believe is immoral?
5. What does John Stuart Mill repeat is needed and beneficial to humanity on the whole?

Short Essay Questions

1. What are two of the most significant questions regarding one's conduct?
2. What does Mill have to say about one's place in society in the Victorian age?
3. What is one strength of which the readers of this book must be aware?
4. Who is Wilhelm Von Humboldt?
5. With what question does the author begin this chapter?
6. What is the purpose of this chapter?
7. What else does the author want to do with his principles?
8. How would Aristotle have worded this previous statement about nurturing?
9. What does Mill observe about Mormons?
10. What does the author believe will happen if a government does not nurture its citizens?

Essay Topics

Write an essay for ONE of the following topics:

Essay Topic 1
Diversity of situations is important to Mill.
Part 1) What is meant by diversity of situations?
Part 2) Why is diversity in this sense important, according to Mill? Do you agree with Mill? Why or why not?
Part 3) How can a society or state keep this type of diversity? Compare the diversity of situations in Mill's time to today. Based on this text, what would he say about our diversity of situations?
Part 4) Do you feel that a lack of diversity of situations is a problem in our society today? Why or why not?

Essay Topic 2
Mill discusses the amount of sovereignty an individual has over him or herself.
Part 1) What is sovereignty? What does Mill question, regarding one's sovereignty?
Part 2) How does Mill study and consider the amount of sovereignty one has over him/herself? To what, if any, conclusion does he come?
Part 3) How much sovereignty do you have? How do you feel about this? Would you like more or less? Why?

Essay Topic 3
Copernicus' knowledge of the Solar system is discussed.
Part 1) Describe the story of Copernicus. Why does the author use this as an example to prove the importance of freedom of speech?
Part 2) If Copernicus had been wrong, should he still have been allowed to express his beliefs?
Part 3) What might cause someone to persecute and stifle those with whom they disagree? How does this hurt everyone involved?

(see the answer keys)

This section contains 854 words (approx. 3 pages at 300 words per page)
On Liberty from BookRags. (c)2016 BookRags, Inc. All rights reserved.
Sunbathers: Expect more sunbathing as global temperatures continue to rise. Reuters/David Gray

This year marks a "symbolic and significant milestone" for average global temperatures on land, which will reach a whole 1°C higher than temperatures before industrialisation, according to the World Meteorological Organisation. The rise in temperature is due to the combination of El Niño (a weather pattern characterized by warmer-than-average water temperatures in the Pacific Ocean) and man-made climate change caused by greenhouse gas emissions, according to the WMO. "We expect 2016 to be the warmest year ever, primarily because of climate change but around 25% because of El Niño," Adam Scaife, head of the Met Office's long-range forecasts, told the Guardian on Sunday. "The state of the global climate in 2015 will make history for a number of reasons," WMO Secretary-General Michel Jarraud said in a November statement. "Levels of greenhouse gases in the atmosphere reached new highs and in the Northern hemisphere spring 2015 the three-month global average concentration of CO2 crossed the 400 parts per million barrier for the first time." He added: "2015 is likely to be the hottest year on record, with ocean surface temperatures at the highest level since measurements began. This is all bad news for the planet."

Global temperature growth: This graph from the Met Office shows that global temperatures are now more than 1°C above pre-industrialisation (1850-1900) averages for the first time. Met Office

The link between climate change and extreme weather

The FT notes that this study "is part of a relatively new and still contentious branch of climate science" known as attribution science, which "tries to determine possible connections by using climate models and weather data."
Wacky weather around the world

Adam Scaife from the Met Office also sees a pattern of record temperatures emerging, telling the Financial Times that the Met Office's latest forecast "suggests that by the end of 2016 we will have seen three record, or near-record years in a row for global temperatures." It isn't just the UK that has been hit by unusual weather this month, though. We're seeing extreme weather patterns all over the world. The previous target, set in 2010, had been to make sure that "dangerous climate change" was prevented by keeping global temperature rises below 2°C.
From Wikipedia, the free encyclopedia

Aliases: SCT, entrez:6343, Secretin
External IDs: OMIM: 182099; MGI: 99466; HomoloGene: 137358; GeneCards: 6343
Location (UCSC): Chr 11: 0.63–0.63 Mb (human); Chr 7: 141.28–141.28 Mb (mouse)

Secretin is a hormone that regulates water homeostasis throughout the body and influences the environment of the duodenum by regulating secretions in the stomach, pancreas, and liver. It is a peptide hormone produced in the S cells of the duodenum, which are located in the intestinal glands.[3] In humans, the secretin peptide is encoded by the SCT gene.[4] Secretin helps regulate the pH of the duodenum by (1) inhibiting the secretion of gastric acid from the parietal cells of the stomach and (2) stimulating the production of bicarbonate from the centroacinar cells and intercalated ducts of the pancreas.[5] It also stimulates bile production by the liver; the bile emulsifies dietary fats in the duodenum so that pancreatic lipase can act upon them. Meanwhile, in concert with secretin's actions, the other main hormone simultaneously issued by the duodenum, cholecystokinin, is stimulating the gallbladder to contract, delivering its stored bile for the same reason. Prosecretin is a precursor to secretin that is present during digestion.
Secretin is stored in this unusable form, and is activated by gastric acid in the lower intestine to neutralize the pH and ensure no damage is done to the small intestine by the aforementioned acid.[6] In 2007, secretin was discovered to play a role in osmoregulation by acting on the hypothalamus, pituitary gland, and kidney.[7][8] Secretin was the first hormone to be identified.[9] In 1902, William Bayliss and Ernest Starling were studying how the nervous system controls the process of digestion.[10] It was known that the pancreas secreted digestive juices in response to the passage of food (chyme) through the pyloric sphincter into the duodenum. They discovered (by cutting all the nerves to the pancreas in their experimental animals) that this process was not, in fact, governed by the nervous system. They determined that a substance secreted by the intestinal lining stimulates the pancreas after being transported via the bloodstream. They named this intestinal secretion secretin. Secretin was the first such "chemical messenger" identified. This type of substance is now called a hormone, a term coined by Bayliss in 1905.[11] Secretin also has an amidated carboxyl-terminal amino acid which is valine.[13] The sequence of amino acids in secretin is H–His-Ser-Asp-Gly-Thr-Phe-Thr-Ser-Glu-Leu-Ser-Arg-Leu-Arg-Asp-Ser-Ala-Arg-Leu-Gln-Arg-Leu-Leu-Gln-Gly-Leu-Val–NH2.[13] Secretin is released into circulation and/or intestinal lumen in response to low duodenal pH that ranges between 2 and 4.5 depending on species.[15] Also, the secretion of secretin is increased by the products of protein digestion bathing the mucosa of the upper small intestine.[16] The acidity is due to hydrochloric acid in the chyme that enters the duodenum from the stomach via the pyloric sphincter. Secretin targets the pancreas, which causes the organ to secrete a bicarbonate-rich fluid that flows into the intestine. 
Bicarbonate is a base that neutralizes the acid, thus establishing a pH favorable to the action of other digestive enzymes in the small intestine and preventing acid burns.[17] Other factors are involved in the release of secretin such as bile salts and fatty acids, which result in additional bicarbonates being added to the small intestine.[18] Secretin release is inhibited by H2 antagonists, which reduce gastric acid secretion. As a result, if the pH in the duodenum increases above 4.5, secretin cannot be released.[19] Secretin stimulates the release of a watery bicarbonate solution from the pancreatic and bile duct epithelium. Pancreatic centroacinar cells have secretin receptors in their plasma membrane. As secretin binds to these receptors, it stimulates adenylate cyclase activity and converts ATP to cyclic AMP.[20] Cyclic AMP acts as second messenger in intracellular signal transduction and leads to an increase in the release of watery bicarbonate. It is known to promote the normal growth and maintenance of the pancreas. Secretin increases water and bicarbonate secretion from duodenal Brunner's glands to buffer the incoming protons of the acidic chyme.[21] It also enhances the effects of cholecystokinin to induce the secretion of digestive enzymes and bile from pancreas and gallbladder, respectively. It counteracts blood glucose concentration spikes by triggering increased insulin release from pancreas, following oral glucose intake.[22] Although secretin releases gastrin from gastrinomas, it inhibits gastrin release from the normal stomach. 
It reduces acid secretion by parietal cells of the stomach.[23] It does this through at least three mechanisms: 1) by stimulating release of somatostatin, 2) by inhibiting release of gastrin in the pyloric antrum, and 3) by direct downregulation of the parietal cell acid secretory mechanics.[24] This helps neutralize the pH of the digestive products entering the duodenum from the stomach, as digestive enzymes from the pancreas (e.g., pancreatic amylase and pancreatic lipase) function optimally at slightly basic pH.[21] Secretin has been widely used in the medical field, especially in pancreatic function testing, because it increases pancreatic secretions. Secretin is either injected[25] or given through a tube inserted through the nose and passed through the stomach into the duodenum.[26] This test can provide information about whether there are any abnormalities in the pancreas, which can include gastrinoma, pancreatitis or pancreatic cancer. Secretin has been proposed as a possible treatment for autism based on a hypothetical gut-brain connection; as yet there is no evidence to support it as effective.[27][28] Secretin modulates water and electrolyte transport in pancreatic duct cells,[29] liver cholangiocytes,[30] and epididymis epithelial cells.[31] It has been found[32] to play a role in the vasopressin-independent regulation of renal water reabsorption.[7] It has been suggested that abnormalities in such secretin release could explain the abnormalities underlying type D syndrome of inappropriate antidiuretic hormone hypersecretion (SIADH).[8] In these individuals, vasopressin release and response are normal, although abnormal renal expression, translocation of aquaporin 2, or both are found.[8] It has been suggested that "Secretin as a neurosecretory hormone from the posterior pituitary, therefore, could be the long-sought vasopressin independent mechanism to solve the riddle that has puzzled clinicians and physiologists for decades."[8]

References

1.
^ "Human PubMed Reference:".
2. ^ "Mouse PubMed Reference:".
3. ^ Häcki WH (1980). "Secretin". Clinics in Gastroenterology. 9 (3): 609–32. PMID 7000396.
4. ^ a b Kopin AS, Wheeler MB, Leiter AB (1990). "Secretin: structure of the precursor and tissue distribution of the mRNA". Proceedings of the National Academy of Sciences of the United States of America. 87 (6): 2299–303. Bibcode:1990PNAS...87.2299K. doi:10.1073/pnas.87.6.2299. JSTOR 2354038. PMC 53674. PMID 2315322.
5. ^ Whitmore TE, Holloway JL, Lofton-Day CE, Maurer MF, Chen L, Quinton TJ, Vincent JB, Scherer SW, Lok S (2000). "Human secretin (SCT): gene structure, chromosome location, and distribution of mRNA". Cytogenetics and Cell Genetics. 90 (1–2): 47–52. doi:10.1159/000015658. PMID 11060443.
6. ^ Gafvelin G, Jörnvall H, Mutt V (Sep 1990). "Processing of prosecretin: isolation of a secretin precursor from porcine intestine" (PDF). Proceedings of the National Academy of Sciences of the United States of America. 87 (17): 6781–5. doi:10.1073/pnas.87.17.6781. PMC 54621. PMID 2395872.
7. ^ a b Chu JY, Chung SC, Lam AK, Tam S, Chung SK, Chow BK (2007). "Phenotypes developed in secretin receptor-null mice indicated a role for secretin in regulating renal water reabsorption". Molecular and Cellular Biology. 27 (7): 2499–511. doi:10.1128/MCB.01088-06. PMC 1899889. PMID 17283064.
8. ^ a b c d e Chu JY, Lee LT, Lai CH, Vaudry H, Chan YS, Yung WH, Chow BK (2009). "Secretin as a neurohypophysial factor regulating body water homeostasis". Proceedings of the National Academy of Sciences of the United States of America. 106 (37): 15961–6. Bibcode:2009PNAS..10615961C. doi:10.1073/pnas.0903695106. JSTOR 40484830. PMC 2747226. PMID 19805236.
9. ^ Henriksen JH, Schaffalitzky de Muckadell OB (2002). "Sekretin - det første hormon" [Secretin--the first hormone]. Ugeskrift for Laeger (in Danish). 164 (3): 320–5. PMID 11816326. INIST:13419424.
10.
^ Bayliss WM, Starling EH (1902). "The mechanism of pancreatic secretion". The Journal of Physiology. 28 (5): 325–53. doi:10.1113/jphysiol.1902.sp000920. PMC 1540572. PMID 16992627.
11. ^ Hirst, BH (2004). "Secretin and the exposition of hormonal control". J Physiol. 560: 339. doi:10.1113/jphysiol.2004.073056. PMC 1665254. PMID 15308687.
13. ^ a b DeGroot, Leslie Jacob (1989). McGuigan, J. E., ed. Endocrinology. Philadelphia: Saunders. p. 2748. ISBN 0-7216-2888-5.
14. ^ Polak JM, Coulling I, Bloom S, Pearse AG (1971). "Immunofluorescent localization of secretin and enteroglucagon in human intestinal mucosa". Scandinavian Journal of Gastroenterology. 6 (8): 739–44. doi:10.3109/00365527109179946. PMID 4945081.
15. ^ a b Frohman, Lawrence A.; Felig, Philip (2001). "Gastrointestinal Hormones and Carcinoid Syndrome". In Ghosh, P. K.; O'Dorisio, T. M. Endocrinology & metabolism. New York: McGraw-Hill, Medical Pub. Div. pp. 1675–701. ISBN 0-07-022001-8.
16. ^ Ganong, William F. (2003). "Regulation of Gastrointestinal Function". Review of Medical Physiology (21st ed.). New York: McGraw-Hill, Medical Pub. Div. ISBN 0-07-140236-5. [page needed]
17. ^
18. ^ Osnes M, Hanssen LE, Flaten O, Myren J (1978). "Exocrine pancreatic secretion and immunoreactive secretin (IRS) release after intraduodenal instillation of bile in man". Gut. 19 (3): 180–4. doi:10.1136/gut.19.3.180. PMC 1411891. PMID 631638.
19. ^ Rominger JM, Chey WY, Chang TM (1981). "Plasma secretin concentrations and gastric pH in healthy subjects and patients with digestive diseases". Digestive Diseases and Sciences. 26 (7): 591–7. doi:10.1007/BF01367670. PMID 7249893.
20. ^ Gardner, JD (1978). "Receptors and gastrointestinal hormones". In Sleisenger, MH; Fordtran, JS. Gastrointestinal Disease (2nd ed.). Philadelphia: WB Saunders Company. pp. 179–95.
21. ^ a b Hall, John E.; Guyton, Arthur C. (2006). Textbook of medical physiology. St. Louis, Mo: Elsevier Saunders. pp.
800–1. ISBN 0-7216-0240-1.
22. ^ Kraegen EW, Chisholm DJ, Young JD, Lazarus L (1970). "The gastrointestinal stimulus to insulin release. II. A dual action of secretin". The Journal of Clinical Investigation. 49 (3): 524–9. doi:10.1172/JCI106262. PMC 322500. PMID 5415678.
23. ^ Palmer, KR; Penman, ID (2010). "Alimentary tract and pancreatic disease". In Colledge, NR; Walker, BR; Ralston, SH. Davidson's Principles and Practice of Medicine (20th ed.). Edinburgh: Churchill Livingstone. p. 844. ISBN 0-7020-3085-6.
24. ^ Boron, Walter F.; Boulpaep, Emile L. (2012). "Acid secretion". Medical Physiology (2nd ed.). Philadelphia: Saunders. p. 1352. ISBN 978-1-4377-1753-2.
25. ^ "Human Secretin". Patient Information Sheets. United States Food and Drug Administration. 2004-07-13. Archived from the original on May 11, 2009. Retrieved 2008-11-01.
28. ^ Sandler AD, Sutton KA, DeWeese J, Girardi MA, Sheppard V, Bodfish JW (1999). "Lack of benefit of a single dose of synthetic human secretin in the treatment of autism and pervasive developmental disorder". The New England Journal of Medicine. 341 (24): 1801–6. doi:10.1056/NEJM199912093412404. PMID 10588965.
29. ^ Villanger O, Veel T, Raeder MG (1995). "Secretin causes H+/HCO3- secretion from pig pancreatic ductules by vacuolar-type H(+)-adenosine triphosphatase". Gastroenterology. 108 (3): 850–9. doi:10.1016/0016-5085(95)90460-3. PMID 7875488.
30. ^ Marinelli RA, Pham L, Agre P, LaRusso NF (1997). "Secretin promotes osmotic water transport in rat cholangiocytes by increasing aquaporin-1 water channels in plasma membrane. Evidence for a secretin-induced vesicular translocation of aquaporin-1". The Journal of Biological Chemistry. 272 (20): 12984–8. doi:10.1074/jbc.272.20.12984. PMID 9148905.
31. ^ Chow BK, Cheung KH, Tsang EM, Leung MC, Lee SM, Wong PY (2004). "Secretin controls anion secretion in the rat epididymis in an autocrine/paracrine fashion". Biology of Reproduction. 70 (6): 1594–9.
doi:10.1095/biolreprod.103.024257. PMID 14749298.
32. ^ Cheng CY, Chu JY, Chow BK (2009). "Vasopressin-independent mechanisms in controlling water homeostasis". Journal of Molecular Endocrinology. 43 (3): 81–92. doi:10.1677/JME-08-0123. PMID 19318428.
33. ^ Lee VH, Lee LT, Chu JY, Lam IP, Siu FK, Vaudry H, Chow BK (2010). "An indispensable role of secretin in mediating the osmoregulatory functions of angiotensin II". FASEB Journal. 24 (12): 5024–32. doi:10.1096/fj.10-165399. PMC 2992369. PMID 20739612.
34. ^ Cheng CY, Chu JY, Chow BK (2011). "Central and peripheral administration of secretin inhibits food intake in mice through the activation of the melanocortin system". Neuropsychopharmacology. 36 (2): 459–71. doi:10.1038/npp.2010.178. PMC 3055665. PMID 20927047.
Khan in the US

1. #486 Mcbride
2. #487 Ayala
3. #488 Moody
4. #489 Pope
5. #490 Khan
6. #491 Sparks
7. #492 Norton
8. #493 Marsh
9. #494 Owen

Meaning & Origins

Muslim: from a personal name or status name based on Turkish khan ‘ruler’, ‘nobleman’. This was originally a hereditary title among Tartar and Mongolian tribesmen (in particular Genghis Khan, 1162–1227), but is now very widely used throughout the Muslim world as a personal name. In Iran and parts of the Indian subcontinent it is used as an honorific title after a person's personal name. 490th in the U.S.

Nicknames & variations

Quick facts

Tens of thousands of people in the U.S. have this name. New York has the most people named Khan per capita.

Top state populations
Friday, November 30, 2012

The Ten Merits of Sake

--Otomo no Tabito (c. 662-731)

The Japanese have long valued Sake for its taste, its medicinal uses, and even as a beauty product. For at least 2000 years, Sake has occupied a special place within Japanese culture. I have often tried to promote the benefits of drinking Sake, and recently learned about a historical list of the "Ten Merits of Sake." It is a fascinating list of the benefits of Sake, and I wanted to share it with my readers, to give you more reasons why you should partake of this wondrous beverage.

The list comes from a kyōgen play called Mochisake, which was written during the Muromachi period (1338-1573 AD). Kyōgen is a form of traditional comic theater, often including slapstick and satire, and the plays are usually short, containing only two or three roles. They are meant to be easy to understand and intended to make people laugh, and there are over 250 plays in the official repertoire. Makes me think of a Three Stooges episode.

Mochisake, which can roughly be translated as Rice Cake & Sake, is a play about two farmers who are each traveling to the city to pay their back taxes, which they had been unable to pay because of a terrible snowstorm. Each farmer is also carrying a special item which he hopes might persuade the tax collector to go easy on him. Kind of a bribe. One of the farmers has some kagami mochi, mirror rice cake, which looks like two oval mochi stacked atop each other. The other farmer has some kikuzake, sake flavored with chrysanthemum petals.

The farmers do not know each other but meet en route and end up talking, discussing their mutual problem. When they finally reach the city, they go before the tax collector, explain about the blizzard, and present their mochi and sake. The tax collector is in an excellent mood and forgives them both. In fact, they all end up celebrating together, sharing the mochi and sake as well as singing and dancing.
During the course of the play, the Ten Merits of Sake are mentioned:

1) Sake can be better for your health than any medicine.
2) Sake will enable you to live longer.
3) Sake will help you recover from fatigue and weariness.
4) Sake will drive gloom away and cheer you up.
5) You can make friends with anyone over a drink of sake.
6) Sake will create an atmosphere in which everyone can express their opinions frankly (even to their superiors or seniors).
7) Sake is a good friend to those who live alone.
8) Sake will make you feel warm enough to endure cold weather.
9) Sake can serve as a versatile but nourishing meal during a trip.
10) Sake will be a great gift when you visit friends.

(from Sake, Health and Longevity by Yukio Takizawa)

This is a great list, though one could argue that much of it applies to wine as well. It is indicative, though, of the deep love the Japanese possess for Sake, and of how deeply it is rooted in their culture and history. In addition, Sake is not seen as a drink for the elite, as some pretentious, hoity-toity alcohol, a problem that sometimes plagues wine. Sake is a drink for everyone, of whatever social status, of whatever profession. It pleases both peasant and Emperor.

As the holiday season approaches, remember #10: Sake makes a great gift.
=head1 NAME

Popular Perl Complaints and Myths

=head1 Description

This document tries to explain the myths about Perl and overturn the FUD that certain bodies try to spread.

=head1 Abbreviations

=over 4

=item * B<M> = Misconception or Myth

=item * B<R> = Response

=back

=head2 Interpreted vs. Compiled

=over 4

=item M: Each dynamic Perl page hit needs to load the Perl interpreter and compile the script, then run it, each time a dynamic web page is hit. This dramatically decreases performance and makes Perl an unscalable model, since so much overhead is required to serve each page.

=item R: This myth was true years ago, before the advent of mod_perl. mod_perl loads the interpreter into memory once and never needs to load it again. Each Perl program is compiled only once, and the compiled version is then kept in memory and used each time the program is run. In this way there is no extra overhead when hitting a mod_perl page.

=back

=head3 Interpreted vs. Compiled (More Gory Details)

=over 4

=item R: Compiled code always has the potential to be faster than interpreted code. Ultimately, all interpreted code needs to be converted to native instructions at some point, and this invariably has to be done by a compiled application. That said, an interpreted language CAN be faster than a comparable native application in certain situations, given certain common programming practices. For example, the allocation and de-allocation of memory can be a relatively expensive process in a tightly scoped compiled language, whereas interpreted languages typically use garbage collectors, which don't need to do expensive deallocation in a tight loop, instead waiting until additional memory is absolutely necessary, or for a less computationally intensive period. Of course, using a garbage collector in C would eliminate this edge, but since garbage collectors are uncommon in C, whereas Perl and most other interpreted languages have them built in, the edge stands in common practice.
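To make the compile-once behavior described above concrete, here is a minimal sketch of a mod_perl 1.x content handler. The module name and URI are hypothetical, and the httpd.conf fragment that wires it up is shown as comments; since it depends on a running Apache with mod_perl, treat it as a configuration sketch rather than a standalone program.

```perl
# My/Hello.pm -- a minimal mod_perl 1.x content handler (hypothetical name).
# It would be wired into Apache with an httpd.conf fragment such as:
#
#   PerlModule My::Hello
#   <Location /hello>
#       SetHandler perl-script
#       PerlHandler My::Hello
#   </Location>
#
# The module is compiled once when the server starts; every subsequent
# request runs the already-compiled handler with no interpreter startup
# or recompilation cost.
package My::Hello;
use strict;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;                       # the Apache request object
    $r->send_http_header('text/plain');
    $r->print("Hello from mod_perl\n");
    return OK;
}

1;
```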
It is also important to point out that few people use the full potential of their modern CPU with a single application. Modern CPUs are more than fast enough to run interpreted code, and many processors include instruction sets designed to increase the performance of interpreted code.

=back

=head2 Perl is overly memory intensive, making it unscalable (Myth)

=over 4

=item M: Each child process needs the Perl interpreter and all code in memory. Even with mod_perl, httpd processes tend to be overly large, slowing performance and requiring much more hardware.

=item R: In mod_perl the interpreter is loaded into the parent process and shared between the children. Also, when scripts are loaded into the parent and the parent forks a child httpd process, that child shares those scripts with the parent. So while the child may take 6MB of memory, 5MB of that might be shared, meaning it only really uses 1MB per child. Even 5MB of memory per child is not uncommon for web applications in other languages.

Also, most modern operating systems support the concept of shared libraries. Perl can be compiled as a shared library, enabling the bulk of the interpreter to be shared between processes. Some executable formats on some platforms (I believe ELF is one such format) are able to share entire executable TEXT segments between unrelated processes.

=back

=head3 More Tuning Advice:

=over 4

=item * L

=back

=head2 Not enough support, or tools to develop with Perl. (Myth)

=over 4

=item R: Of all web applications and languages, Perl arguably has the most support and tools. B<CPAN> is a central repository of Perl modules, which are freely downloadable and usually well supported. There are literally thousands of modules, which make building web apps in Perl much easier. There are also countless mailing lists of extremely responsive Perl experts who usually respond to questions within an hour.
There are also a number of Perl development environments that make building Perl web applications easier. Just to name a few, there are C, C, C, C, etc.

=back

=head2 If Perl scales so well, how come no large sites use it? (Myth)

=over 4

=item R: Actually, many large sites DO use Perl for the bulk of their web applications. Here are some, just as an example: B, B, B<IMDB> ( http://imdb.com ), B<ValueClick> ( http://valueclick.com ), B, B ( http://cmpnet.com ), B/B, and B, to name a few. Even B has taken an interest in Perl via http://www.activestate.com/.

=back

=head2 Perl, even with mod_perl, is always slower than C. (Myth)

=over 4

=item R: The Perl engine is written in C. There is no point arguing that Perl is faster than C, because anything written in Perl could obviously be re-written in C. The same holds true for arguing that C is faster than assembly. There are two issues to consider here. First of all, many times a web application written in Perl B<can be faster> than one written in C, thanks to the low-level optimizations in the Perl compiler. In other words, it's easier to write poorly written C than well written Perl. Secondly, it's important to weigh all factors when choosing a language to build a web application in. Time to market is often one of the highest priorities in creating a web application. Development in Perl can often be twice as fast as in C, mostly due to the differences in the languages themselves, as well as the wealth of free examples and modules which speed development significantly. Perl's speedy development time can be a huge competitive advantage.

=back

=head2 Java does away with the need for Perl. (Myth)

=over 4

=item M: Perl had its place in the past, but now there's Java, and Java will kill Perl.

=item R: Java and Perl are actually more complementary languages than competitive. It's widely accepted that server-side Java solutions such as C, C and C are far slower than mod_perl solutions (see next myth). Even so, Java is often used as the front end for server-side Perl applications.
Unlike Perl, with Java you can create advanced client-side applications. Combined with the strength of server-side Perl, these client-side Java applications can be made very powerful.

=back

=head2 Perl can't create advanced client side applications

=over 4

=item R: True. There are some client-side Perl solutions, like PerlScript in MSIE 5.0, but all client-side Perl requires the user to have the Perl interpreter on their local machine, and most users do not. Most Perl programmers who need to create an advanced client-side application use Java as their client-side programming language and Perl as the server-side solution.

=back

=head2 ASP makes Perl obsolete as a web programming language. (Myth)

=over 4

=item M: With Perl you have to write individual programs for each set of pages. With ASP you can write simple code directly within HTML pages. ASP is the Perl killer.

=item R: There are many solutions which allow you to embed Perl in web pages, just like ASP. In fact, you can actually use Perl IN ASP pages with PerlScript. Other solutions include: C, C, C, C and C. Also, Microsoft and ActiveState have worked very hard to make Perl run equally well on NT as on Unix. You can even create COM modules in Perl that can be used from within ASP pages.

Some other advantages Perl has over ASP: mod_perl is usually much faster than ASP; Perl has much more example code and full programs which are freely downloadable; and Perl is cross-platform, able to run on Solaris, Linux, SCO, Digital Unix, Unix V, AIX, OS2, VMS, MacOS, Win95-98 and NT, to name a few. Also, benchmarks show that embedded Perl solutions outperform ASP/VB on IIS by several orders of magnitude. Perl is also a much easier language for some to learn, especially those with a background in C or C++.

=back

=head1 Credits

Thanks to the mod_perl list for all of the good information and criticism.
I'd especially like to thank,

=over 4

=item * Stas Bekman E<lt>stas@stason.orgE<gt>

=item * Thornton Prime E<lt>thornton@cnation.comE<gt>

=item * Chip Turner E<lt>chip@ns.zfx.comE<gt>

=item * Clinton E<lt>clint@drtech.co.ukE<gt>

=item * Joshua Chamas E<lt>joshua@chamas.comE<gt>

=item * John Edstrom E<lt>edstrom@Poopsie.hmsc.orst.eduE<gt>

=item * Rasmus Lerdorf E<lt>rasmus@lerdorf.on.caE<gt>

=item * Nedim Cholich E<lt>nedim@comstar.netE<gt>

=item * Mike Perry E<lt> http://www.icorp.net/icorp/feedback.htm E<gt>

=item * Finally, I'd like to thank Robert Santos E<lt>robert@cnation.comE<gt>, CyberNation's lead Business Development guy, for inspiring this document.

=back

=head1 Maintainers

Maintainer is the person(s) you should contact with updates, corrections and patches.

=over

=item * Contact the L

=back

=head1 Authors

=over

=item * Adam Pisoni E<lt>adam@cnation.comE<gt>

=back

Only the major authors are listed above. For contributors see the Changes file.

=cut