For many of us, making healthier food choices is a way of life. We read labels, we're mindful of our portions, we limit the amount of junk and processed food we eat, we limit our alcohol intake and we make a concerted effort to eat "clean" and healthy. We put in the time and effort to exercise and create a fit body. So with this kind of effort, why aren't many of us looking, feeling and living our best? While healthy eating and exercise are huge steps in the right direction, they're only one piece of the wellness puzzle. Let's take a look at some other important pieces.

Eating When Hungry

Take a look at why you eat. Are you eating when your body is hungry, or do you eat based on the time, the size of the plate or the event around you? There's a big difference between hunger and appetite. The body thrives when you eat because of hunger, but stores fat when you eat because of appetite. Here's how to tell the difference.

Hunger is a physiological response to the body's need for food. Your stomach may rumble, you may feel lightheaded and you want food quickly. Often it doesn't really matter what you eat, as long as you get something into your system quickly. Appetite is triggered by emotions, by something you've just seen or thought about, or even by a delicious smell. With appetite, you feel an immediate urge to eat something, and typically it's something specific. When eating is driven by appetite, different textures correspond to certain emotions: you may want something crunchy when you're angry and something smooth and creamy when you're sad. Reacting to appetite is a recipe for weight gain because your body doesn't need the food; its only option is to convert it to fat and store it for you.

With mindless eating, you take in much more than you need because you're distracted while eating. Are you eating while cooking, while passing food to others, or taking in too much because you're reading or watching TV during a meal? You can take in thousands of excess calories through "mindless munching." Emotional eating may also be a factor when you eat to soothe, calm, numb and relax yourself from problems or pain. It's a self-soothing technique in which you medicate, using food as your drug of choice. Mindless and emotional eating not only cause weight gain but can waste years as you struggle to maintain a healthy weight.

Shaking It Up

If your body isn't challenged, it gets bored. Are you doing the same routine consistently, and has it gotten easier over time? If so, it's time to shake things up and create some "muscle confusion." This could mean varying your pace, or throwing in intervals or bursts of intense plyometric movement to dramatically increase the intensity for a short period of time. It could also mean using weights, resistance bands or machines, or trying an organized sport, a new fitness class, a DVD or a different route to run.

In addition to cardio routines, building muscle is critical to any fitness plan. It fires up your metabolism and gives you a fit, toned body, and it improves your quality of life by simply making everyday tasks easier to perform. Look at your day and add some activity and muscle building beyond your workouts. Are you sitting behind a desk for the rest of the day? While workouts are important, they can't make up for an otherwise sedentary lifestyle.

Adequate sleep gives you more clarity, better focus, greater concentration and more energy to get you through the day. Without enough sleep, you'll look for energy in sugar and caffeine; the perfect recipe for weight gain. The empty calories from sugar give you a temporary energy surge and an inevitable crash, which leaves you craving more sugar to pick you up again. This short-lived fuel doesn't sustain or nurture your body, and the calories add up quickly. Also, without enough sleep, certain hormones and chemicals don't have an opportunity to rebalance and replenish themselves. Unfortunately, some of these chemicals promote fat storage and increased appetite.

Managing Immunity-Suppressing Stress

You can be eating well and exercising, but if you're living with chronic, unmanaged stress, you're suppressing your immune system as well as causing physical, mental and emotional wear and tear. If you're under constant stress and you're an emotional eater, the stress you feel will trigger a binge. You also won't be interested in healthy meal planning, label reading and portion control, because you're consumed by your stress. In addition, you may reach for comfort foods, which are loaded with fat, sugar and calories.

When stress suppresses your immune system, you're less able to fight off bacterial and viral invasion, so you're more susceptible to illness. Under stress, you also keep your body tight, which leads to muscle aches, pulls, tears, headaches and more. Chronic stress affects your digestive, nervous and even reproductive systems. Digestive disturbances such as irritable bowel syndrome, ulcers, Crohn's disease and acid reflux have all been shown to be connected to stress.

Your relationships are either good or bad for your health. Supportive, loving, positive and nurturing relationships improve the immune system, flood your body with "feel good" hormones and chemicals, and give you a sense of connectedness. Negative, critical, judgmental and pessimistic people suppress your immune system, flood your body with stress hormones and discourage you from being, doing and having more. Take a look at your relationships and how they contribute to, or undermine, your health.

A lack of confidence, low self-esteem, a poor self-image and a belief system that doesn't serve you will prevent you from living the life you want, no matter how well you eat and how much you exercise. If you feel worthy and deserving of love, health, wellness and success, then your thoughts, behaviors and actions will support those goals. If you feel unworthy of love, health, wellness and success, your actions and behaviors will unfortunately support those beliefs too.

True health is being healthy from the inside out. Which of these pieces of the health puzzle are you missing? Begin to put them in place to form a healthy body that thrives on a healthy lifestyle, a healthy mind and a healthy spirit.
Source: http://www.bestlifedesign.com/lifestyle/finding-and-fixing-the-holes-in-your-wellness-plan.html
Young geographers team up with scientists to lobby Prime Minister

Press release issued: 9 November 2012

Aspiring geographers from across Bristol will be joined by some of the country's leading scientists to learn about one of the most important and pressing challenges of our time: climate change.

The Great Environment Debate 2012, hosted by academics from the University of Bristol's Cabot Institute, will bring young people together to teach them how human actions modify the Earth's climate and what challenges lie ahead. Around 150 pupils from local schools and colleges will attend the event, which comprises a series of visual presentations and practical workshops on climate change related topics.

During the day, pupils will be asked to produce environmental action plans that they believe are feasible to implement to reduce waste in their community and school. At the end of the event, pupils will have an opportunity to debate climate change mitigation actions and policies, and to use what they have learnt as the basis of a letter to the Prime Minister advising what the Government should be doing to cut carbon emissions by 2050.

Dr Chris Deeming, Senior Research Fellow in the University's School of Geographical Sciences, said: "Climate change is an especially challenging environmental problem because it's global. But even individuals can easily take big steps to reduce their contribution to climate change. Educating the next generation on the measures they can undertake to limit climate change is crucial to the future of our planet.

"Young people's voices in the climate change debate are important, and young people should be actively encouraged to debate such issues in public life."

The ESRC-funded workshop, which takes place on Friday 9 November in the University's Wills Memorial Building, is part of Thinking Futures, a free festival of events open to members of the public. A full programme is available on the Thinking Futures website. The Festival has been organised by the University of Bristol's Faculty of Social Sciences and Law with support from the Centre for Public Engagement.
Source: http://www.bristol.ac.uk/news/2012/8909.html
Modern warfare requires more than simple brute strength. The Information Age has revealed new ways to communicate and access information, and these new rules have also affected the way wars are fought and won.

Imagery gathered by satellites and drones must make its way to units on the ground for analysis. While a drone can take a photograph, it cannot tell you what it's looking at, or what to do about it, if anything. A human brain is necessary to spot a terrorist stronghold, or to recognize that a missile launcher in one photo has been moved when compared with an older photo.

"Full exploitation of this information is a major challenge," officials with the Defense Advanced Research Projects Agency (DARPA) wrote in a 2009 brief on "deep learning." "Human observation and analysis of [intelligence, surveillance and reconnaissance] assets is essential, but the training of humans is both expensive and time-consuming. Human performance also varies due to individuals' capabilities and training, fatigue, boredom, and human attentional capacity."

Working with a team of researchers at MIT, DARPA is hoping to take all of that human know-how and shrink it down into a processing unit no bigger than your cellphone, using a microchip known as "Eyeriss." The concept relies on "neural networks": computerized memory networks based on the workings of the human brain.

A palm-sized neural network chip could be installed in drones or satellites, allowing these units to conduct their own learning in real time, without the need for human analysis. Instead of a team of individuals combing through imagery looking for a single target, a drone could simply alert soldiers on the ground once it has identified a target. The technology could also work in disaster zones, allowing drones to spot and identify people in need, and then communicate location and other data to aid workers.

Current deep learning technology requires a large number of servers and the energy necessary to run those computers. Data can be sent to warehouses containing the computers for analysis, but that requires an Internet connection, which isn't always readily available in combat situations and, when it is, is not always secure.

But Eyeriss could change the way the game of war is played. By packing more processing power into a much smaller space, the microchip could allow our handheld devices to become even smaller, and allow drones and satellites to operate without a need for massive server warehouses or hundreds of human analysts.
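The article stays at a high level, so for readers wondering what a "neural network" actually computes at inference time, here is a minimal sketch: the forward pass of a tiny fully connected network, which is essentially the multiply-accumulate workload a chip like Eyeriss is built to accelerate. All layer sizes and random weights below are illustrative assumptions; nothing here is taken from DARPA's or MIT's actual design.

```python
# Minimal sketch of the kind of computation a neural-network
# accelerator speeds up: repeated matrix-vector products with a
# nonlinearity. Sizes and weights are illustrative, not Eyeriss's.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified-linear activation, applied elementwise.
    return np.maximum(0.0, x)

# A toy 3-layer classifier: a flattened 32x32 "image" in, two hidden
# layers, and one score per output class.
layer_sizes = [1024, 256, 64, 10]
weights = [rng.normal(0.0, 0.05, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

def forward(x):
    # Each layer is a matrix-vector multiply plus a bias, then ReLU;
    # the final layer stays linear so its scores can be compared.
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = W @ x + b
        if i < len(weights) - 1:
            x = relu(x)
    return x

pixels = rng.random(1024)  # stand-in for one flattened sensor frame
scores = forward(pixels)
print("predicted class:", int(np.argmax(scores)))
```

On servers this arithmetic runs across racks of machines; the engineering point of an embedded accelerator is to execute the same loop within the power and size budget of a drone or a phone.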
Source: http://sputniknews.com/us/20160206/1034319282/darpa-neural-microship-eyeriss.html
Robert Pozen, Contributor

At the depth of the Depression in 1933, Congress passed the Glass-Steagall Act to separate commercial banking from the securities business: JPMorgan here, Morgan Stanley there. Congress repealed that separation in 1999.

Now there's a backlash against that repeal. Some commentators and some politicians blame the repeal for the 2008 financial crisis. They want to resurrect Glass-Steagall. This is a bad idea. The repeal of Glass-Steagall was at most a minor factor in the lead-up to the financial crisis, and its repeal was instrumental in resolving the liquidity squeeze on Wall Street.

The wall between commercial and investment banking was long filled with holes. Even under Glass-Steagall, commercial banks could invest in bonds, manage mutual funds, execute securities trades on the order of their customers and underwrite government-related securities. The main thing they couldn't do was underwrite corporate stocks and bonds. Even that prohibition was loosened, as regulators permitted bank holding companies to set up special subsidiaries devoted in part to underwriting corporate stocks and bonds. In other words, the main impact of repealing Glass-Steagall was to allow banking organizations to become more active in underwriting.

It was after the repeal that banks got deeply into underwriting mortgage-backed securities, and it was mortgage securities that triggered the crisis. But this doesn't mean that the expansion of underwriting was an important cause of losses at these banks. If this argument were true, we would expect to see the portfolios of big banks crammed with low-rated mortgage paper that they could not sell in their underwritings. In fact, big banks held primarily the mortgage securities with the highest ratings, which they would have been permitted to hold under Glass-Steagall.

It could also be argued that large banks would be tempted to make bad loans to companies in order to get their underwriting business. While such practices have periodically occurred, they have for years been prohibited by several federal statutes. Moreover, when the SEC looked at the banks that became insolvent during 2008, it found that the main cause of insolvency was losses on traditional bank loans unrelated to their securities underwritings.

At the same time, the repeal of Glass-Steagall facilitated the rescue of four large investment banks and thereby helped reduce the severity of the financial crisis. When Bear Stearns and Merrill Lynch got into serious trouble, they were promptly acquired, with federal assistance, by JPMorgan Chase and Bank of America, respectively. And when Goldman Sachs and Morgan Stanley came under pressure, they were allowed to convert into bank holding companies, gaining access to more stable funding.

Banks have a significant advantage over broker-dealers in obtaining short-term financing in illiquid markets. A bank can rely on insured deposits and Fed loans as well as short-term financing in the form of commercial paper. Commercial paper buyers are a fickle bunch; bank depositors are more stable retail customers.

Given the globalization of the financial markets, it would be foolhardy to prohibit U.S. banks from engaging in securities activities that are performed by their global competitors. And it would be almost impossible to obtain an international agreement that all countries would impose, many for the first time, the restrictions of Glass-Steagall on their local banking activities. Indeed, even when Glass-Steagall restricted the securities activities of U.S. banks, the law never extended to activities those banks conducted outside the U.S.

In short, reinstating the Glass-Steagall Act would not prevent another financial crisis. It would just increase the severity of any such crisis by limiting the options for helping securities firms in liquidity crunches. Moreover, imposing restrictions on the securities activities of U.S. banks would put them at a tremendous disadvantage relative to their foreign competitors.

Robert Pozen is chairman of MFS Investment Management and the author of the forthcoming Too Big to Save? How to Fix the U.S. Financial System (John Wiley).
Source: http://www.forbes.com/forbes/2009/1005/opinions-glass-steagall-on-my-mind.html
An Australian report on the children of same-sex couples has revealed that they have higher self-esteem, greater family cohesion, and better overall health than the national average.

The Australian Study of Child Health in Same-Sex Families is the world's largest attempt to study how children raised by same-sex couples compare to children raised by heterosexual couples. According to a preliminary report on the study of 500 children across Australia, these young people are not only thriving, but they also have higher rates of family cohesion than other families.

The study looked at important indicators including self-esteem, emotional behaviour and the amount of time spent with parents. Children of same-sex couples scored higher than the national average for overall health and for how well the family gets along. The researchers hypothesised that if a student experiences stigma at school, the families of same-sex couples are "generally more willing to communicate and approach the issues", resulting in a closer family dynamic.

In another recent study, by the University of Cambridge's Centre for Family Research, it was found that children adopted by same-sex parents do not suffer any disadvantage and that the vast majority were not bullied at school.

Professor Susan Golombok, director of the Cambridge centre and report co-author, said: "What I don't like is when people make assumptions that a certain type of family, such as gay fathers, will be bad for children.

"The anxieties about the potentially negative effects for children of being placed with gay fathers seem to be, from our study, unfounded."
Source: http://www.pinknews.co.uk/2013/06/28/australia-children-with-same-sex-parents-have-better-overall-health-than-national-average/
This app was removed from the App Store.

Everyday Sight Words - lite

iOS Universal Education

This app is the real deal when it comes to teaching sight words. It's highly educational, effectively contextualizes words by organizing them by location (e.g. "bathroom," "bedroom," or "park"), and makes excellent use of audio learning cues. It also provides lots of positive reinforcement when the user succeeds.

- Appeals to a Wide Audience
- Easy to Use
- Good Challenge Range
- Great for Tablets

Read further to learn more...

- An interactive and thoughtfully designed app for children to learn everyday sight words in a smart way that fosters thinking and understanding.
- The free version is fully featured but includes only 4 everyday environments (Bedroom, Bathroom, Breakfast, Classroom), each consisting of more than 12 words to learn. To obtain more environments like Playground, Park, Garden, Kitchen, Street, Store, and others, please upgrade to the full version.
- Kids Learn Everyday Words is the first in a series of Language Learning Curriculum apps from Nimble Minds.
- Intended for kids ages 2-7.
- What will children learn? Children will learn the sight words and names of the objects they find in their everyday environments. They will also learn about the different everyday situations and places they encounter in a typical day, or will encounter as they grow up.
- How is it different from other word-learning apps? At Nimble Minds, our focus is to help kids become smart learners, so they can achieve more without working too hard. Instead of using flash cards and rote memorization techniques, this app employs an active learning process where children interact with the graphics to gain knowledge. They are given a particular situation and asked to discover a new object or identify a particular object. Children learn new words for objects with respect to the environments in which they exist. At the same time, they also come to understand the relationship of one object to another. The graphics are HD and have been made highly attractive to gain children's attention, which in turn allows them to get involved and learn the words without even working hard to remember them. The child-friendly voice-over, sounds of applause, cheerful music and encouraging remarks in the app, like "superb", "very good", and "you got it", will ensure a very positive learning experience for young developing minds.
- Activities included in the game: The game contains two modes, Learn and Play. In Learn mode, kids tap on the question mark sitting above an object to find out what it is called. In Play mode, their recently gained knowledge from the Learn section is tested and reinforced by asking them to identify and find the objects for the words given. Every mode and each situation is designed so children enjoy success time after time and receive encouraging remarks in a friendly manner.
- This app does not contain any ads, because we believe it is inappropriate for kids to come across indecent or unnecessary images while they are learning and playing. It is also equipped with child locks to prevent kids from accidentally buying the paid version. However, if you find this app to be productive and amusing for your child, please feel free to upgrade to the full version.
- If the app crashes or freezes, please let us know at email@example.com and we will get it fixed right away. Negative ratings for crashes and technical issues are really disappointing and we don't intend for that to happen.

Tags: sight, sight words, kids, children, learn, game, fun, education, words, play, everyday, english, read, language, smart, flashcard, memory, autism, autistic, adhd, situations, life, daily
Source: http://appshopper.com/education/sight-words-for-kids-lite
Boston — "One If By Land, Two If By Sea." The lantern signal that once warned Boston patriots how the British were attacking might serve as sound advice for present-day visitors to this historic city: while Boston's landmarks merit the first look, her seaside, or harbor, deserves the second.

Boston Harbor's 180 miles of shoreline embrace 50 square miles of water. The more than 30 islands within it total 1,200 acres of land. The harbor and its interconnecting bays have provided a highway between villages, colonies, and nations for centuries.

Trading, shipbuilding, and fishing -- the "sacred cod" was declared a symbol of Massachusetts as early as 1936 -- were the basis of Boston's maritime success for over 200 years. The signal light that gave Beacon Hill its name was established in 1635 to guide ships safely home. From the first simple town dock in 1630, the number of wharves in Boston grew to 78 by 1708. The great Long Wharf built in 1710 could handle the largest ships in the world even 100 years later. In 1773, Boston Harbor was the site of a tea party, for which Parliament closed the port of Boston in retaliation.

During the American Revolution, 365 vessels were commissioned in Boston, and the Massachusetts State and the Continental navies were active out of the port. Following the war, Boston shipowners ran an almost private trade with China. With the discovery of gold in California in 1848 came the demand for speed at any cost: the clipper ship era was born. More than one-third of all American clippers were built in the Boston area. Boston-built clippers like the Flying Cloud of 1851 hold the records for westbound and eastbound passages.

The steamboat Massachusetts, operating between Boston and Salem, introduced steam service to the harbor in 1817. Steam cruises of Boston Harbor will be revived this year, with a replica of an 1980 harbor service steam launch being built at the Museum of Transportation. In 1840, Cunard brought his 1,154-ton paddlewheel steamship Britannia to Boston, distinguishing the city as the first American port to have regularly scheduled transatlantic steamship service. So important did Bostonians view this service that when the Britannia became icebound in Boston Harbor in 1844, citizens used ice plows, horses, and teams of 50 men to cut a canal to the open sea. The ship sailed only two days behind schedule. A lithograph entitled "Committee of Bostonians Saves City's Reputation" commemorating the event was banned in Boston for many years as bad public relations for the port.

Other details of Boston Harbor history will be included in the "Gateway to the Sea" exhibit covering 350 years of Boston as a port of trade. A gift of New England Mutual Life Insurance Company to the people of Boston, the exhibit will recreate the periods of important development of the port through maps, paintings, Nathaniel Stebbins marine photographs, and lithographs, and will bring back the days when a trip to Boston meant a walk along the waterfront to see what ships and cargoes from around the world were in port. "Gateway to the Sea" opens at Boston City Hall on May 30; on July 8 it will move to the Museum of Transportation as a permanent exhibit.

Already at that museum is an interesting display that shows 350 years -- by 30-year intervals -- of landfilling operations that have dramatically altered the shape and size of Boston Harbor. The land for Logan Airport, for instance, is the result of five islands being flattened and 2,000 acres of water around them being filled.

Family farms were established on many of the Harbor Islands from the time colonists first settled Boston. Since the 17th century, the islands have served as sites for public facilities: quarantine hospitals, pauper colonies, immigration stations, almshouses, and prisons. Island fortifications, in various stages of ruin and repair, are today's most visible testimony to Boston Harbor's military history. Nine islands were used in the network of defense for Boston in World War II, but it was the Civil War that produced the harbor's architectural gem, Fort Warren on Georges Island. New and fully garrisoned in 1861, its strength was 1,500 men. Now restored and a National Historic Landmark, Fort Warren is an explorer's delight, with drawbridge, dungeons, and cavernous vaulted-ceilinged common rooms. Still circulating since the days when it was the North's major Civil War prison are tales of attempted escapes, executions for treason, and a ghost.

One hundred fifty years ago, excursions to the Boston Harbor islands via inexpensive public steamboat transportation -- for picnics in the fresh sea air or fine meals in restaurants at flourishing resort hotels -- were a favorite summer pastime. So were illegal Sunday boxing matches. These sorts of excitement may have changed, but Boston Harbor today is still the setting for a wide range of outings.

Access to the harbor and its islands is easy. Daily, from May 1 to late September, Bay State/Provincetown on Long Wharf offers options from half-hour lunchtime cruises, to 1 1/2-hour narrated cruises of the harbor with a stop at Georges Island -- you can catch a later boat back -- to evening cruises with the unique series of live jazz, big band, and classical concerts known as "Water Music."

In 1970, legislation was passed to create the Boston Harbor Islands State Park. Georges Island is headquarters for this unusual public recreation area and is reached by frequently scheduled ferries. Gallups, Lovell, and other islands are reached from Georges by free water "taxi" service. The dramatic contrast between the city's skyscrapers visible in the distance and the semiwilderness of many of the islands, accentuated by the sounds, smells, and colors associated with an ocean shore, creates a delightful change-of-pace experience.
Source: http://www.csmonitor.com/1980/0401/040152.html
I'd really like to focus on why some new programming languages are adopted in the mainstream while others remain relatively niche. I'd like to know about things like specific use cases, backwards compatibility, new features, and simple or complex implementation difficulty. Specific examples would be appreciated, but let's not get caught up on the exact definition of "mainstream" or "niche" here.

If anyone really knew, they'd be very rich people. That said, here's my guess:

Availability

BASIC is an awful little language, but it came with (all?) PCs when they first came out. The Apple II (the default computer for many high schools) and the ATARI 400/800 (the first super-cheap home computer) both came with BASIC. If you had a UNIX machine (either AT&T System V or Berkeley's), you had C. If you wanted to program the machine, that's what you had to use. EDIT: With the advent and ubiquity of the 'net, this requirement transforms into:

Ease of Transition

Java is a good example here. If you knew C programming, Java was not a very far jump ahead, and yet it gave many of the features that C lacked and C++ failed to provide cleanly. C++, for obvious reasons, was an easy step forward from C, and it was easy to sell to management as an "improved" C. C++ had the added benefit of being backward compatible with much of the existing legacy C code base. Perl was an amalgamation of C, awk, sed, and other Unix utilities all in one bundle. Prior to its appearance, most system administration was done through shell scripts gluing everything together in an unsatisfactory way. Bringing everything under one process, with the data structures and control of a C-like language, was a godsend.

Fills a Need

C took off because it allowed you to produce close-to-assembly efficiencies without getting bogged down in the machine-specific, hard-to-maintain world of assembly. FORTRAN took off because it allowed for easy translation of mathematical ideas into code without having to get lost in the details of the machine. Likewise for LISP and symbolic manipulation. Python grew out of the need for a "better" Perl. (I'm biased here, so I won't say more.) PHP was essentially the BASIC for the web -- it was installed by default on many web servers, and it was easy to hack together something useful quickly.

Advocacy, User Base, Contributed Code

Let's face it, Haskell would not be anywhere near as popular as it is for a not-in-production language if it weren't for the tireless advocacy of its developers and user base. Many languages have a cult of personality behind the language's creator(s), and we all know who they are. FORTRAN has huge sets of established and vetted mathematics code; same for Java and web/systems-integration/MVC systems; same for Perl and CPAN; same for TeX and document management; etc.

The It Factor

For some reason, some languages just seem to have the right amount of new, with enough of a nod to the old, presented in a way that makes them seem easy or needed. That is, the language makes its own case. And who knows just how this happens? Anyway, that's my best guess for why some make it. As for why some don't... well, if they don't meet the above criteria, that's probably why they failed.

Languages become popular because they have an advantage over existing languages in an area where one is needed.

I'll be a cynic: money and, coming with that, marketing. It's no coincidence that C# is supported by Microsoft, Java by Oracle and Objective-C by Apple. Only Google's Go hasn't really lifted off so far. Of course money is not the only reason, but having deep pockets sure helps to place your language in the market. On one hand it is marketing, more precisely presentations, blogs, etc. It is important to have features that mainstream programmers can relate to and whose benefits they can see over what they know (Java vs. C++: garbage collection vs. manual memory management). Last, but not least, is a low entry barrier: examples, good documentation, seamless installation, a good community and support, and vibrant development.

Almost all the languages that have survived their formative years had non-trivial, real-world problem-solving programs written in them very early in their life. Unix was written in C, as were the tools on Unix, when C was very young and evolving. Anaconda (Red Hat's installer program) was written in Python when Python was young and did not have the popularity it has today. These are the examples I can recall offhand; the list could touch each of the languages that has survived its formative years. Then, large-scale adoption in the universities can help a language's longevity. Java is very popular at universities as a teaching language. To some extent, Lisp and dialects of Lisp enjoy this status too.
Source: http://programmers.stackexchange.com/questions/91728/what-drives-the-adoption-or-not-of-new-programming-languages/91733
The Scotland Act 1998 sets out those areas of legislative responsibility over which the Scottish Parliament at Holyrood has devolved authority. Accountability for international elements of the Met Office's role is reserved to the Westminster Parliament. But there are many areas specifically relevant to Scotland that we can provide information on, including:

- Tourism, economic development and financial assistance to industry
- Climate change in Scotland and coastal marine
- Natural and built heritage
- Agriculture, forestry and fishing

Last updated: 2 September 2010
Source: http://www.metoffice.gov.uk/about-us/what/parliamentary/scotland
Bathymetric map of the North Atlantic Ocean showing the track of H.M.S. CHALLENGER, produced in 1873 by Augustus Petermann and published in Petermann's Geographische Mittheilungen for 1873, Map 24.

Image ID: map00344, Voyage To Inner Space - Exploring the Seas With NOAA Collection
Credit: NOAA Central Library Historical Collections
Source: http://www.photolib.noaa.gov/htmls/map00344.htm
EPA seen as underreporting emissions: what that means for natural gas use

Researchers say US methane emissions could be 25 percent to 75 percent higher than EPA estimates. The finding affects how natural gas stands up, environmentally, to other fossil fuels.

For years, natural gas has enjoyed a reputation as the cleanest fossil fuel because it releases less carbon dioxide into the atmosphere when it is burned than coal or oil. With technology that allows companies to extract gas from reservoirs trapped in underground shale deposits in the US, domestic gas has become plentiful and cheap. As a result, it is increasingly replacing coal in power generation, and is substituting for gasoline and diesel fuel in fleets of retrofitted vehicles.

But from a climate standpoint, how clean is it? Some recent studies have suggested that the harvesting and production of natural gas – including by fracking – emits enough carbon, via methane, into the atmosphere that it may not be worth substituting it for coal.

Into this debate steps a finding by researchers that the US may be emitting from 25 percent to 75 percent more methane from all sources than is estimated in annual federal assessments of the country's emissions. That finding in itself is concerning, as methane is a potent greenhouse gas. But is it a factor in the debate over natural gas?

Supporters of the industry can breathe easy. The researchers, who were analyzing the federal emissions assessments with an eye to improving them, concluded that even if – for the sake of argument – the natural gas industry were responsible for all the additional methane emissions, natural gas would still have an advantage in lower carbon emissions compared with coal for industrial uses.

The analysis appears as a "Policy Forum" article in the current issue of the journal Science. It comes at a time when some studies on strategies for curbing greenhouse-gas emissions have suggested that by curbing industrial emissions of methane, as well as black-carbon soot and other relatively short-lived heating agents, the world could slow the pace at which global average temperatures have risen over the past century.

Indeed, in 2012, the US Environmental Protection Agency issued new rules for reducing air pollution from oil and gas exploration and production – including methane emissions from gas production. The regulations are to take full effect next year. When they do, by some estimates they will lower methane emissions by 1 million to 1.7 million tons a year.

The new analysis appears to be the first to take a comprehensive independent look at US methane emissions. It takes advantage of a number of recent studies that use a range of direct measurements – from remote-sensing aircraft and sensors on towers to on-site checks for leaks. Methane-emission estimates from such studies tend to show that EPA records are underestimating such emissions.

Drawing on these studies, a team that includes scientists from seven universities, two national laboratories, and the National Oceanic and Atmospheric Administration, as well as three non-government organizations, estimated that the country is emitting some 7 million to 21 million more tons of methane than the EPA records show, with 14 million tons as the most likely number.

The team cites several reasons for the discrepancy. For instance, the EPA takes measurements at locations that agree to cooperate with the agency, which can skew its estimates. And for purely budgetary reasons, there's a limit on the number of sites the agency can sample regularly.

The team also tried to estimate what portion of the excess emissions came from natural gas operations, in no small part to test the conclusions of a small handful of studies asserting that when these emissions are taken into account, natural gas loses its advantage over coal in reducing CO2 emissions.

"The exact contribution of natural gas to this overall excess is unknown at this point; there's not enough scientific evidence to give a firm answer," said Adam Brandt, the lead author of the analysis and a Stanford University researcher who focuses on energy resource engineering.

So the team assumed the worst: that natural gas emissions make up the entire 14-million-ton-per-year EPA undercount, something they acknowledge as highly unlikely given the range of possible emissions sources. Even if the gas industry were responsible for all the methane emissions that exceed the EPA inventory, looking at natural gas as a temporary "bridge" fuel and its effect on greenhouse-gas emissions over the next 100 years, "substituting natural gas in place of coal still looks significantly beneficial." Even at the highest estimate for methane emissions, and assuming all of that came from the natural-gas industry, "which we think is unlikely," gas still beats coal, Dr. Brandt said at a briefing on the study held earlier this week.

The picture is somewhat cloudier for transportation, where cities are converting bus fleets, and some corporations and state governments are converting cars, to run on natural gas. The team found that over the same 100-year span, it was unclear whether there is a benefit to the climate from using natural gas instead of gasoline for cars and light trucks. When applied to heavy vehicles, such as buses, natural gas appeared unlikely to be any more beneficial than diesel fuel, although the researchers acknowledge that the shift from diesel to natural gas is also occurring for general air-quality reasons.
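As a sanity check on the figures: the 7-to-21-million-ton excess maps onto the 25-to-75-percent range only against some baseline inventory, which the article never states. The short sketch below back-solves for that implied baseline; the 28-million-ton figure is an inference made here for illustration, not a number from the study.

```python
# Back-of-the-envelope check of the article's figures. The EPA
# baseline below is inferred from the numbers given, not stated in
# the article or the study, so treat it as an assumption.
excess_estimates = (7.0, 14.0, 21.0)   # million tons per year

# If the "most likely" 14 Mt excess corresponds to +50% (the middle
# of the 25-75% range), the implied EPA inventory is 28 Mt/yr.
implied_baseline = 14.0 / 0.50

for excess in excess_estimates:
    pct = 100.0 * excess / implied_baseline
    print(f"{excess:4.1f} Mt excess -> {pct:4.1f}% above the EPA estimate")
```

Under that one assumption, the three excess estimates line up exactly with the quoted 25, 50, and 75 percent figures.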
Source: http://www.csmonitor.com/Environment/2014/0214/EPA-seen-as-underreporting-emissions-what-that-means-for-natural-gas-use
More than a quarter of children aged between 10 and 12 cannot add two small sums of money without using a calculator, a survey has revealed. Youngsters are leaving primary school unable to spell, add or do times tables, and their parents do not have the time to help them, new research shows. Around a third cannot do division or basic algebra, while half do not know what a noun is or cannot identify an adverb. Almost a third cannot use apostrophes correctly.

Despite this, parents manage to spend less than 10 minutes a day helping their children with their learning, according to online tuition service mytutor, which commissioned the survey.

More than a quarter (27%) of children surveyed could not add £2.36 and £1.49 to get £3.85; more than one in five (22%) could not use the correct version of "they're", "there" or "their" in a sentence; almost a third (31%) could not pick the correct use of an apostrophe from three simple sentences; and more than four in 10 (42%) could not spell the word "secretaries" correctly. In addition, more than a third (36%) could not divide 415 by five, and a quarter did not know the answer to seven multiplied by six.

Almost half of parents surveyed (48%) said they think their child is worse at maths than they were at the same age, and more than a third (36%) felt their child's English was worse than theirs was at the same age. Almost four in 10 parents (39%) said they spend less time learning with their children than their parents did a generation ago, with only 30% spending more time than their parents did. Nearly six out of 10 parents (59%) spend less than an hour a week learning with their children, which breaks down to around eight-and-a-half minutes a day. One in five parents spend less than 30 minutes a week learning with their offspring.

Nick Smith, head of online tuition at mytutor, said: "Maths and English are key skills for children as they enter secondary school, yet our study shows that many are already slipping behind their peers and could be lacking confidence.

"Despite half of parents thinking their children aren't as good as they were at the same age, most parents only manage to spend fewer than 10 minutes a day reading with them, helping them with homework or doing educational activities at home.

"Addressing these shortcomings early can make an enormous difference to a child's school career, with tutored children generally making more than a year's worth of progress with just 20 hours of tuition."

The survey of 1,000 children aged between 10 and 12 found that one in four did not know their times tables, a quarter could not use decimal points and two in five could not spell simple plurals.

Mr Smith added: "Hectic modern lifestyles are leaving parents with less and less time to spend learning with their children - whether that is helping with homework or other educational activities.

"Many think that their child's learning is suffering as a result, yet fewer than one in 10 of the parents we asked had used private tuition to give their children a boost to their learning - with many citing travelling time and a lack of suitable local tutors as reasons."

OnePoll surveyed 1,000 parents and 1,000 children on behalf of mytutor.co.uk in December 2011.

Shadow education secretary Stephen Twigg said: "Labour raised standards in Maths and English, with a focus on the 3Rs through initiatives such as the literacy and numeracy hours.

"In 1997, only six in 10 children reached the required standard Level 4 in English and Maths. By 2010, it had gone up to eight in 10.

"Clearly, as this report demonstrates, there is still much to be done to ensure children leave primary school with a grasp of the basics.

"But the Tory-led Government is ignoring the warning signals in this report. Instead of focusing on the 3Rs, they are cutting funding for programmes which provide one-to-one support for reading and writing. This means 9,000 more children will be at risk of falling behind this year alone."

A Department for Education spokesman said: "Getting the basics right at primary school is vital.

"That's why we are placing such emphasis on improving pupils' reading ability early on, using the proven method of synthetic phonics to teach children to read.

"We are committed to improving standards in maths - bringing more specialist maths teachers into the classroom and focusing on basic arithmetic."
Source: http://www.huffingtonpost.co.uk/2012/01/22/quarter-of-primary-school-children-unable-to-spell-or-add-up-survey-reveals_n_1222427.html
adopted in 1780, with a Bill of Rights prefixed, declaring that "all men were born free and equal, and have certain natural, essential, and unalienable rights," among which is liberty. The courts decided that under this Constitution slavery could not and did not exist. This was a very different process from that described by Mr. Davis.

But were the slaves thus made free "sold to the South"? Happily, that question may be answered. According to the census of the Province of Massachusetts Bay, taken in 1765, the colored population in 182 towns was 4,978. Dr. Jesse Chickering, in his "Statistical View of the Population of Massachusetts," a work of the very highest authority, estimates that a number not exceeding 147 ought to be added for 16 towns from which there were no returns, and 74 for two towns where the returns did not specify color, making 5,199 in all. The next census was that of 1790. The table for Massachusetts:

| Census year | Total Colored Population |
| 1765 | 5,199 |
| 1790 | 5,463 |
| 1800 | 6,452 |
| 1810 | 6,737 |

From 1765, fifteen years before slavery ceased, to 1790, ten years after its cessation, the colored population, instead of being diminished by a sale of slaves to the South, increased 264. In the next ten years, "soon after" 1789, it increased 989. In the next, the increase was only 285. The great increase of 989, from 1790 to 1800, was at the very time of the decrease of colored people in Rhode Island, as stated above. The increase for the next ten years, 285, represents very nearly the usual increase in subsequent decades. Even that small increase has been due mostly, and perhaps wholly, to immigration; for their natural increase, in our climate, is about nothing.

So far is this statement, which Mr. Davis has put forth with all the solemnity of an official document, from being true; so unsupported are some of the grounds on which Southern men are officially exhorted to separate themselves utterly from their fellow-citizens of the North; and so easily detected and conclusively proved is a misrepresentation which would be so discreditable to us, as a fact. May we not hope that men who, whether deliberately or carelessly, indulge in such statements, will soon lose their present control over Southern minds? --Boston Courier, July 9.
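A note on the table above: the printed table did not survive extraction, so the totals after 1765 were rebuilt here by adding the increases stated in the prose to the given 5,199 starting figure. The short sketch below makes that derivation explicit.

```python
# Rebuilding the census totals from the increases stated in the prose;
# only the 1765 total (5,199) is given directly, so the later totals
# are derived, not quoted.
totals = {1765: 5199}
increases = {1790: 264, 1800: 989, 1810: 285}

previous = 1765
for year, delta in increases.items():
    totals[year] = totals[previous] + delta
    previous = year

for year, total in sorted(totals.items()):
    print(year, f"{total:,}")   # 5,199 / 5,463 / 6,452 / 6,737
```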
Source: http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A2001.05.0135%3Achapter%3D280%3Apage%3D403
Does your child have a future in reference book publishing? Here's an entry-level assignment that can serve as a preview.

Your child chooses a person he or she is studying in school (a famous inventor, a U.S. president, or another noteworthy public figure). Then he or she writes an encyclopedia entry for that person, including as many facts (such as date of birth, hometown, notable achievements, and the like) as possible.

When your biographer is finished, open an encyclopedia and read the "real" article. See how many similarities you can find between the actual entry and the homespun bio. If there are any discrepancies between entries, check other sources for verification; don't assume the encyclopedia is always right.

So, Thomas Edison invented the record player, the light bulb, and the ping-pong ball, did he? Only your young unofficial biographer knows for sure!

From 365 TV-Free Activities You Can Do With Your Child. Copyright © 2007, F+W Publications, Inc. Used by permission of Adams Media, and F+W Publications Company. All rights reserved. To order this book go to amazon.
Source: http://fun.familyeducation.com/writing/activities/51287.html
Experts are finding that exposure to bisphenol A (BPA), an additive used in numerous products but most commonly found in plastic water bottles, cash register receipts, and soup can liners, may be affecting human health more than we know. Last year, the U.S. Food and Drug Administration (FDA) said BPA was safe for people of all ages, yet banned the use of the chemical in sippy cups, baby bottles, and other containers used by children. Prior research has linked BPA, which acts like a synthetic estrogen in the body, to miscarriages and birth defects in primates, as well as to behavioral problems in girls, impotence in men, an increased risk of heart disease, and other health problems.

Three studies that link BPA to even more health problems, including birth defects and cancer in animal testing, were presented during The Endocrine Society's annual meeting this week in San Francisco.

BPA Linked to Increased Cancer Risk

Early exposure to BPA increases a person's risk of developing prostate cancer, according to research by Gail Prins, a professor of physiology at the University of Illinois at Chicago. Prins tested daily levels of BPA exposure in animal models and found that BPA's estrogen effect "reprograms" prostate stem and progenitor cells.

Prins arrived at her conclusion by implanting prostate stem cells from deceased male organ donors into mice. The cells formed human prostate tissue, and Prins fed the mice doses of BPA similar to those found in previous studies of pregnant American women. After one month, signs of cancer had developed in a third of the mice fed BPA, nearly three times the rate of cancer in the mice not given BPA.

"This is the first direct evidence that exposure to BPA during development, at the levels we see in our day-to-day environment, increases the risk for prostate cancer in human prostate tissue," Prins said in a press release accompanying the research.

BPA May Affect Undescended Testicles

French researchers have linked BPA exposure to deficits in a testicular hormone in babies with undescended testicles, a condition that affects up to five percent of full-term newborns. Lead study author Dr. Patrick Fenichel, the head of reproductive endocrinology at the University Hospital of Nice in France, and others studied 180 boys, 26 of whom had undescended testicles at 3 months of age. Testing the infants' umbilical cord blood, the researchers measured levels of BPA and of peptide 3, a hormone that aids in the descent of the testicles.

The researchers found that infants with higher BPA levels in their blood also had lower levels of the peptide 3 hormone. Fenichel speculated that the estrogen-like BPA affected the peptide in humans the same way it did in animal trials, meaning that BPA levels could impact testicular descent, but more research is needed to confirm this theory.

BPA May Increase the Risk of Obesity

Researchers also say prenatal exposure to BPA may lead to inflammation in fat tissue, increasing a person's chances of being obese later in life. Researchers at the University of Michigan at Ann Arbor fed two groups of pregnant sheep corn oil, one group with nothing added and the other with BPA added up to the level found in human umbilical cord blood. When the lambs were born, half in each group were overfed. After seven months, the sheep whose mothers were fed BPA and a normal diet had increased biomarkers for obesity and metabolic syndrome.
“This research is the first study to show that prenatal exposure to BPA increases postnatal fat tissue inflammation, a condition that underlies the onset of metabolic diseases such as obesity, diabetes, and cardiovascular disease,” lead author Almudena Veiga-Lopez said in a press release. The researchers chose sheep because their body fat is similar to that of humans.
Source: http://www.healthline.com/health-news/children-bpa-may-increase-risk-of-obesity-and-prostate-cancer-061913
Help support New Advent and get the full contents of this website as an instant download. Includes the Catholic Encyclopedia, Church Fathers, Summa, Bible and more all for only $19.99... A name of no real ethnic significance, but used as a convenient popular and official term to designate the modern descendants of those tribes of California, of various stocks and languages, evangelized by the Franciscans in the latter part of the eighteenth and early part of the nineteenth centuries, beginning in 1769. The historic California missions were twenty-one in number, excluding branch foundations, extending along the coast or at a short distance inland from San Diego in the south, to Sonoma, beyond San Francisco Bay, in the north. Besides these, two others, established in 1780 in the extreme south-eastern corner of the present state, had a brief existence of less than a year when they were destroyed by the Indians. As their period was so short, and as they had no connexion with the coast missions, they will be treated in another place (see YUMA INDIANS). The following are the twenty-one missions in order from south to north, with name of founder, location, and date of founding. In several cases the mission was removed from the original site to another more suitable at no great distance. It will be noticed that the northward advance does not entirely accord with the chronological succession:-- Nowhere in North or South America was there a greater diversity of languages and dialects than in California. Of forty-six native linguistic stocks recognized within the limits of the United States by philologists, twenty-two, or practically one-half, were represented in California, of which only six extended beyond its borders. Seven distinct linguistic stocks were found within the territory of actual mission colonization, from San Diego to Sonoma, while in the border territory north and east, from which recruits were later drawn, at least four more were represented. As most of the dialects have perished without record, it is impossible to say how many there may have been originally, or to differentiate or locate them closely. As tribal organization such as existed among the Eastern Indians was almost unknown in California, where the ranchería, or village hamlet, was usually the largest political unit, the names commonly used to designate dialectic or local groups are generally merely arbitrary terms of convenience. For the linguistic classification the principal authorities are Kroeber, Barrett, and other experts of the University of California. The Indians of this stock bordered on the northern frontier of the mission area, and although no mission was actually established in their territory in the earlier period, numbers of them were brought into the missions of San Rafael and San Francisco Solano. Broadly speaking, the Pomo territory included the Russian River and adjacent coast region with all but a small portion of the Clear Lake basin. Barrett has classified their numerous local bands and rancherías into seven dialectic divisions, but all probably mutually intelligible. Of their southern bands, some of the Gallinomero (or Kainomero), of lower Russian River, were brought into San Rafael mission and the Gualala also were represented either there or at Sonoma. The so-called "Diggers" of the present mission schools at Ukiah and Kelseyville are chiefly Pomo. The Yuki tribes were in four divisions, two of which were north of the Pomo territory and therefore beyond the sphere of mission influence. 
The two southern bodies, originally one, speaking a single language with slight dialectic variations and commonly known as Wappo (from Spanish guapo), occupied a detached territory farther south. They were probably represented at Sonoma mission, as they probably are also, under the name of "Diggers", in the present mission school at Kelseyville.

The Wintun stock held all (excepting the Wappo projection) between the Sacramento River and the main Coast Range from San Pablo (San Francisco) and Suisun Bays northwards to Mount Shasta, including both banks of the river in its upper course. The various dialects are grouped by Kroeber into three main divisions or languages, of which the southern, or Patwin, includes all south from about Stony Creek, and possibly also those of Sonoma Creek on the bay. Indians of these southern bands were brought into the missions of Sonoma, San Rafael, and even San Francisco (Dolores) across the bay. At Sonoma mission, among others, we find recorded the Napa and Suisun bands. According to Kroeber, the whole region of Putah Creek was thus left vacant until repopulated after 1843 by Indians who had originally been taken thence to Sonoma mission.

The numerous bands of the Miwok stock occupied three distinct areas.

The territory of the Costanoan group extended from the coast inland to the San Joaquin River, and from San Francisco and Suisun Bays on the north southwards to about the line of Point Sur, including the seven missions of San Francisco (Dolores), San José, Santa Clara, Santa Cruz, San Juan Bautista, San Carlos, and Soledad. Although there was no true tribal organization, a number of divisional names are recognized, probably corresponding approximately to dialectic distinctions. On the peninsula, and later gathered into San Francisco mission, were the Romonan (at the present San Francisco), Ahwaste, Altahmo, Tulomo, and Olhone, or Costano proper, all apparently of one language in different dialects. The Saclan, about Oakland, were in the same mission. The Karkin along Carquinez straits and the Polye farther south were gathered into San José. Santa Clara had two native dialects, while Santa Cruz apparently had another. About San Juan Bautista was spoken the Mutsun dialect, known through a grammar and phrase book written by the resident missionary, Father Arroyo de la Cuesta, in 1815 and published in Shea's "American Linguistics" in 1861. Eastward were the Ansaima, and about the mouth of the Salinas were the Kalindaruk. At San Carlos the principal band was the Runsen, of which a remnant still exists, and at Soledad were the Chalone, besides others of Esselen, Salinan, and Yokuts lineage.

The Esselen, or Ecclemach, constituting a distinct stock in themselves, occupied a small territory on the Carmel and Sur rivers, south of Monterey Bay, until gathered into San Carlos, and perhaps into Soledad mission.

The Salinan stock centred upon the waters of the Salinas, chiefly in Monterey and San Luis Obispo Counties, from the seacoast to the Coast Range divide, and from the head streams of the Salinas down (north) nearly to Soledad. San Antonio and San Miguel missions were within their territory. Nothing definite is known of their divisions, excepting that there seem to have been at least three principal dialects or languages, viz., those of San Miguel, of San Antonio, and of the Playanos, or coast people. Besides those native to the region, there were also Yokuts from the east and Chumash from the south in the same missions.
The Yokuts Indians had true tribal divisions, numbering about forty tribes, and holding a compact territory from the Coast Range divide to the foothills of the Sierras, including the upper San Joaquin, Kings River, Tulare Lake, and most of Kern River, besides a detached tribe, the Cholovone, about the present Stockton. Together with the Miwok and eastern Costanoan tribes, they were known to the Spaniards under the collective name of Tulareños, from their habitat about Tulare Lake and along the San Joaquin River, formerly the Rio de los Tulares. Their numerous dialects varied but slightly and may all have been mutually intelligible, the principal difference being between those of the river plains and those of the Sierra foothills. Although outside the mission territory proper, the Yokuts area was a principal recruiting ground for the missions in the later period, hundreds of Indians, and even whole tribes, being carried off, either as neophyte subjects or as military prisoners of war, to San José, San Juan Bautista, Soledad, San Antonio, San Miguel, San Luis Obispo (?), and probably other neighbouring missions. One Spanish expedition, about 1820, carried off three hundred men, women, and children from a single ranchería to San Juan Bautista, where their language was afterwards recorded by Father La Cuesta. The Tachi and Telamni from Tulare Lake and eastward were brought into San Antonio. A few are now gathered upon the Tule River reservation, while a few others still remain in their old homes.

The Indians of the Chumashan stock held approximately the territory from San Luis Obispo Bay south to Point Mugu, including the Santa Maria, Santa Inés, and Santa Clara Rivers, the adjacent eastern slope of the Coast Range divide, and the islands of Santa Cruz, Santa Rosa, and San Miguel. The missions of San Luis Obispo, Purísima, Santa Inés, Santa Barbara, and San Buenaventura were all within this area, and the stock seems to have been represented also at San Miguel. There were at least seven dialects, viz., one at each mission, one on Santa Cruz, and one on Santa Rosa. That of San Luis Obispo was sufficiently distinct to be considered a language by itself.

The Shoshonean stock is the first within the mission area which extended beyond the limits of California, the cognate tribes within the state being an outpost of the same great linguistic group which includes the Piute, Ute, Comanche, and Pima of the United States, and the Yaqui, Tarahumare, and famous Aztec of Mexico. The five missions of San Fernando, San Gabriel, San Juan Capistrano, San Luis Rey, and its branch mission of San Antonio de Pala were all in Shoshonean territory, and the great majority of the Mission Indians of today are of this stock. Those within the mission sphere were of five languages, each with minor dialectic differences, nearly equivalent to as many tribes.

The Yuman stock also has its main home beyond the eastern boundaries of the state, and includes the Mohave, Walapai, and others. San Diego mission was within its territory, as were the two short-lived missions on the Colorado. Nearly all the present Mission Indians not of Shoshonean stock are Yuman. Those within the mission sphere were of two languages, viz., Yuma in the east, about the junction of the Gila and Colorado rivers, and Diegueño in the west, in two main dialect groups.

Very little is in print concerning the languages of the mission territory.
For vocabularies and grammatic analysis the reader may consult Bancroft's volume on "Myths and Languages", Powers's "Tribes of California", Gatschet in "Wheeler's Rept.", and above all, Barrett and Kroeber in the University of California publications (see bibliography), with other works and collections therein noted. Among the important single studies are a "Grammar of the Mutsun Language" by Fr. Arroyo de la Cuesta, published in Shea's "American Linguistics", IV (1861); a Chumashan (?) catechism and prayer manual by Fr. Mariano Payeras of Purísima, about 1810, noted by Bancroft; and a manuscript grammar and dictionary of the Luiseño language by Sparkman, now awaiting publication by the University of California. The missionaries were more than once urged in prefectural letters to acquire the native languages in order better to reach the Indians, and in 1815 the official report states that religious instruction was given both in Indian and in Spanish.

The Indians of California constituted a culture body essentially distinct from all the tribes east of the Sierras. The most obvious characteristic of this culture was its negative quality, the absence of those features which dominated tribal life elsewhere. There was practically no tribal organization and in most cases not even a tribal name, the ranchería, or village settlement, usually merely a larger family group, being the ordinary social and governmental unit, whose people had no common designation for themselves, and none for their neighbours excepting directional names having no reference to linguistic or other affiliation. Chiefs were almost without authority, except as messengers of the will of the priests or secret-society leaders. The clan system is held by most investigators to have been entirely wanting, although Merriam claims to have found evidence of it among the Miwok and Yokuts. Excepting basketry, all their arts were of the crudest development, pottery being found only in the extreme south, while agriculture was entirely unknown. Both mentally and physically they represented one of the lowest types on the continent.

The ordinary house structure throughout the mission area was a conical framework of poles thatched with rushes and covered with earth, built over a circular excavation about two feet deep. The fire was built in the centre, and the occupants sat or lay about it, upon skins or sagebrush, without beds or other furniture. The Gallinomero, north of San Francisco Bay, built an L-shaped communal house, with a row of fires down the centre, one for each family. The "sweat-house", for hot baths and winter ceremonies, was like the circular lodge, but much larger. The dance place or medicine lodge was a simple circular inclosure of brushwood open to the sky, with the sacrifice poles and other ceremonial objects.

Agriculture being unknown, the food supply was obtained in part by hunting and fishing, but mostly by the gathering of wild seeds, nuts, and berries. The islanders lived almost entirely by sea-fishing, while about San Francisco they depended mainly on the salmon. The Chumashan coast tribes fished from large dugout canoes. Hunting was usually confined to small game, particularly rabbits and jack rabbits, the larger animals being generally protected by some religious taboo. On account of a prevalent ritual idea which forbade the hunter to eat game of his own killing, men generally hunted in pairs and exchanged the result. Grasshoppers were driven into pits and roasted as a dainty.
Among vegetable foods the acorn was first in importance, being gathered and stored in large quantities, pounded into meal in stone mortars or ground on metates, leached with water to remove the bitterness, and cooked as mush (porridge) or bread. Wild rice was also a staple in places, while in the blossom season whole communities lived for weeks upon raw clover tops.

The men went nearly or entirely naked, excepting for a skin robe over the shoulders in cold weather. Women usually wore a short skirt with fringes of woven or twisted bark fibre. Both sexes commonly kept their hair at full length, but bunched up behind; some bands shaved one side of the head. Tattooing was practised by both sexes to some extent. Shell beads were worn as necklaces, and eagle and other feathers as head adornments. Dance-leaders and priests at ceremonial functions wore feather crowns and short skirts trimmed with feathers. Light sandals were sometimes worn.

Musical instruments were the rattle, flute, and bone whistle; the drum was unknown. Weapons were the bow and arrow, wooden club, stone knife, and a curved throwing stick for hunting rabbits. Cremation was universal, excepting among the Chumashan. Marriage and divorce were simple, and polygamy was frequent.

Of the mythology and ceremonial of the coast tribes of the mission area northwards from Los Angeles we know almost nothing, as the Indians have perished without investigation, but the indications are that they resembled those of the known interior and southern tribes. For these our best authorities are the missionary Boscana, Powers, Merriam, and especially the ethnologists of the University of California. The southern tribes (Juaneño, Luiseño, Diegueño, etc.) base their ritual and ceremonial upon a creation myth in which Ouiot, or Wiyot, figures as the culture hero of an earlier creation in which mankind is not yet entirely differentiated from the animals, while Chungichnish (the Chinigchinich of Boscana) appears as the lord and ruler of the second and perfected creation, which, however, is a direct evolution from the first. The original creators are Heaven and Earth, personified as brother and sister. The rattlesnake, the tarantula, and more particularly the lightning and the eagle are the messengers and avengers of Chungichnish. In the Diegueño myth the whole living creation issues from the body of a great serpent.

The principal ceremonies, still enacted within recent memory, were the girls' puberty ceremony, the boys' initiation, and the annual mourning rite. In the puberty ceremony the several girls of the village who had attained the menstrual age at about the same time were stretched upon a bed of fresh and fragrant herbs in a pit previously heated by means of a large fire and, after being covered with blankets and other herbs, were subjected to a sweating and starving process for several days and nights while the elders of the band danced around the pit singing the songs for the occasion. The ordeal ended with a procession, or a race, to a prominent cliff, where each girl inscribed symbolic painted designs upon the rock.

The boys' initiation ceremony was a preliminary to admission to a privileged secret society, the officers of which constituted the priesthood. A principal feature was the drinking of a decoction of the root of the poisonous toloache, or jimson-weed (Datura meteloides), to produce unconsciousness, in which the initiate was supposed to have communication with his future protecting spirit.
Rigid food taboos were prescribed for a long period, and a common ordeal test was the lowering of the naked initiate into a pit of vicious stinging ants. A symbolic "sand painting", with figures in vari-coloured sand, was a part of the ritual.

The corpse was burned upon a funeral pile immediately after death, together with the personal property, by a man specially appointed to that duty, the bones being afterwards gathered up and buried or otherwise preserved. Once a year a great tribal mourning ceremony was held, to which the people of all the neighbouring rancherías were invited. On this occasion large quantities of property were burned as sacrifice to the spirits of the dead, or given away to the visitors; an effigy of the deceased was burned upon the pyre; and the performance, which lasted through several days and nights, concluded with a weird night dance around the blazing pile, during which an eagle or other great bird, passed from one to another of the circling dance priests, was slowly pressed to death in their arms, while in songs they implored its spirit to carry their messages to their friends in the other world. The souls of priests and chiefs were supposed to ascend to the sky as stars, while those of the common people went to an underworld where there was continual feasting and dancing, the idea of future punishment or reward being foreign to the Indian mind. The dead were never named, and the gravest insult to another was to say, "Your father is dead." In connexion with childbirth most of the tribes practised the couvade, the father keeping to his bed for some days, subjected to rigid diet and other taboos, until released by a ceremonial exorcism. Besides the great ceremonies already noted, they had numerous other dances, including some of dramatic or sleight-of-hand character and, among the southern tribes, a grossly obscene dance which gave the missionaries much trouble to suppress. Among the Gallinomero, and perhaps others, aged parents were sometimes choked to death by their own children by crushing the neck with a stick.

Ordinary morality could hardly be said to exist even in theory. Infanticide and abortion were so prevalent that even the most strenuous efforts of the missionaries hardly succeeded in checking the evil. In this and certain other detestable customs the coast tribes were like the California Indians generally, whom Powers characterizes, in their heathen condition, as perhaps the most licentious race existent. Even before the arrival of the missionaries, their blood, like that of all the coast tribes as far north as Alaska, had been so poisoned by direct or transmitted contact with dissolute sealing and trading crews that the race was already in swift decline. The confiscation of the missions and the subsequent influx of the gold-hunters doomed the race to extinction.

By the confiscation of the missions (1834-38) the Indians lost their protectors together with their stock and other movable property, and by the transfer of California to the United States in 1848 they were left without legal title to their lands, sinking into a condition of homeless misery under which they died by thousands and were fast approaching extinction. With the exception of occasional ministrations by secular priests or some of the few remaining missionaries, they were also left entirely without spiritual or educational attention, notwithstanding which the Christian Indians continued to keep the Faith and transmitted the tradition to their children.
At last, as the result of a governmental investigation in 1873, a number of village reservations were assigned by executive proclamation in 1875 to the southern remnant, the northern bands being already extinct. By subsequent legislation there are now established some thirty small "Mission Indian" reservations, all in western and central San Diego and Riverside Counties, California, with a total population, in 1909, of 2,775 souls, representing five tribes and languages, viz., Luiseño, Serrano, Cahuilla, Agua Caliente, and Diegueño. The largest groupings are at Morongo, adjoining Banning (chiefly Cahuilla), 238; Pala (Luiseño and Agua Caliente), 226; Pechanga (Luiseño), 170; and Santa Ysabel No. 3 (Diegueño), 165. They are practically all Catholics, and besides twelve government day-schools with a total enrolment of 286 there are seventeen Catholic schools served by secular priests under the Diocese of Los Angeles, with a total enrolment in 1909 of 1,894 pupils. Of these the largest are at Pala (260), La Jolla (195), Pauma (180), Soboba, or San Jacinto (163), Campo (125), and Martinez (125). All are day-schools, excepting St. Boniface boarding-school at Banning, with 100 pupils.

About the same time Catholic mission work was begun among the remnant tribes on the northern border of the original mission territory. In 1870 the mission of St. Turibius was founded by Father Luciano Osuna, north of Kelseyville in Lake County. In 1889 Saint Mary's mission was established near Ukiah in Mendocino County. The Indians of both stations are locally called "Diggers", but are properly Pomo and Yukai, and some of the older ones still have recollection of the early mission fathers. They are in charge of the Friars Minor and Capuchins. All these northern missions are in the Archdiocese of San Francisco.

According to a careful estimate made by Merriam, the original Indian population of the mission territory, eastwards to the San Joaquin and lower Sacramento rivers, was approximately 50,000 souls. About 30,000 were domiciled in the missions at the time of confiscation. Following the ruin of the missions and the invasion of the Americans, they died in such thousands that of all those north of the present Los Angeles, comprising perhaps four-fifths of the whole, not 300 are believed to survive today. The southern tribes, being of manlier stock and in some degree protected by their desert environment, have held out better, and number today on the "Mission Indian" reservations, as already stated, 2,775 souls, a decrease, however, of 152 in nine years. The Mission Indians of California have thus dwindled to fewer than one-sixteenth of their original number, and indications point to their extinction. (See CALIFORNIA.)

AMES, Report in regard to condition of Mission Inds. in Rept. Comsnr. Ind. Aff. for 1873 (Washington, 1874); H. H. BANCROFT, Hist. California, I and II (San Francisco, 1886); IDEM, Native Races, I: Wild Tribes (San Francisco, 1886); IDEM, Native Races, III: Myths and Languages (San Francisco, 1886); BARRETT, Ethno-Geography of the Pomo and Neighboring Indians in Univ. of California Pubs. in Am. Arch. and Ethn., VI, no. 1 (Berkeley, 1908); IDEM, Geography and Dialects of the Miwok Indians, ibid., no. 2 (Berkeley, 1908); BARROWS, Ethno-Botany of the Coahuilla Inds. (Chicago, 1900); BARTLETT, Personal Narrative of Explorations (New York, 1854); BOSCANA, Chinigchinich (San Juan Capistrano Inds.), translation published in ROBINSON, Life in California (New York, 1846); Bureau of Am. Ethnology, Seventh ann. rept.
(Indian linguistic families) (Washington, 1891); Bur. Cath. Ind. Miss., ann. repts. of Director (Washington); COUES (ed.), On the Trail of a Spanish Pioneer (Fr. Garces) (New York, 1900); Comsnr. of Ind. Affairs, ann. repts. (Washington); DUBOIS, Religion of the Luiseño Inds. in Univ. of Cal. Pubs. in Am. Arch. and Ethn., VIII, no. 3 (Berkeley, 1908); DUFLOT DE MOFRAS, Exploration du territoire de l'Oregon, des Californies, etc. (Paris, 1844); ENGELHARDT, Franciscans in California (Harbor Springs, 1897); FORBES, California (London, 1839); HODGE (ed.), Handbook of Am. Inds. (Bull. 30, Bur. Am. Ethn.) (Washington, 1907-11); HRDLICKA, Physical Anthropology of California in Univ. of Cal. Pubs. in Am. Arch. and Ethn., IV (Berkeley, 1906); JACKSON, Ramona (Boston, 1885); KAPPLER, Ind. Affairs: Laws and Treaties (Washington, 1903); KROEBER, papers in Univ. of Cal. Pubs. in Am. Arch. and Ethn. (Berkeley), viz., Languages of the (South) Coast of California; Types of Ind. Culture in California (II, 1904); Yokuts Language; Shoshonean Dialects of California; Ind. Myths of South Central Cal.; Religion of the Ind. of California (IV, 1907); Ethnography of the Cahuilla Inds.; A Mission Record of the Cal. Inds.; Evidences of . . . Miwok Ind. (VI, 1908); Shoshonean Dialects of Southern California (VIII, 1909); MERRIAM, papers in Am. Anthropologist, new series (Lancaster), viz., Indian Population of California (VII, 1905); Mewan Stock of California (IX, 1907); Totemism in California (X, 1908); E. B. POWERS, Missions of California (San Francisco, 1897); S. POWERS, Tribes of California in Cont. to N. Am. Ethn., III (Washington, 1877); ROBINSON (anon.), Life in California (contains also BOSCANA's account) (New York, 1846); RUST, Puberty Ceremony of the Mission Inds. in Am. Anthropologist, new series, VIII (Lancaster, 1906); SHEA, Catholic (Indian) Missions (New York, 1854); SMITH, In re Cal. Mission Inds. to date (Sequoya League Bull. 5 in Out West, separate) (Los Angeles, 1909); SPARKMAN, Culture of the Luiseño Inds. in Univ. of Cal. Pubs. in Am. Arch. and Ethn., VII (Berkeley, 1910); TAYLOR, Indians of California, articles in Cal. Farmer (San Francisco, 1860-1); WATERMAN, Mission Indian Creation Story in Am. Anthropologist, new series, XI (Lancaster, 1909); IDEM, Religious Practices of the Diegueño Inds. in Univ. of Cal. Pubs. in Am. Arch. and Ethn., VIII (Berkeley, 1910); WHEELER (in charge), Rept. upon U. S. Geographical Surveys etc., VII, Archæology [California Indian papers by GATSCHET (languages), HENSHAW (Voyage of Cabrillo), and YARROW] (Washington, 1879); ROYCE AND THOMAS, Indian Land Cessions in Eighteenth Rept. (part 2), Bur. Am. Ethnology (Washington, 1899).
by Brooks Hays
Washington (UPI) May 19, 2014

NASA's Kepler spacecraft -- a space observatory launched in 2009 and tasked with scanning for Earth-like planets orbiting distant stars -- has been out of commission for almost a year. But scientists at NASA recently came up with a temporary fix, and a jury-rigged Kepler is preparing to be put back on the job.

The spacecraft lost maneuverability in the spring of last year after two of its four reaction wheels broke. The wheels were central to stabilizing Kepler's imaging instrumentation and pointing it in the right direction. With only two, Kepler spins out of control.

NASA has now approved the team's workaround, dubbed the K2 mission. "The approval provides two years of funding for the K2 mission to continue exoplanet discovery, and introduces new scientific observation opportunities to observe notable star clusters, young and old stars, active galaxies, and supernovae," Kepler project manager Charlie Sobeck said in a statement.

Beginning at the end of the month, scientists at NASA's Ames Research Center in Mountain View, California, will have the go-ahead to begin the next Kepler mission. The craft will be positioned in such a way that pressure from the sun's rays keeps the observatory stable. Kepler will only be able to work for 80-odd days at a time, after which it will have to be rotated to protect the imaging lens from direct sunlight.

Kepler's sole instrument is a photometer, which continually monitors stars of a certain brightness and periodically transmits its data to Earth. Since its launch, Kepler has detected more than 3,800 potential exoplanets, and 960 of these have been confirmed by NASA scientists. That means more than half of all known alien planets were discovered by Kepler.
Lucrezia Marinella was a Venetian author of the sixteenth century, who published prolifically in a range of genres, primarily devotional literature (in prose and verse) and philosophical polemics. Her work La nobiltà et l'eccellenza delle donne, co' difetti et mancamenti de gli uomini (The Nobility and Excellence of Women and the Defects and Vices of Men), published in 1600, was one of the first polemical treatises written by a woman in Italian as part of an ongoing debate about the nature and worth of women, often called the querelle des femmes (the debate about women). The Nobility and Excellence of Women is an erudite recapitulation of the arguments and evidence brought forward to support claims for the merits of women, but it is more than a summary: Marinella provides a cogent, extended argument for the superiority of women's intellectual and moral capacities, effectively constructing an account of a nature proper to women and distinct from the nature of man.

The Nobility and Excellence of Women is remarkable in several respects, aside from its philosophical and rhetorical skill. First, although several of Marinella's predecessors on the pro-woman side of the debate had argued both that men and women were equal insofar as they shared a rational soul and that women were superior, they had failed to address adequately the tension between the claims of equality and of superiority; Marinella addresses it directly and persuasively. Her argument takes the bodies of women as a starting point, from which she adduces evidence to demonstrate that women's moral characters are better than those of men, and that the moral superiority of women leads to an intellectual superiority. Second, Marinella advances the case being made by women and their supporters beyond a demand for sympathy and respect from men to a demand for freedom, power, and equality (Cox 1995, 520). Although she did not propose concrete reforms, she did analyze the situation of women in explicitly political terms. Third, although many had decried the viciousness of those who argued for the inferiority of women, Marinella was one of the first to supply an explanation of the motives of men who published misogynist works, and to connect those motives to the exclusion of women from public life (Jordan 1990, 259; Cox 1995, 516).

1. Life

Lucrezia Marinella was born in Venice in 1571, and lived there until her death in 1653. Her father, Giovanni Marinelli, was a physician, and the author of a number of medical treatises, two of which concerned women. He encouraged her intellectual interests, and allowed her access to biological and medical works, as well as works of philosophy (natural and moral) and literature. She was thus able to obtain a good education. Marinella composed works in a number of different genres, including lyric and narrative poetry and devotional literature, but her work as a philosophical polemicist demonstrated direct knowledge of the classical literary tradition and training in rhetoric and dialectic, all of which were unusual among women at the time (Panizza and Wood 2000, 65).
Marinella's first published work appeared in 1595; her most important work, the treatise entitled The Nobility and Excellence of Women and the Defects and Vices of Men (hereafter The Nobility), was published in Venice in 1600, revised and expanded in 1601, and reprinted in 1620. She continued writing and publishing until her death.

Marinella married another physician, Girolamo Vacca, relatively late in life. In the sixteenth century political and economic conditions in Venice, and their impact on marriage opportunities, gave women more liberty, which may have favored feminist polemics (see Cox 1995). In that context, Marinella's late marriage may have afforded her greater opportunities for education and a measure of independence. There is some evidence that she was accepted as part of a group of intellectuals who formed the second Venetian academy, and that the academy supported and encouraged the expression of her feminist ideas (Kolsky 2001, 976).

Marinella was commissioned to write The Nobility, either by Lucio Scarano, to whom it is dedicated (also a physician, and a philosopher), or by its publisher, Giovanni Battista Ciotti (Kolsky 2001, 975; Ross 2009, 291). It was intended as a response to a treatise by Giuseppe Passi, I donneschi difetti (The Defects of Women). The commission attests both to the reputation Marinella enjoyed as an intellectual and to the support she received from a wider circle.

In the sixteenth century the Italian vernacular increasingly replaced Latin as the language appropriate to a broad range of topics and genres (Panizza and Wood 2000, 65, 195), so Marinella was not eccentric in her choice of the vernacular (her father also wrote in Italian, and had publicly urged others to do so). The choice did, nonetheless, mean that her treatise in defense of women was available to more people, especially women. In her own time she was renowned as learned and eloquent, and she acquired a reputation as a rigorous scholar and a skillful philosopher; this was certainly in part due to the merits of her published work, but the respect she enjoyed—very unusual for a feminist—may have had something to do with the seclusion and sexual modesty she practiced.

2. The Nobility and Excellence of Women and the Defects and Vices of Men: context, sources, and structure

Although many of Marinella's works, especially the long poem L'Enrico, overo Bisantio acquistato (1635), include philosophical themes, The Nobility stands as her most important, and perhaps her only uncontestable, contribution to philosophy. It is a contribution to a debate about the nature and the merits of women, which had its origin in The Book of the City of Ladies (1405) by Christine de Pizan (an argument for the moral superiority of her sex), written in response to The Romance of the Rose (~1275) by Guillaume de Lorris and Jean de Meun, in which women were vilified. Polemical works arguing that men were superior to women, or that women were superior to men, had proliferated in subsequent centuries, in French, Italian, Latin, German, Spanish, and English. Such treatises generally relied on some combination of argument (usually drawn from ancient sources), examples, and citations from scripture and from literary or philosophical authorities. The Nobility appeared when the debate had been running for two hundred years.
Marinella was responding directly to Passi's I donneschi difetti (The Defects of Women), published in 1599 in Venice and Milan, and the structure and methodology of The Nobility accordingly mirror those of The Defects of Women (Kolsky 2001, 974). Passi had cited a variety of ancient and medieval authorities, many of whom cite Aristotle as the source of their arguments, which may explain Marinella's particular concern both to use and to discredit Aristotle; at any rate, she takes Passi to be a contemporary representative of a misogynist tradition beginning with Aristotle and extending forward through Boccaccio.

The Defects of Women stands as an extreme example of the genre of attacks on women, with Passi claiming that women are covered from head to toe in vices and defects (Passi 1599, 240). The argument of the treatise begins with the claim that the female is imperfect, created only as a "necessary evil". The imperfection of women is fundamentally that they are especially subject to passion (8). The heat of women's bodies is both the source and the sign of their subjection to passion (29). Because women are in thrall to their passions, they may not, strictly speaking, be rational animals (216). This possibility is suggested and legitimized by Aristotle's assertion that the deliberative faculty of women has no authority (215). Passi satirizes intellectual women (278–9), and comes close to suggesting that women are a different species from men, insisting that women should be treated like animals because both women and animals lack reason and virtue (Malpezzi Price and Ristaino 2008, 108). The Defects of Women is remarkable for its virulence, but in no way original in substance.

There were precedents for many of Marinella's claims and arguments in The Nobility, perhaps as early as Christine de Pizan (although there is no evidence that Marinella had read Pizan—see Ross 2009, 326 (n. 6)), but certainly in Henricus Cornelius Agrippa's De nobilitate et praecellentia foeminei sexus (On the Nobility and Excellence of the Feminine Sex), published in Latin in 1529 and translated soon after into Italian (in 1549 by Alessandro Piccolomini), and in Baldassare Castiglione's Il Cortegiano (The Courtier) (1528). Agrippa drew an analogy between the subjection of women and political tyranny, and this may have paved the way for Marinella's political approach to the issue of women's nature. Castiglione, using the form of the dialogue, offers a refutation of certain Aristotelian claims about the imperfection of women (through the voice of Giuliano de' Medici) and an exposition of Neoplatonic love theory as popularized by Marsilio Ficino (through the voice of Pietro Bembo). Marinella also draws on Leone Ebreo's Dialoghi d'amore (1535), in which love is a cosmic force infusing all of creation, and the love between men and women, and not only between men, is recognized as a route to divinity. While Marinella drew on these works extensively, her argument marks a philosophical advance because it is detailed, systematic, and cogent.

Although The Nobility was published in the same year as Moderata Fonte's dialogue in defense of women, Il merito delle donne, the latter had been written some years before. Marinella makes several references to Fonte, but none to Il merito, so it is impossible to know how well acquainted she was with Fonte's work, or what she thought of it. (For a discussion of Marinella's knowledge of Fonte's work see Kolsky 2001, 981–2.)
In The Nobility Marinella argues that there is a feminine nature that is different from, and superior to, masculine nature. It was a commonplace that a person was called to a certain office in life by God; the nobility of a person was a function of that office and how well one carried out its duties. "Questions of virtue thus inevitably allude to a social hierarchy that was generally accepted as a reflection of the hierarchy of creation, an order in nature or of nature, instituted not fortuitously but providentially, and therefore not subject to alteration by human beings" (Jordan 1990, 21). In this intellectual context, arguing that women were superior in nature to men was a way of arguing that the office in life that women were intended by God to fulfill was itself better.

Marinella's central claim in The Nobility is "…that the female sex is nobler and more excellent than the male" (1601b, 39). More precisely, she says that she will show "…that they [women] surpass men in the nobility of their names, causes, nature, operations and the things men say about them" (41). That names might indicate something about the things to which they refer was a commonplace of the Renaissance, with origins in the interpretation of Plato's Cratylus. The causes of a phenomenon were similarly taken to indicate something important about the phenomenon itself, better causes producing better effects. The "nature" of woman Marinella shows to be, on the one hand, a nature shared with men and, on the other, a distinct nature; as the formal cause of a substance, the nature determines the worth of that substance. The "operations" of women are the things they are able to do insofar as they are ensouled beings rather than inanimate objects; because Marinella holds that the quality of a soul is revealed in its activities, she takes the superior merit of the activities women perform with the soul as evidence that women have better souls than men. Finally, Marinella's objective to demonstrate that men themselves make evident the superiority of women (by means of "the things they say about women") represents her most important strategy: to take the evidence usually adduced by men to demonstrate the inferiority of women, and reveal through interpretation that in fact it demonstrates the superiority of women.

The Nobility is divided into two parts, the first of which demonstrates the nobility and excellence of women, the second of which sets out the defects and failings of men. Both the respects in which she claims superiority for women and the contrast she draws between the excellences of women and the vices of men are standard in the contributions to the querelle des femmes that take the side of women. What is unusual with Marinella is the learning, the sophistication, and the systematic and cogent development of the arguments.

The argument for women's superiority is largely set out in the first part of the treatise. But the second part, on the defects of men, is not incidental to the central claim that women are nobler. Marinella details the defects of men, and in particular their evil motives, in order to support her positive argument for women's nobility, by demonstrating that the motives men have for denigrating women are ignoble, stem from defects of nature, and are thus evidence of the inferiority of men.
So the defects of men are introduced not only so that women might appear more excellent in comparison, but also, and more importantly, to show that the deficiencies attributed to women by men are more properly the deficiencies of male nature, and that those very deficiencies are responsible for the fallacious claims about women made by some men (Aristotle and Passi in particular). Marinella thus offers an explanation for the misogynist claims to which she is responding, and that explanation supports her claim that women are better than men in certain precise respects.

3. Nobility as a function of causes

Marinella draws on both Platonist and Aristotelian accounts of causation, interpreted through ancient, medieval, and Renaissance commentators (she cites Plotinus, Lombard, Ebreo, and Ficino), when she argues that women are superior to men with respect to the causes that have generated them. Aristotle had posited four kinds of cause: material, formal, efficient, and final. In the case of an unqualified change, such as the generation of a substance (e.g., a person or a squirrel), the causes can be understood as follows: the material cause is the stuff out of which the substance is made, which remains as a constituent of the substance; the formal cause is the principle of organization that bestows on the individual substance both its form and its function, making it a member of a natural kind, with the characteristic properties and behaviors of that kind; the efficient cause initiates the process of generation; and the final cause is the aim or end point of the process, which will often be in effect the same as the formal cause, because the aim of a process of generation is the mature and perfected form and function of the substance.

Consider the example of the generation of an individual belonging to a natural kind, the squirrel (a natural substance). The material cause of the squirrel is the flesh, bone, blood, etc. from which the squirrel is constituted; the formal cause of the squirrel is its shape and function; the efficient cause of the squirrel is the male parent of that squirrel (because, on Aristotle's view, it is the male parent that initiates the process of generation of offspring); and the final cause is to be a mature squirrel and carry out the functions of a squirrel (whatever those may prove to be).

Marinella, following this tradition, distinguishes the "efficient or productive cause" from the material cause in the production of every creature, among which she includes woman and man. On her view, all created things (for example, all angels, heavenly bodies, people, elements—earth, water, fire, and air—and animals) ultimately have the same productive or efficient cause, namely God. With respect to the efficient cause, then, created things differ not at all. But there are distinctions in worth among created kinds (and also among individuals, insofar as the causes of two individuals of the same kind might differ), and these distinctions are a function of the differences in the ideas of God, whom Marinella compares to an architect or painter who effects the production of buildings or artworks through the formulation of an idea or plan. The ideas are the formal causes, which, as we have seen, are the principles of organization that bestow on the individual substance both its form and its function; these produce the different kinds of creatures, and the differences among individuals within a kind.
On this account, then, God's creative process resembles the production of artifacts: in the same way that the painter will have better and worse ideas (in the sense that the ideas for the paintings will be ideas of better and worse things), so too God will have ideas of better and worse things that he brings into being: what God creates is not of uniform value. (The conception of "ideas" here, while clearly Platonic in origin, parallels Aristotelian formal causes.) In describing the important differences among ideas in the mind of God, and hence among formal causes, Marinella adverts in particular to the different purposes that different kinds are to serve:

That same courteous hand created angels, heavens, men, and the rude, dull earth, all in varying degrees of perfection…. It is the creator who decides which things are of less value and which are worthier, and more particularly, which have a less noble purpose and which a more remarkable one. (1601b, 52; references to Marinella are all to 1601b)

That is, while God as the productive cause of every created thing is one and the same, the formal causes—the ideas in the mind of God—will be different, and of different value. Moreover, the ends to which the productive cause, God, intended to put each creature are different, and so the final causes responsible for the creation of different creatures will also be different. Marinella does not immediately conclude that women are superior to men; rather she concludes that it is possible:

Different degrees of perfection can be found, therefore…in everything in the world…If this is the case…why should not woman be nobler than man and have a rarer and more excellent purpose than he, as indeed can be manifestly understood from her nature? (1601b, 53)

In other words, if (i) the ideas of created kinds in God's mind differ with respect to the intrinsic worth of the kind and with respect to the purpose of the kind, and if (ii) God's idea of woman was different from God's idea of man, so that women have a different divinely determined purpose from that of men, then (iii) it is possible that women are nobler than men. Women and men might then be different with respect to the idea and purpose (the formal and final causes) in the mind of God, but they are not different with respect to the efficient cause that brings them into being—God himself.

Marinella's discussion of the fourth kind of cause, the material cause, forms an important part of her argument, in which she makes a number of distinct points about the body as material cause of the ensouled being. She believes women are better than men with respect to the material cause, first citing an argument made by Christine de Pizan and restated by Agrippa: because woman was created from the rib of man, and man was already an ensouled and hence a living being, the material cause of woman is better than that of man, who was made from earth, which is lifeless matter. This argument depends on the implicit premise that ensouled beings are superior to inanimate beings, but that was a view of the hierarchy of being current since antiquity, and so one that Marinella believes herself entitled to use. This is only the first demonstration of the superiority of women's bodies to men's; Marinella has quite a lot to add on the subject of the material or bodily superiority of women (discussed in Section 5 below), insofar as that superiority is a sign or mark of a better soul.
4. The different natures of men and women

To demonstrate that women are superior in nature to men, Marinella has to establish that it is possible for members of the same species to have souls that are the same in kind and yet different in merit. She acknowledges the widely accepted view that a species form is the same in every individual:

…if we speak as philosophers, we will say that man's soul is equally noble to woman's because both are of the same species and therefore of the same nature and substance. (1601b, 55)

…if we wished to apply the common reasoning, we would say that women's souls are equal to men's. (1601b, 57)

She is adverting here to Moderata Fonte and again to Agrippa, who had begun his Declamation by claiming that

God has attributed to both man and woman an identical soul, which sexual difference does not at all affect…Thus, there is no preeminence of nobility of one sex over the other by reason of the nature of the soul; rather, inwardly free, each is equal in dignity. (Agrippa 1529, 43)

She likely also has in mind Castiglione, who wrote that

the male will not be more perfect than the female as regards their formal substance, because the one and the other are included under the species man, and that in which the one differs from the other is an accident and is not of the essence. (Castiglione 1528, 214)

Marinella agrees that women have the same rational souls as men, and belong to the same species, but denies that it follows from this that their souls are no nobler than the souls of men. That is, she argues, on the basis of the theory of causation that she set out, that

it is not impossible that within the same species there should be souls that are from birth nobler and more excellent than others…I say that women's souls were created nobler than men's. (1601b, 55)

Marinella quite explicitly rejects, then, the idea that men and women must be equal in nobility because they belong to the same species; but she also anticipates the objection that because the species essence, the rational faculty of soul, is the same in every individual person, we might expect that men and women would be equal in worth. It follows, and again she recognizes this, that she must assert that the form of a species is not without diversity, and so she explicitly argues for variations in the idea or form or soul of the human species. "Women's souls can, therefore, be nobler and more prized in their creation than men's" (1601b, 57). In light of what she has said about ideas in the mind of God as productive causes of created beings, this must mean: the idea of woman (or perhaps the ideas of individual women) in the mind of God is an idea of something with a nobler and better purpose, with the result that the creature produced is nobler and more excellent.

This is consistent with the claim that men and women have the same rational souls, if we allow that the human soul is constituted by something more than the faculty of reason, and so two human (and hence rational) souls might be put to different purposes. Marinella seeks to show that the purposes to which a rational soul is put will depend, at least in part, on the desires of the rational agent. The question she confronts is, then: if men and women have souls that make them the same in species—the same in rational capacity—how can women be superior to men? Marinella's answer depends on distinctions in the faculties of soul, and in particular on a distinction between the rational part and the desiring part, in which the moral virtues are located.
She aims to show that women are morally superior to men, and that this causes them to be "even better than men at learning the same arts and sciences" (1601b, 83). That is, her argument is that the moral superiority of women has an impact on their rational faculties, which causes them to be intellectually superior (better at the same arts and sciences) although they are created with the same rational faculty. The argument depends ultimately on Marinella's views about the causal role of bodily temperature in the soul's functions, and about the moral status of the actions that issue from the soul's functioning, views that will be elaborated in the following section.

But Marinella produces a variety of kinds of evidence that, with respect to the moral virtues, and especially with respect to the control of the passions, women are superior. First, she claims that women are superior to men with respect to a variety of individual moral and intellectual virtues, and provides evidence in the form of examples of excellent women who manifest these virtues ("It is a fact known to everyone that women are continent and temperate, for we never see or read about them getting drunk or spending all day in taverns, as dissolute men do, nor do they give themselves unrestrainedly to other pleasures" (1601b, 94)). Second, she points out that since men, whatever they might say about women, treat women with signs of honor, and since "nobody honors another person unless they know that the person has some gift or quality that is superior to his own" (1601b, 69), we should conclude that men themselves recognize the superiority of women. But, as we will see in the next section, she ultimately traces the moral superiority of women, which is responsible for their intellectual superiority, to differences in men's and women's bodies.

Marinella clearly suggests, then, that both deliberative and speculative reason are exercised more effectively by women because of the moral advantages of their sex. Granting that women and men have the same rational faculties of soul, if women are morally superior to men, they will also thereby attain intellectual superiority, making them better at learning the same arts and sciences. Although the souls of women are "still nobler" than those of men with respect to the moral faculty in particular, that nobility will have an impact on the rational faculty, with the result that women are better than men intellectually and not only morally.

So while Marinella asserts (on the authority of Aristotle and scripture) that we can know the souls of men and women to be forms of the same species, rational souls, and equal in that respect, she also insists that this fundamental sameness allows nonetheless for distinctions in merit. Women's souls are superior to those of men because the moral character of women makes their faculty of desire, and ultimately also their faculty of reason, better than men's. The superiority of women's desires she traces to their temperate physiology, and she takes the proof of the superiority of their souls to be manifested in the beauty of their bodies.

Although the souls of men and women are identical with respect to the rational faculty, Marinella claims that women have better souls than men because (i) they have better desires, which in turn (ii) affect the capacity for reason, effectively rendering women better able to access and act on their reason, so that (iii) women behave better, and in particular with more moderation.
5. The evidence from bodies that the sexes have different natures

She also argues that the female body offers evidence of the superiority of women's souls, both as a cause and as an effect, despite the equality of rational capacity enjoyed by men and women. She adopts the common distinction between body and soul:

Women, like men, consist of two parts. One, the origin and cause of all noble deeds, is referred to by everyone as the soul. The other is the transitory and mortal body. (1601b, 55)

The soul, on her view, commands the body (or ought to); at the same time, it is dependent on the body for its operations (55). That is, the operations of the soul—including desires, thoughts, decisions, and actions—require the body. Because the soul relies on the body for its operations, the body manifests or expresses the character of the soul and its faculties, in a variety of ways.

When feminist philosophers first considered the question of sexual difference in the Renaissance, they were writing in response to overtly misogynistic claims (in Marinella's case, the claims of Passi), which centred on the physical, moral, and intellectual failings of women. One question feminists then had to face was whether it was better (i) to concede that women might appear to be inferior to men in a variety of ways (to be, for example, more foolish or more focused on frivolous pursuits) but to dispute that nature made them such, or (ii) to dispute that women were in fact more foolish, morally weak, or physically incapable. Marinella by and large adopts the second strategy, arguing that women in fact display in their behavior not ignorance, unreason, vanity, or flightiness, but on the contrary all the moral excellences that their detractors accuse them of lacking. At the same time, she does believe that men have suppressed the abilities and limited the opportunities of women, particularly with respect to intellectual endeavours, and that is a reason to expect that women might not be able to speak for themselves—their souls cannot express themselves directly (Marinella 1601b, 80; Malpezzi Price and Ristaino 2008, 116). The suppression of women's abilities, and the concrete suppression of their speech, justifies in Marinella's view moving to consider the evidence of women's bodies in order to understand their souls, and to argue for the superiority of women on that basis.

Because the soul of a person operates through the body, the body manifests certain characteristics of the soul, and so stands to offer evidence of the character of the soul. Marinella intends, then, to demonstrate the superiority of women's souls through the superiority of their bodies. She appeals to two physical indications of the greater nobility of women's souls: (i) the moderate temperature of the female body, and (ii) their bodily beauty. Consider her claim that "the greater nobility and worthiness of a woman's body is shown by its delicacy, its complexion, and its temperate nature, as well as by its beauty" (1601b, 57). She believes the delicacy and complexion peculiar to women's bodies are caused by the more moderate temperature of the body, so, despite this list, there are only two fundamental differences in the bodies of men and women: temperature and beauty.
Temperature, on Marinella's view, is the physical cause (the material cause) of the superiority of women's souls:

it is necessary that I should clarify to some extent the nature of the body, because nearly all of its virtues and defects depend on its temperature, so that reason, even though it is master, is frequently dazzled and blinded by the senses. (1601b, 77)

The right, moderate temperature will ensure that reason is not blinded by the senses, and hence will allow reason to retain control over desire; and moderate temperature is, on Marinella's account, most often found in women. The philosophical foundation for the claim that the temperature of women's bodies is a sign of their virtue is an interpretation of Aristotle's natural philosophy. (It is possible that she also had Galen in mind.) Marinella claims that the lower temperature of women's bodies causes them to have superior moral virtues, and in this she is original; Castiglione had argued that women were more temperate physiologically, but he did not draw the connection to moral temperance so explicitly (see Castiglione 1528, 219). Marinella reports that Aristotle says that women are “Less hot than men and therefore more imperfect and less noble,” (130). She agrees with Aristotle that relative to men women are cooler in temperature, but she disagrees that women are cold in absolute terms. She then develops an argument that demonstrates, in part by pointing out discrepancies in Aristotle's own account of the effects of heat and cold on certain soul functions, that the relative coolness of the female is a moral, and ultimately an intellectual, advantage. Aristotle believed that the fundamental difference between male and female animals was a different capacity to concoct excess blood in the body into semen through a process that involved the transmission of heat to the blood; ultimately this difference was caused by a difference in heat, or the capacity to transmit heat, in the hearts of males and females (Marinella cites History of Animals IX, but there is considerable evidence for this view in the Generation of Animals as well; see, for example, GA IV.1 766a31–6). That is, the distinction between male and female animals resides in their heart, which is the source of natural heat, such that women are less capable of producing vital heat. Aristotle consistently says that animals that are more intelligent have the “purest” blood; and more generally, he asserts that the quality of blood affects the intelligence and temperament of animals (see, e.g., Generation of Animals 2.6 744a28–32, Parts of Animals 2.4 650b19–25, 651a12–16). Moreover, Aristotle suggests that these differences in blood occur not only across animal species, but also between the sexes in a species. His claim is that hot, thin, pure blood is best, because such blood correlates both with courage (manliness) and with practical wisdom (Parts of Animals II.2 648a2–14). The implication is clearly that those animals that benefit from hot, thin, pure blood are superior both with respect to deliberative reasoning and with respect to the moral virtue of courage. So men, in virtue of their body temperature, have an advantage, both intellectual and moral, over women.
But Aristotle's views on the effects of temperature on the blood, and through the blood on the soul's faculties, present certain challenges of interpretation: in the same passage of the Parts of Animals just cited, Aristotle says first that cold, thin blood is best for intelligence, and then that hot, thin blood is best. Marinella exploits this ambiguity, elaborating on Castiglione's point that women are, in themselves, temperate rather than cold (see Castiglione 1528, 219). She agrees that women are colder, and embraces the notion that cold blood is intelligent blood. But she goes further than Castiglione in asserting that hot blood is associated with intemperate passions, and deduces the superiority of women with respect to intelligence, temperance, and general virtue or nobility. She says,

I now believe that Aristotle did not consider the workings of heat with a mature mind, nor what it signifies to be more or less hot, nor what good and bad effects derive from this. (1601b, 130)

Here she links maturity with femininity, and femininity with relative coldness of temperature, thus extending the claim that cold blood supports greater intelligence to the claim that cold blood supports superior moral strength, by claiming that cooler blood encourages temperance with respect to pleasure and desire. Passi, and other opponents of women, accused women of intemperance, lasciviousness, and inconstancy; Marinella argues in response that the cooler temperature women enjoy allows them to keep their desires in check so that they can reason more effectively than men, who are over-heated relative to women. The degree of heat in a living body directly affects the specific character of the operations of the soul, making, for example, reasoning or desiring more or less principled, or more or less impulsive. Marinella asserts (citing Plutarch as her authority) that “heat is an instrument of the soul,” (1601b, 130). That is, the soul will operate in some instances through the mechanism of heat; the soul must use the body's heat as an instrument to conduct its operations—where those operations are not only rational activities, but are also the operations of desire and appetite. Now, since these operations of the soul affect in turn the actions that a person takes, the effects of body temperature extend beyond the direct effects on the activities of the soul, to the choices and actions the person takes, causing them to be virtuous or vicious. This is borne out by what Marinella says about the relation between temperate body heat and both moral and intellectual virtues. “Little and failing heat, as in old people, is powerless for the soul's operations,” whereas excessive heat “makes souls precipitous and unbridled,” (1601b, 130). So insufficient heat makes the soul's operations (cognitive and moral) ineffectual, but excessive heat makes the soul's operations unprincipled and impulsive. Now, the effects of insufficient or excessive heat are deeds, good or bad. Insufficient heat will lead to inaction, and excessive heat will lead to vicious action. In response, then, to Passi and Aristotle, Marinella concedes the empirical point that women are colder, but (i) disputes that women are colder absolutely (because, as Castiglione pointed out, temperature is relative, and hence women might be colder than men and yet temperate ‘in themselves’), and (ii) disputes the relation between heat and nobility, first citing various examples of those who are hotter than, but not nobler than, some others.
She points out that regional climatic differences historically considered to affect body temperature exceed and confound sex differences—so African and Spanish women will be hotter than German men. If, as Marinella's opponents believe, greater heat necessarily leads to greater virtue, then they should concede that African women will be more virtuous than German men. But those who argue that women are less noble than men because they have less heat than men will not allow that men living in colder climates are less noble than women living in warmer climates. So they should abandon the general principle that heat correlates with greater nobility. Moreover, Marinella claims that some individuals will have had ‘natures’ hotter than Plato and Aristotle (it is not clear what the evidence for this is, but it is clear that she supposes there is some independent measure of heat other than virtue). And she assumes we can all agree that no one has ever been more virtuous than Plato and Aristotle. So, once again, her opponents must concede that it is not a universal truth that greater heat leads to greater virtue. We might expect Marinella, having disputed the relation between heat and nobility, to abandon any attempt to argue that women are superior to men in virtue of their body temperature. But she does not acknowledge an inconsistency in arguing both that there is no correlation between greater heat and greater virtue and that women are superior because they are colder. If one were to object to her argument that men who live in colder climates ought to be cooler than women who live in warmer climates, and therefore better than those women, she would reply that those men had effectively become women:

…if a man performs excellent deeds it is because his nature is similar to a woman's, possessing temperate but not excessive heat, and because his years of virile maturity have tempered the fervor of that heat he possessed in his youth and made his nature more feminine so that it operates with greater wisdom and maturity. (1601b, 131)

This also suggests that in those places where women become hotter because of the climate, they will become worse—but not relative to men who live in the same climate. If there is a correlation between lower body temperature and virtue, and one allows that in warmer climates everyone will have a higher body temperature, but that women will generally have a lower body temperature than men in the same climate, then women will be more virtuous than men in any given region. Marinella argues, then, that Aristotle was correct to conclude that women are usually, if not always, cooler than men, but she insists that women are temperate rather than cold. She then links, causally, the physical state of a cooler body temperature with a capacity to execute the soul's operations with sufficient force, but without excessive passion: those who are physically temperate are also psychologically and morally temperate. When men succeed in virtuous deeds, it will be because they have become more feminine—more moderate in temperature—often with maturity, which Marinella assumes brings with it moderation in temperature. If women are (generally) the right temperature for virtue, and men are (generally) hotter than women, then clearly men are too hot. Moreover, there is independent evidence for men's excessive heat in their actions, in particular in the intemperance they exhibit and the passionate love they indulge in.
The greater nobility of women's moral character allows them to perform more noble intellectual acts, because of the relation between desire and intellect. A person has a better character when the faculty of desire submits willingly to the faculty of reason. And when that is the case, reason is freer to exercise its own functions, without interference from desire. So the nobler moral character of women permits their intellects to focus on rational activities, without the distraction of having to control unbridled or mistaken desires. This accounts for the greater intellectual ability, as well as the superior moral character, that Marinella attributes to women. The passionate love men experience for women is excited by the beauty of women, and that beauty is the second of the fundamental differences Marinella cites between the bodies of women and those of men, as evidence of the superiority of women's souls and ultimately of women themselves. While temperature is a cause of superiority, beauty is a manifestation of the superior character of the souls of women, and evidence of the nobility of the idea in the mind of God that is the form of woman: “…the Idea of women is nobler than that of men. This can be seen by their beauty and goodness, which is known to everybody” (1601b, 53). Marinella understands the body to manifest the character of the soul, so she believes that we can know that women's souls are nobler than men's through “the effect they [the souls] have and from the beauty of their bodies,” (55). The beauty is an effect of “a grace or splendor proceeding from the soul as well as from the body,” (57). And Marinella states quite explicitly, “The soul…is the cause and origin of physical beauty,” (58). Taken together, these assertions suggest that the soul bestows grace or beauty on the body, and that those qualities then ‘proceed’ from the body as well as from the soul. If we accept that the soul is the cause of physical beauty, and we assume that effects resemble causes, then we can learn something about the character of the soul from the character of the body. To understand how Marinella conceives of beauty, and her philosophical purposes in appealing to beauty as evidence of moral and intellectual superiority, we need to consider her sources, and also the relation between beauty, heat, and the virtues of women as she contrasts them with the vices of men. We should also notice that she claims, on the one hand, that all women are beautiful and that no man is: “I say that compared to women all men are ugly,” (63); and, on the other hand, allows that there are variations in beauty between individual women, and that men can be more or less beautiful (167). The philosophical sources for the view that women's beauty is a sign of a virtuous soul are interpretations of Plato's dialogue, the Symposium. Ficino's translation of the Symposium was especially influential at the time Marinella was writing, and she does refer to it, but she seems to have based much of her discussion on Leone Ebreo's Dialoghi d'amore. Two aspects of Platonic theory are pertinent here. The first is the view that particular beings in this world have the features they do by means of participation in ideal Forms. So women are beautiful because they participate in the Form of the Beautiful Itself; and their beauty is itself a mark or sign of their participation. “Divine beauty is…the first and principal cause of women's beauty,” (1601b, 60).
The Platonist understanding of causation allows that while divine being is the first and principal cause, there are creaturely causes which mediate the effects of that first cause, so that the soul of a woman can be the immediate cause and origin of her physical beauty while divine beauty is its first cause. Marinella cites Ebreo for the claim that corporeal beauty is an image of divine beauty, and then argues:

If it [corporeal beauty] came solely from the body, each body would be beautiful, which it is not. Beauty and majesty of body are, therefore, born of superior reason. (1601b, 59)

The second aspect of Platonic theory that serves Marinella's argument from beauty to the superiority of women is the view that erotic desire, understood as a response to (and a desire for) beauty, is an impulse which leads us ultimately to the Form of Good Itself, by means of particular good things, which are identified with beautiful things. Agrippa had made a similar argument, which may have influenced Marinella. He wrote:

Since beauty itself is nothing other than the refulgence of the divine countenance and light which is found in things and shines through a beautiful body, women—who reflect the divine—were much more lavishly endowed and furnished with beauty than man (Agrippa 1529, 50)

and adds that “all are dazzled by her beauty and love and venerate her on many accounts” (51). Marinella reports that ‘Platonists’ say: “External beauty is the image of divine beauty,” (1601b, 58). She agrees with the poets who say that beauty is a path that guides us directly to the contemplation of divine wisdom; and in her own voice asserts that beauty is a golden chain that “always raises us toward God, from whom it is derived,” (64–5). So the fact that men experience desire for women, because they perceive them as beautiful, is a sign that men recognize that women are good, and indeed better than men. This is because we desire to possess the good forever, and we do not desire what we already have. If, then, men desire women, it must be that women are better than men, and closer to the divine. Marinella's own account of beauty, for which she cites Plotinus as the source, is simply this: beauty is “a ray of light from the soul that pervades the body in which it finds itself.” She also cites Ficino's letters, and a variety of poets, in support of the view that beauty is a kind of light—or like a ray of light—from the soul, which is compared to the sun. On this account, then, beauty does not lie in the symmetry of features, or youth, or indeed in any material feature of women's bodies. It is rather an ineffable aura that pervades womankind. Marinella has support for this view from Platonist philosophers, and indeed from those who oppose her and yet allow that “women's lovely faces shine with the grace and splendor of paradise,” and she uses it to undermine the claim for the superiority of men. If men were in fact superior to women, then it would be women who desired men, not men who desired women, whereas in fact:

they [men] are forced to love them [women] for this beauty, while women are not forced to love men, because that which is less beautiful, or ugly, is not by nature worthy of being loved…They [men] would not be loved by women were it not for our courteous and benign natures, to which it seems discourteous not to love our male admirers a little. (1601b, 63)

Marinella thus appears to assert that women do not desire men, but experience only a polite reciprocal affection for those men who love them.
This is likely to have been a strategic claim, aimed as a retort against Passi and other men writing against women, who often accused women of lasciviousness and promiscuity. Marinella is claiming that sexual desire is so foreign to women (at least to most women) that they are incapable of lascivious sentiments or acts. If women are beautiful and temperate in their bodies, it is because their souls are better organized than the souls of men; in particular, it is because their desires are obedient to reason. And men are susceptible to woman's beauty because of the intemperate heat of the male body, which is both produced by the deficiency of the male soul and a manifestation of that deficiency:

I wish to go further and show that men are obliged and forced to love women, and that women are not obliged to love them back, except merely from courtesy. I also wish to demonstrate that the beauty of women is the way by which those men who are moderate creatures are able to raise themselves to the knowledge and contemplation of the divine essence. (1601b, 62)

So Marinella turns the arguments of men—both the argument that women are defective and colder, and the argument that women are defective and weak of will—to her own purposes, demonstrating that women are better for being cooler, and are less weak of will than are men, who succumb to the passionate desire for beauty much more readily than do women. Marinella demonstrated extraordinary scholarship by the standards of her day, particularly with respect to the variety of sources she was able to cite, and with greater accuracy than many of her contemporaries. Many defenders of women had contented themselves with responding to vaguely defined opponents; Marinella, by contrast, cited her authors and their texts with some precision. This indicates both that she had access to the texts themselves, and not only to reports on the texts, and also that she understood the scholarly force of accurately and precisely representing the claims of an author. This is one aspect of her methodology that serves as a strength, and singles her out from the crowd of pro-woman writers (see Ross 2009, 289). Her extensive use of citation served not only to exhibit those claims with which she was to take issue, but also to allow her to interpret for her own purposes the claims of authors with whom she disagreed. She often cites the same author both as an authority in support of her arguments, and as a target against whom she argues—and the effect of this is to undermine the authority of that source. Marinella was then engaged in establishing the unreliability of the very authorities whose work was supposed to support the inferiority of women. If Aristotle did not maintain a coherent account of the relation between temperature and rational capacity, we should not trust his assessment of the relative merits of men and women. What is unusual about Marinella's historicism is that it undermines an important and representative authority for patriarchy, and consequently links historicism to feminism.

She denies the whole category of the authoritative with its implied opposition, the category of the specious, and substitutes her own concept of the author—one whose claim to the truth is no more than contingent. (Jordan 1990, 258)

Marinella's methodical use of oppositions in the work of a single author, the most powerful case being that of Aristotle, leaves room for her to suggest that experience may be a better source of our knowledge of women, their capacities and natures.
If one effect of Marinella's marshalling of the evidence for both sides of the woman question is to undermine the authority of those authors most frequently cited by her opponents, another effect is to raise the possibility of a skeptical agenda. Like many pro-woman authors, Marinella begins by accepting the claim that the rational souls of men and women are the same, before going on to argue for the superiority of women. We might wonder why she, and others, were not content to rest with the claim of equality at the level of the rational soul. To put the question another way: Did Marinella truly believe that women are better than men, or did she argue for that position for other reasons? It is possible that the arguments for superiority are intended to raise skeptical doubts in the minds of her audience, doubts which would make them reluctant to decide the question of superiority between men and women. If it seemed preposterous to argue that women were superior to men, and yet one could do so using unimpeachable authorities, then arguments for the superiority of men over women, constructed on the foundations of those same authorities, might seem less convincing. So, although many contemporary and later interpreters assume that The Nobility is a treatise in support of the superiority of women, there is some evidence to suggest that Marinella may have argued for the claim of superiority not so much to establish its truth as to call into skeptical doubt the truth of the claim of superiority for men made by her opponents (see O'Neill 2007 for an argument that skepticism informed the work of another sixteenth-century feminist, Marie de Gournay). Some evidence that Marinella's methods are not entirely transparent lies in her late work, Essortationi alle donne (Exhortations to women), which on the surface appears to be a palinode, a rejection of a lifetime dedicated to study and writing—she specifically urges women not to aspire to a literary career. Some interpreters have, however, detected in this work “a residue of defiance combined with the possibility of unmasking male techniques of domination,” (Kolsky 2001, 984). Several kinds of evidence point to the possibility that Marinella did not intend the Essortationi to subvert the claims of The Nobility: her use of “irony, paradox and contradiction,” together with a prefatory remark instructing readers to look below the surface of the text, and the reputation of the printer of the Essortationi as one who was infamous for publishing “layered discourses” (Ross 2009, 296–8; Malpezzi Price and Ristaino 2008, 120–55).
- Marinella, L., 1595, La Colomba sacra. Poema eroico, Venice.
- –––, 1597, Vita del serafico et glorioso San Francesco. Descritto in ottava rima. Ove si spiegano le attioni, le astinenze e i miracoli di esso, Venice.
- –––, 1598, Amore innamorato ed impazzato, Venice.
- –––, 1601a, La nobiltà et l'eccellenza delle donne co' diffetti et mancamenti de gli uomini. Discorso di Lucrezia Marinella in due parti diviso, Venice.
- –––, 1601b, The Nobility and Excellence of Women, and the Defects and Vices of Men, Dunhill, A. (ed. and trans.), Chicago: The University of Chicago Press, 1999.
- –––, 1602, La vita di Maria vergine imperatrice dell'universo. Descritta in prosa e in ottava rima, Venice.
- –––, 1603, Rime sacre, Venice.
- –––, 1605, L'Arcadia felice, Venice.
- –––, 1605a, L'Arcadia felice, F. Lavocat (ed.), Florence: Accademia toscana di scienze e lettere, ‘La Colombaria’ 162, 1998.
- –––, 1605b, Vita del serafico, et glorioso San Francesco. Descritto in ottava rima, Venice.
- –––, 1606, Vita di Santa Giustina in ottava rima, Florence.
- –––, 1617, La imperatrice dell'universo. Poema heroico, Venice.
- –––, 1617a, La vita di Maria Vergine imperatrice dell'universo, Venice.
- –––, 1617b, Vite de' dodeci heroi di Christo, et de' Quatro Evangelisti, Venice.
- –––, 1624, De' gesti heroici e della vita meravigliosa della serafica Santa Caterina da Siena, Venice.
- –––, 1635, L'Enrico ovvero Bisanzio acquistato. Poema heroico, Venice.
- –––, 1645a, Essortationi alle donne et a gli altri se a loro saranno a grado di Lucretia Marinella. Parte Prima, Venice.
- –––, 1645b, Exhortations to Women and to Others if They Please, L. Benedetti (ed. and trans.), Toronto: Centre for Reformation and Renaissance Studies, 2012.
- Agrippa, H. C., 1529, Declamation on the Nobility and Preeminence of the Female Sex, A. Rabil, Jr. (ed. and trans.), Chicago: The University of Chicago Press, 1996.
- Castiglione, B., 1528, The Book of the Courtier, Singleton, C. S. (trans.), New York: Doubleday, 1959.
- Passi, G., 1599, I donneschi diffetti nuovamente formati e posti in luce da Giuseppe Passi Ravenate nell'Academia de' Signori Informi di Ravenna L'Ardito, Milan.
- Pizan, C. de, 1405, The Book of the City of Ladies, Richards, E. J. (trans.), New York: Persea Books, 1982.
- Benson, P. J., 1992, The Invention of the Renaissance Woman: The Challenge of Female Independence in the Literature and Thought of Italy and England, University Park, PA: The Pennsylvania State University Press.
- Chemello, A., 1983, “La donna, il modello, l'immaginario: Moderata Fonte e Lucrezia Marinella,” in Nel cerchio della luna: Figure di donna in alcuni testi del XVI secolo, Venice: Marsilio.
- –––, 1991, “Lucrezia Marinella,” in Le stanze ritrovate: Antologia di scrittrici venete dal quattrocento al novecento, A. Arslan, A. Chemello, and G. Pizzamiglio (eds.), Milan: Eidos, pp. 95–108.
- –––, 2000, “The rhetoric of eulogy in Marinella's La nobiltà et l'eccellenza delle donne,” in Women in Italian Renaissance Culture and Society, Panizza, L. (ed.), London: Legenda, pp. 463–77.
- Cox, V., 1995, “The Single Self: Feminist Thought and the Marriage Market in Early Modern Venice,” Renaissance Quarterly, 48 (3): 513–81.
- Ferguson, M. W., M. Quilligan, and N. J. Vickers, 1986, Rewriting the Renaissance: The Discourses of Sexual Difference in Early Modern Europe, Chicago: University of Chicago Press.
- Jordan, C., 1990, Renaissance Feminism: Literary Texts and Political Models, Ithaca: Cornell University Press.
- Kelly, J., 1984, Women, History and Theory: The Essays of Joan Kelly, Chicago: University of Chicago Press.
- Kraye, J., 1994, “The Transformation of Plato in the Renaissance,” in Platonism and the English Imagination, A. Baldwin and S. Hutton (eds.), Cambridge: Cambridge University Press.
- King, M., 1980, “Book-Lined Cells: Women and Humanism in the Early Italian Renaissance,” in Beyond Their Sex: Learned Women of the European Past, P. H. Labalme (ed.), New York and London: XXX, pp. 66–90.
- Kolsky, S., 2001, “Moderata Fonte, Lucrezia Marinella, Giuseppe Passi: An Early Seventeenth-Century Feminist Controversy,” The Modern Language Review, 96 (4): 973–89.
- Maclean, I., 1980, The Renaissance Notion of Woman: A Study in the Fortunes of Scholasticism and Medical Science in European Intellectual Life, Cambridge: Cambridge University Press.
- Malpezzi Price, P. and C. Ristaino, 2008, Lucrezia Marinella and the “Querelle des Femmes” in Seventeenth-Century Italy, Madison: Fairleigh Dickinson University Press.
- O'Neill, E., 2007, “Justifying the Inclusion of Women in our Histories of Philosophy: The Case of Marie de Gournay,” in The Blackwell Guide to Feminist Philosophy, L. M. Alcoff and E. F. Kittay (eds.), Oxford: Blackwell.
- Panizza, L. and S. Wood, 2000, A History of Women's Writing in Italy, Cambridge: Cambridge University Press.
- Ross, S. G., 2009, The Birth of Feminism: Woman as Intellect in Renaissance Italy and England, Cambridge, Mass.: Harvard University Press.
- Zancan, M. (ed.), 1983, Nel cerchio della luna: Figure di donna in alcuni testi del XVI secolo, Venice: Marsilio.
Over the past several years, the battle over fracking has brought Congressional hearings, protests and huge industry money to Washington, DC. But in recent months the topic has taken a new, more local turn in the nation's capital, as oil and gas companies push to drill in a national forest in the city's backyard and an unusual cast of characters lines up to oppose it. The fight is over access to drill for shale gas in the George Washington National Forest, and officials from the Environmental Protection Agency, Army Corps of Engineers and the National Park Service have come out in opposition, even though some of these same federal agencies have in other contexts helped to promote expanded shale gas drilling. The forest is one of the East Coast's most pristine ecosystems, home to some of its last old-growth forests. Horizontal drilling, key to shale gas extraction, has never before been permitted in the George Washington National Forest. But as the U.S. Department of Agriculture Forest Service prepares a new 15-year plan, drillers are pushing hard for the ban to be lifted, despite the industry's long record of spills, air pollution and water contamination on public lands.
An Overview of Elvitegravir
May 29, 2013
Other Names: EVG, GS-9137, JTK-303

What is an investigational drug?
An investigational drug is one that is under study and is not approved by the U.S. Food and Drug Administration (FDA) for sale in the United States. Medical research studies are conducted to evaluate the safety and effectiveness of an investigational drug. These research studies are also called clinical trials. Once an investigational drug has been proven safe and effective in clinical trials, the FDA may approve the drug for sale in the United States.

What is elvitegravir?
Elvitegravir, used as a stand-alone agent, is an investigational drug that is being studied for the treatment of HIV infection. Elvitegravir belongs to a class (group) of HIV drugs called integrase inhibitors.2 Integrase inhibitors block an HIV enzyme called integrase. (An enzyme is a protein that starts or increases the speed of a chemical reaction.) By blocking integrase, integrase inhibitors prevent HIV from multiplying and can reduce the amount of HIV in the body. Elvitegravir requires boosting with an additional drug, such as the FDA-approved HIV medicine ritonavir (brand name: Norvir) or the investigational drug cobicistat. (Boosting involves the use of a second drug to increase the effectiveness of the main [first] drug.)4 Elvitegravir is a component of the FDA-approved, fixed-dose combination (FDC) HIV medicine elvitegravir/cobicistat/emtricitabine/tenofovir disoproxil fumarate (brand name: Stribild). (FDC drugs include two or more drugs in a single dosage form, such as a capsule or tablet.)5

How are clinical trials of investigational drugs conducted?
Clinical trials are conducted in "phases." Each phase has a different purpose and helps researchers answer different questions.6 In most cases, an investigational drug must be proven safe and effective in a Phase III clinical trial to be considered for approval by the FDA for sale in the United States. Some drugs go through the FDA's accelerated approval process and are approved before a Phase III clinical trial is complete. After a drug is approved by the FDA and made available to the public, researchers track its safety in Phase IV trials to seek more information about the drug's risks, benefits, and optimal use.6

In what phase of testing is elvitegravir?
Elvitegravir is currently being studied in a Phase III clinical trial.2 A new drug application (NDA) for elvitegravir for the treatment of HIV-1 infection in adults who have already taken HIV medicines (treatment-experienced) was submitted to the FDA in June 2012.3

What have recent studies shown about elvitegravir?
In a Phase III study, ritonavir-boosted elvitegravir taken once daily was compared to the FDA-approved integrase inhibitor raltegravir (brand name: Isentress) taken twice daily in treatment-experienced, HIV-infected participants. Study participants also received additional HIV medicines as part of their optimized background regimens. (An optimized background regimen is a combination of drugs, chosen on the basis of a person's resistance test results and treatment history, that are not being studied as the investigational drug[s] in the clinical trial, but are given to help control a participant's HIV infection.) The background regimen therapy included a boosted protease inhibitor plus another HIV medicine.7,8 In this study, ritonavir-boosted elvitegravir proved as effective as raltegravir. In terms of safety, elvitegravir was comparable to raltegravir.
However, diarrhea was reported more frequently in patients taking elvitegravir than raltegravir, while elevated liver enzymes occurred more frequently in patients receiving raltegravir than elvitegravir.7,8

What side effects might elvitegravir cause?
In the Phase III study discussed under the previous question, diarrhea was reported as a side effect.8 Because elvitegravir is still being studied, information on possible side effects of the drug is not complete. As testing of elvitegravir continues, additional information on possible side effects will be gathered.

Where can I get more information about clinical trials studying elvitegravir?
More information about elvitegravir-related research studies is available from the AIDSinfo database of ClinicalTrials.gov study summaries. Click on the title of any trial in the list to see the ClinicalTrials.gov trial summary and more information about the study.

I am interested in participating in a clinical trial of elvitegravir. How can I find more information about participating in a clinical trial?
Participating in a clinical trial can provide benefits. For example, a volunteer participant can benefit from new research treatments before they are widely available. Participants also receive regular and careful medical attention from a research team that includes doctors and other health professionals. However, clinical trials may also involve risks of varying degrees, such as unpleasant, serious, or even life-threatening side effects from the treatment being studied.6 Your health care provider can help you decide whether participating in a clinical trial is right for you. For more information, visit NIH Clinical Research Trials and You.

This article was provided by AIDSinfo. Visit the AIDSinfo website to find out more about their activities and publications.
Unusual Mechanism of DNA Synthesis Could Explain Genetic Mutations, Georgia Institute of Technology Study
9/12/2013

Researchers have discovered the details of how cells repair breaks in both strands of DNA, a potentially devastating kind of DNA damage. When chromosomes experience double-strand breaks due to oxidation, ionizing radiation, replication errors and certain metabolic products, cells utilize their genetically similar chromosomes to patch the gaps via a mechanism that involves both ends of the broken molecules. To repair a broken chromosome that has lost one end, a unique configuration of the DNA replication machinery is deployed as a desperation strategy to allow cells to survive, the researchers discovered.
How To Make A Human Pyramid
For ages, pyramids have been regarded as wonders and mysteries of the world. A human pyramid, for its part, is genuinely hard to make. It is a type of formation in which individuals stand or kneel together, creating a base on which other individuals stand on their shoulders.
1- If the individuals all stand upright, the formation is called a human tower. Small groups of people are added on top, and each group is supported by the group below it. Participants who are light are placed at the top of the formation, whereas stronger individuals form the base.
2- Human pyramids are often formed by cheerleaders. Recruit at least 5 cheerleaders to create the pyramid, and include 2 adults. Make sure that the members of the pyramid are of the same gender - girls with girls and boys with boys.
3- Put the taller students at the base of the formation, as they will be the strongest part of it; they will also make the pyramid look more organized and well defined.
4- Put the 2 next-tallest individuals in the middle, above the base. The base will not usually require much help, but if any problem arises, these middle members can correct it.
5- Have the 2 adults assist the entire group; they can carefully spot the members at the top and keep a keen eye out for falls.
6- Encourage the team members and have them maintain a serious attitude, so that they can stand on the shoulders of the others in a safe and presentable manner, with the assistance of the adults.
7- Build confidence, and teach the members to keep a strong balance, which will prevent them from falling or being hurt from a high position. Teach them how to hold their bodies so that the pyramid stands in a presentable manner.
8- You must have heard that practice makes perfect. Practice a great deal before the final day, so that on the day itself you can perform accurately.
This is an example of using ALL in a UNION:

CREATE TABLE Employees
(
    EmployeeNumber nchar(9),
    FirstName nvarchar(20),
    LastName nvarchar(20),
    HourlySalary money,
    [Status] nvarchar(20) default N'Employee'
);
GO
CREATE TABLE Contractors
(
    ContractorCode nchar(7),
    Name1 nvarchar(20),
    Name2 nvarchar(20),
    Wage decimal(6, 2),
    [Type] nvarchar(20) default N'Contractor'
);
GO
INSERT INTO Employees(EmployeeNumber, FirstName, LastName, HourlySalary)
VALUES(N'2930-4708', N'John', N'Franks', 20.05),
      (N'8274-9571', N'Peter', N'Sonnens', 10.65),
      (N'6359-8079', N'Leslie', N'Aronson', 15.88);
GO
INSERT INTO Contractors(ContractorCode, Name1, Name2, Wage)
VALUES(N'350-809', N'Mary', N'Shamberg', 14.20),
      (N'286-606', N'Chryssa', N'Lurie', 20.26);
GO
SELECT * FROM Employees
UNION ALL
SELECT * FROM Contractors;
GO

This would produce:

EmployeeNumber  FirstName  LastName  HourlySalary  Status
2930-4708       John       Franks    20.05         Employee
8274-9571       Peter      Sonnens   10.65         Employee
6359-8079       Leslie     Aronson   15.88         Employee
350-809         Mary       Shamberg  14.20         Contractor
286-606         Chryssa    Lurie     20.26         Contractor

Records can also be copied from one table into another by combining INSERT INTO with a SELECT statement. Once such a statement has been executed, all records from the source table are copied to the target table. You can use the ability to copy records to get records from two or more tables and add them to another table. The formula to follow is:

INSERT INTO TableName
SELECT WhatField(s) FROM OneTable
UNION [ALL]
SELECT WhatField(s) FROM AnotherTable;

Imagine you have an existing table filled with records. Instead of copying all records from that table into another table, you may want to copy only a specific number of records. To do this, use the following formula:

INSERT TOP (Number) [INTO] TargetTable
SELECT WhatObject(s)
FROM WhatObject

After the INSERT keyword, add TOP followed by parentheses. In the parentheses, enter the desired number of records. The rest of the formula follows the techniques we have used so far. Here is an example:

USE Exercise;
GO
CREATE TABLE Interns
(
    InternNumber nchar(10),
    LastName nvarchar(20),
    FirstName nvarchar(20),
    Salary money
);
GO
INSERT INTO Interns
VALUES(N'30848', N'Politanoff', N'Jeannette', 22.04),
      (N'81094', N'Bragg', N'Salomon', 15.50),
      (N'20938', N'Verne', N'Daniel', 21.24),
      (N'11055', N'Beal', N'Sidonie', 12.85),
      (N'88813', N'Jensen', N'Nicholas', 20.46);
GO
INSERT TOP (3) INTO Employees(EmployeeNumber, FirstName, HourlySalary)
SELECT InternNumber, FirstName + N' ' + LastName, Salary
FROM Interns;
GO
SELECT * FROM Employees;
GO

Notice that only 3 records from the source table were copied. Instead of copying a fixed number of records, you can specify a portion as a percentage. In this case, use the following formula:

INSERT TOP (Number) PERCENT [INTO] TargetTable
SELECT WhatObject(s)
FROM WhatObject

The new keyword in this formula is PERCENT. You must use it to indicate that the number in the parentheses represents a percentage value. That number must be between 0.00 and 100.00, inclusive. Here is an example:

USE Exercise;
GO
INSERT INTO Interns
VALUES(N'28440', N'Avery', N'Herbert', 13.74),
      (N'60040', N'Lynnette', N'Douglas', 17.75),
      (N'25558', N'Washington', N'Jacob', 20.15),
      (N'97531', N'Colson', N'Lois', 17.05),
      (N'24680', N'Meister', N'Victoria', 11.60);
GO
INSERT TOP (30) PERCENT INTO Employees(EmployeeNumber, FirstName, HourlySalary)
SELECT InternNumber, FirstName + N' ' + LastName, Salary
FROM Interns;
GO

Notice that the source table (the Interns table) now holds ten records, but only three (30 percent of 10 = 3) records from that table were copied into the Employees table.
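A caution about TOP in an INSERT statement: the rows it copies are not guaranteed to be any particular ones, because the database engine does not arrange them in a meaningful order before picking. If you need control over which records are copied, a common technique is to put TOP, together with ORDER BY, in the SELECT statement instead. Here is a minimal sketch reusing the Interns and Employees tables from the examples above; ranking by salary is only an illustrative choice:

-- Copy the three best-paid interns rather than an arbitrary three.
-- Here TOP binds to the ORDER BY of the SELECT, so the choice of rows is predictable.
INSERT INTO Employees(EmployeeNumber, FirstName, HourlySalary)
SELECT TOP (3) InternNumber, FirstName + N' ' + LastName, Salary
FROM Interns
ORDER BY Salary DESC;
GO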
Consider the following two tables:

USE Exercise;
GO
CREATE TABLE Contractors
(
    ContractorCode nchar(10),
    Salary money,
    LastName nvarchar(20),
    FirstName nvarchar(20)
);
INSERT INTO Contractors
VALUES(N'86824', 12.84, N'Chance', N'Julie'),
      (N'84005', 9.52, N'Kaihibu', N'Ayinda'),
      (N'27084', 14.26, N'Gorman', N'Alex');
GO
CREATE TABLE Employees
(
    FirstName nvarchar(50),
    LastName nvarchar(20),
    EmplNbr nchar(10),
    HourlySalary money
);
INSERT INTO Employees
VALUES(N'Ann', N'Keans', N'22684', 20.52),
      (N'Godwin', N'Harrison', N'48157', 18.75),
      (N'Timothy', N'Journ', N'82476', 21.05),
      (N'Ralph', N'Sunny', N'15007', 15.55);
GO
SELECT ALL * FROM Contractors;
GO
SELECT ALL * FROM Employees;
GO

Moving records consists of transferring them from one table, the source, to another table, the target. Neither SQL nor Transact-SQL directly supports this operation, but it happens to be extremely easy to perform, and you have many options. Probably the easiest way consists of copying the records from the source to the target, then deleting the same records from the source. Here is an example:

USE Exercise;
GO
INSERT INTO Employees
SELECT FirstName, LastName, ContractorCode, Salary
FROM Contractors
WHERE Contractors.Salary >= 10.00;
GO
DELETE FROM Contractors
WHERE Contractors.Salary >= 10.00;
GO
SELECT ALL * FROM Contractors;
GO
SELECT ALL * FROM Employees;
GO

Imagine you have two tables created at different times, or by different people, or for different reasons. You may have two tables that hold duplicate records (the same record(s) in more than one table — for example, the same employee number and same name in two tables). You may have records in different tables where some of those records share a field's value (you may have an employee A in one table and another employee B in another table, but both have the same employee number with different names, perhaps when two companies merge). As an assignment, you may be asked to combine the records of those tables into one. Record merging consists of inserting the records of one table, referred to as the source, into another table, referred to as the target. When performing this operation, you have the option of updating the target records that have a match in the source, inserting the source records that have no match in the target, and updating or deleting the target records that have no match in the source. The primary formula to merge two tables is:

MERGE Table1 AS Target
USING Table2 AS Source
ON Table1.CommonField = Table2.CommonField
WHEN MATCHED Matched Options THEN
    Match Operation(s)
WHEN NOT MATCHED BY TARGET Not Matched By Target Options THEN
    Not Matched By Target Operation(s)
WHEN NOT MATCHED BY SOURCE Not Matched By Source Options THEN
    Not Matched By Source Operation(s)

You start with the MERGE operator followed by the table to which the records will be added. You continue with the USING operator followed by the table from which the records will be retrieved. You must specify the condition by which the records must correspond. To merge the records, the tables must have a common column. The columns don't have to have the same name, but they should be of the same type (and size). To provide this information, type ON followed by the condition. After specifying the tables and the condition by which their records correspond, you must indicate what to do if/when a record from the source table meets a record from the target table. If you do a merge using the above formula, after the merge has been performed, you would not know the results unless you ran a new query on the target table. Fortunately, you can ask the database engine to immediately display a summary of what happened. To do this, after the last THEN statement, create an OUTPUT expression.
The formula to follow is:

MERGE Table1 AS Target
USING Table2 AS Source
ON Table1.CommonField = Table2.CommonField
WHEN MATCHED Matched Options THEN
    Match Operation(s)
WHEN NOT MATCHED BY TARGET Not Matched By Target Options THEN
    Not Matched By Target Operation(s)
WHEN NOT MATCHED BY SOURCE Not Matched By Source Options THEN
    Not Matched By Source Operation(s)
OUTPUT $action, DELETED | INSERTED | from_table_name.*

To get a summary of the merging operation(s):
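add an OUTPUT expression after the last WHEN ... THEN clause, just before the statement's closing semicolon. Here is a minimal sketch that reuses the Employees and Contractors tables from above; the join between EmplNbr and ContractorCode, and the particular UPDATE and INSERT rules, are illustrative assumptions rather than part of the formula:

-- Bring the remaining contractors into the Employees table.
-- For each affected row, $action reports whether the engine
-- performed an INSERT, an UPDATE, or a DELETE.
MERGE Employees AS Target
USING Contractors AS Source
ON Target.EmplNbr = Source.ContractorCode
WHEN MATCHED THEN
    -- The contractor is already listed as an employee: refresh the salary
    UPDATE SET Target.HourlySalary = Source.Salary
WHEN NOT MATCHED BY TARGET THEN
    -- The contractor has no employee record: create one
    INSERT (FirstName, LastName, EmplNbr, HourlySalary)
    VALUES (Source.FirstName, Source.LastName, Source.ContractorCode, Source.Salary)
OUTPUT $action, INSERTED.EmplNbr, INSERTED.FirstName, INSERTED.HourlySalary;
GO

Run against the tables as they stand after the move performed earlier, this would report a single INSERT action, for the one contractor whose record had not been moved.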
Work: Emperor Quartet
II. Poco adagio, cantabile

About This Work
This is both the most popular and most notorious of Haydn's string quartets, all because of the second movement, a beautiful hymn that was later misappropriated by the twentieth century's most evil regime. Back in the late eighteenth century, Napoleon was posing a serious threat to the Hapsburg empire; after his armies raided Styria in 1796, Haydn was driven to a burst of nationalism. He set patriotic words by L.L. Haschka as a so-called Kaiserlied, and had an immediate hit on his hands. He determined to write all the "popularized" arrangements himself, including one for string quartet. This became the slow movement of the third quartet of his opus 76 set. The moving, noble melody has proved too good to pass up. Later composers, including Czerny and Smetana, incorporated it into works of their own. And a few decades after the Austrian empire finally collapsed, Germany's Nazis commandeered the melody for the song "Deutschland über alles." This limited the quartet's popularity among the Allies during and immediately after World War II, but the taint soon washed away. The whole quartet seems to be a patriotic effort once you realize that the first bar of the opening Allegro is a musical anagram. Its notes correspond to the first letters of the words "Gott erhalte Franz den Kaiser" (with "Caesar" apparently filling in for "Kaiser"); this is the opening line of the Kaiserlied on which the second movement is based. This is hardly obvious unless you examine the score, for all you hear is a bright, bouncy C-major tune that the first violin soon appropriates with an obsessive dotted rhythm. In the late eighteenth century, by the way, that rhythm was symbolically associated with royal occasions. All the movement's principal thematic matter is derived from this small bit of music. The development section includes a characteristically Haydnesque surprise: an E-major Hungarian scene with a gypsy-like accompaniment of strong accents on weak beats. This was Haydn's nod to the Hungarian aristocrats who employed him and commissioned these quartets; they were footing a big part of the bill for the emperor's war against Napoleon. The second movement, Poco adagio-cantabile, begins with an especially sweet statement of the Emperor Hymn, then puts it through four variations. The first is a quiet but ornate elaboration for the first violin, while the second fiddle plays the theme in its original form. The next variation shifts the theme down to the cello, with the viola and second violin providing harmony and the first violin offering counterpoint. The viola finally gets its own statement of the theme in the third variation while the top and bottom instruments wind around it. Finally comes a richly harmonized version of the theme with more elaborate inner voices than in the beginning, but nothing as complex as what has come in between. The Minuet is a good-humored drawing-room dance, marked especially by a slightly mocking downward-drifting figure in the first violin. The trio is a cautious-sounding variation of the Minuet's main theme. The Presto finale thrusts us into what one analyst has described as a C-minor battle scene: Franz vs. Napoleon. The movement does begin with three loud, jagged chords and eventually has the first violin fire off a barrage of eighth notes, but there's little explicitly militaristic about the music.
After this material is intensely developed, the main themes return in a C major version that certainly sounds optimistic, though not necessarily triumphant. -- James Reel
Ever since the earliest Bulgarian manuscript books (preserved until today), we can trace the tradition of honoring the two saint brothers - St Cyril (on February 14th) and St Methodius (on April 6th) - as authors of the Bulgarian alphabet. The earliest church celebration of both saints together, on May 11th each year, goes back to the 12th century. During the Revival period those church celebrations on May 11th developed into a large secular celebration of Bulgarian knowledge and culture. It was Naiden Gerov who organized the first celebration of May 24th, 1851 in Plovdiv, as the Day of St Cyril and St Methodius, to honor all achievements of Bulgarian language and culture. A lot of people today who were named after the two saints celebrate that day as their name day. The name Cyril means "lordly" in ancient Greek, while Methodius means "the one following a method, or doing a study." The two brothers, Sts. Cyril and Methodius (or Constantine and Methodius) - known as the Apostles of the Slavs - were born in Thessalonica, in 827 and 826 respectively. Though belonging to a senatorial family, they renounced secular honours and became priests. They were living in a monastery on the Bosphorus when the Khazars sent to Constantinople for a Christian teacher. Cyril was selected and was accompanied by his brother. They learned the Khazar language and converted many of the people. Soon after their Khazar mission came the invitation of the Moravian prince Rostislav, who sought missionaries able to preach in the Slavonic vernacular - the people's language - and thereby check German influence in Moravia: the Moravians wished for a teacher who could instruct them and conduct Divine service in the Slavonic tongue. On account of their acquaintance with that language, Cyril and Methodius were chosen for the work. In preparation for it Cyril invented a new alphabet and, with the help of Methodius, translated the Gospels and the necessary liturgical books into that new South Slavonic language. They went to Moravia in 863 and laboured over the translations for four and a half years. Their immediate success aroused the hostility of the German rulers and ecclesiastics. Cyril died in Rome on 4 February 869. Methodius went to Constantinople, and with the assistance of several priests he completed the translation of the Bible and ecclesiastical books into Slavonic. The enemies of Methodius did not cease to antagonize him. His health was worn out by the long struggle, and he died on 6 April 885, recommending as his successor Gorazd, a Slav who had been his disciple. Methodius' influence in Moravia was wiped out after his death, but it was carried to Bulgaria, Serbia, and Russia, where the Southern Slavonic language of Cyril and Methodius is still the liturgical language of the churches, and all of them use, with variations, that same alphabet as the basis of their own languages.
The research, which was reported in the Proceedings of the National Academy of Sciences, provides direct support for the "Out of Africa" hypothesis. The skull is the first direct skeletal evidence of human occupation of the region so early. It is significant in that it fits with genetic and archaeological evidence that has also indicated human settlement as early as 60,000 years ago. The skull was found in a limestone cave known as Tam Pa Ling (Cave of the Monkeys), although, as Medical Daily reports, it is thought that the remains were washed into the cave. As Sci-News reports, Dr Laura Shackelford, a co-author of the study, said:

No other artifacts have yet been found with the skull, suggesting that the cave was not a dwelling or burial site. It is more likely that the person died outside and the body washed into the cave sometime later.

Dr Kira Westaway, who led the dating of the skull, said:

Despite abundant limestone caves, there has been uncertainty about the arrival of modern humans in Southeast Asia because of a lack of dateable evidence.

Prior to the latest finding, the earliest human remains in the region were from a skull found at Sarawak's Niah Cave, dated at 40,000 years ago. The finding of the current skull fills a gap in the fossil record and, in doing so, substantially supports the "Out of Africa" hypothesis by demonstrating its predictive capacity. Researchers are currently attempting to extract DNA from the remains in order to identify how closely it matches that of people living today.
USE COMMON SENSE ON THE ICE

Ice fishing, ice skating, snowmobiling and other activities may find us wondering if it's safe to venture onto a frozen pond or lake. Ice doesn't always form in a uniform thickness over a water body, so people can sometimes feel that the ice is safe in one place when it's actually very thin nearby. That false sense of security can have deadly consequences. Here are a few guidelines for ice safety that could save your life.

Before venturing onto the ice, the first thing you should know is: IS THE ICE SAFE? Never assume the ice - on any water body - is thick enough to support your weight. Check it! Start at the shoreline and, using an auger, spud or axe, make test holes at intervals as you proceed. The American Pulpwood Association has developed a table for judging the relative safety of ice on lakes and streams. This is just a guide; use your own good judgement before going out on any ice. Avoid areas of moving water, including where streams enter the lake, and around spillways and dams.

ICE THICKNESS TABLE

Ice thickness (inches) | Permissible load
2 | one person on foot
3 | group in single file
7.5 | one car (2 tons)
8 | light truck (2.5 tons)
10 | truck (3.5 tons)
12 | heavy truck (7-8 tons)

Note: This guide is based on clear, blue, hard ice on non-running waters. Slush ice is about 50 percent weaker - so, for example, 8 inches of slush ice supports only what 4 inches of clear ice would. Clear, blue ice over running water is about 20 percent weaker. Many ice anglers do not like to fish on less than five inches of ice, and do not like to drive a pick-up truck on less than 15 inches of ice. Remember, this is just a guide - use common sense before venturing out onto the lake or river!

Here are a few ice safety tips that ice fishermen and winter sports enthusiasts should keep in mind before venturing out on a frozen lake.

- Go out with a buddy and keep a good distance apart as you walk out. If one of you goes in, the other can call for help.
- Leave information about your plans with someone - where you intend to fish and when you plan to return.
- Wear a life jacket. Life vests or float coats provide excellent flotation and protection from hypothermia (loss of body temperature).
- Check for known thin-ice areas with a local bait shop. Using the above guide, test the thickness yourself with an ice chisel, an auger, a spud, an axe or even a cordless drill with a 6-inch or longer bit, making test holes at intervals as you proceed.
- If you must drive a vehicle, be prepared to leave it in a hurry - keep windows down, unbuckle your seat belt and have a simple emergency plan of action that you have discussed with your passengers.
- Don't drive across ice at night or when it is snowing. Reduced visibility increases your chances of driving into an open or weak ice area.
- Don't "overdrive" your headlights. At even 30 miles per hour, it can take a much longer distance to stop on ice than the distance your headlights illuminate. Many fatal snowmobile through-the-ice accidents occur because the machine was travelling too fast for the operator to stop when the headlamp illuminated the hole in the ice.
- Wear a life vest under your winter gear, or one of the new flotation snowmobile suits. It's also a good idea to carry a pair of ice picks, which may be purchased from most well-stocked sporting goods stores. It's amazing how difficult it can be to pull yourself back onto the surface of unbroken but wet and slippery ice with a snowmobile suit weighted down with 60 lbs of water. The ice picks really help in pulling yourself back onto solid ice.

CAUTION: Do NOT wear a flotation device when travelling across the ice in an enclosed vehicle!
WHAT IF YOU FALL IN?

Having taken all of these precautions, you're now going to try your luck at fishing. Walking out on the ice, you hear a crack and break through. Suddenly you find yourself immersed up to your neck in water so cold it takes your breath away. If you think that's no big deal, try holding your hands in a bucket of ice water for more than a couple of minutes. If you can do it without extreme pain, you are tougher than the average person.

Try not to panic. Instead, remain calm and turn toward the direction you came from. Place your hands and arms on the unbroken surface of the ice (here's where the ice picks come in handy). Work forward onto the ice by kicking your feet. If the ice breaks, maintain your position and slide forward again. Once you are lying on the ice, don't stand. Instead, roll away from the hole; that spreads out your weight until you are on solid ice. This sounds much easier than it is to do. The best advice is: don't put yourself in needless danger by venturing out too soon or too late in the season. No angler, no matter how avid, would want to die for a crappie.

WHAT SHOULD YOU DO IF A COMPANION FALLS THROUGH THIN ICE?

- Keep calm and think out a solution.
- Don't run up to the hole. You'll probably break through, and then there will be two victims.
- Use some item on shore to throw or extend to the victim to pull them out of the water, such as a tree limb, rope, jumper cables or skis, or push your ice fishing sled ahead of you.
- If you can't rescue the victim immediately, call 911. It's amazing how many people carry cellphones.
- Get medical assistance for the victim.

Carry a set of hand spikes. Ice picks work well; you can make these at home using large nails, or you can purchase good ones at stores that sell fishing supplies. Screwdrivers will also work. Carry a 50' safety rope that can be thrown to someone who has gone through the ice. If you have a cell phone, bring it along; it could prove to be vital for your party or somebody else.

FROSTBITE AND HYPOTHERMIA

Frostbite occurs when the skin and subcutaneous tissue begin freezing. It can be easily remedied if detected in the early stages, or severe enough to require amputation of the affected areas. Symptoms become apparent when the skin turns waxy white to yellow and is hard and cold to the touch. Initial pain turns into numbness. Toes, fingers, nose, ears and cheeks are the most vulnerable. If you suspect frostbite, warm the affected area by pressing it against a warm part of the body or immersing it in lukewarm water (104-110 deg. F). Excessively hot water will damage the fragile tissue. Rubbing a frostbitten area in the more advanced stages will also cause damage. Avoid tobacco products, because nicotine restricts vital blood circulation. Finally, seek medical attention as soon as possible.

Despite all the precautions that anglers take, a few go through the ice each year, and all ice anglers should know something about rescue techniques and first aid for hypothermia. Drowning is one immediate danger, but usually the victims are able to keep their heads above water by clinging to the edge of the broken ice or to floating gear. Most fatalities are caused by hypothermia, which occurs when the body begins to lose heat faster than it can produce it.
The symptoms become apparent in stages and include: uncontrollable shivering, slow or slurred speech, incoherence, fumbling hands, stumbling, apparent exhaustion, drowsiness that causes loss of the use of limbs, disorientation, unconsciousness and, finally, heart failure.

In both of the above cases, if a person shows any signs of overexposure to cold or wet and windy weather, take the following measures, even if the person claims to be in no difficulty. Often the person will not realize the seriousness of the situation. If your party is out on the ice with no shelter, seek out a shanty for heat and protection from the elements. Get the person into dry clothing and, with a warm (not hot) water bottle of some sort, concentrate heat on the torso. Supply warm drinks. Keep the head low and the feet up to get warm blood circulating to the head. Insulate the victim's trunk, head and neck from additional heat loss.

Under no circumstances should the victim be given alcoholic beverages, which diminish shivering and thus reduce heat production. Alcohol also causes dilation of surface blood vessels, causing more heat loss. Avoid pain relievers; they will slow the body's metabolism. Tobacco products will restrict vital blood circulation.

People subjected to cold water may seem fine after being rescued but can suffer a potentially fatal condition called "after-drop". That may occur when cold blood that is pooled in the body's extremities starts to circulate again as the victim starts to rewarm. Summon a vehicle to get to shore and arrange medical help. Call for professional medical assistance immediately; this is when a cell phone could come in handy. Hypothermia and frostbite should only be treated at a hospital.

Fortunately, rescue and first aid are very seldom necessary. However, since the sport is constantly attracting newcomers, and since even veterans are subject to occasional human error, it's best that anglers be prepared for any unexpected situation and learn emergency measures even though they may never have to apply them.
There was a lot of work to do a few generations ago, but the work wasn't regulated by a clock.

With the growth of industrial capitalism during the post-Civil War years, more and more Americans were feeling pressure to be "on time." (The phrase itself was a colloquialism which did not appear until the 1870s.) The corporate drive for efficiency … reinforced the spreading requirement that people regulate their lives by the clock. … And though there was much resistance, especially among workers from a preindustrial background, the triumph of clock time seemed assured by 1890, when the time clock was invented.

Clocks had been around for centuries, but no one punched a time clock until 1890. People had regular schedules, some more so than others, but in general their schedules were not rigid or synchronized. Increasing numbers of people now enjoy flexible work schedules. This is not something new but a return to something old. Industrialism made synchronization necessary; post-industrial work is partially returning to pre-industrial norms.

From "No Place of Grace: Antimodernism and the Transformation of American Culture, 1880-1920" by T. J. Jackson Lears.
Amalgam and Mercury - What Science Shows

The health effects of elemental mercury from amalgam fillings are numerous, and many studies and much scientific evidence support the view that mercury causes widespread and adverse health effects. Amalgam fillings have been well documented to be the number one source of mercury exposure for most people, with exposure levels often exceeding government health guidelines and the levels that cause adverse health effects.

Dental amalgam contains about 50% mercury, mixed with silver and other metals such as tin, copper, nickel and palladium. The average filling has about half a gram of mercury and leaks mercury vapour continuously due to its low vapour pressure, along with loss due to the galvanic, battery-like action of mercury with dissimilar metals in the mouth, resulting in significant exposure for most people with amalgam fillings. Mercury vapour is transmitted rapidly throughout the body, easily crosses cell membranes and, like organic methyl mercury, has significant toxic effects at much lower levels of exposure than other inorganic mercury forms. According to the U.S. EPA, mercury is among the top 3 toxic substances adversely affecting large numbers of people, with amalgam being the number one source of exposure in most cases.

Why Mercury is Such a Concern

Toxic and Health Effects of Mercury

Mercury is the most toxic of the toxic metals. Mercury vapour is carried by the blood to cells in all organs of the body where it:
- is cytotoxic (kills cells)
- penetrates and damages the blood-brain barrier, resulting in accumulation of mercury and other toxic substances in the brain; it also accumulates in the motor function areas of the brain and CNS
- is neurotoxic (kills brain and nerve cells); damages brain cells and nerve cells, generates high levels of reactive oxygen species (ROS) and oxidative stress, depletes glutathione and thiols, causing increased neurotoxicity from interactions of ROS with glutamate and dopamine; kills or inhibits production of brain cells; inhibits production of neurotransmitters and blocks neurotransmitter amino acids, affecting phenylalanine, serotonin, tyrosine and tryptophan transport to neurons
- is immunotoxic, damaging and inhibiting immune T-cell, B-cell and neutrophil function, and induces DNA antibodies and autoimmune disease
- is nephrotoxic (toxic to kidneys)
- is an endocrine system-disrupting chemical: it accumulates in the pituitary gland and damages or inhibits pituitary gland hormonal functions at very low levels, affects adrenal gland function and thyroid gland function, and disrupts enzyme production processes at very low levels of exposure
- is transmitted rapidly via the placenta to the foetus
- is a reproductive and developmental toxin, damaging DNA and inhibiting DNA and RNA synthesis; it damages sperm, lowers sperm counts and reduces sperm motility; it causes menstrual disturbances; it reduces the blood's ability to transport oxygen and essential nutrients to the foetus; it causes reduced iodine uptake, hypothyroidism and learning deficits; it also causes learning disabilities and impairment, reduction in IQ, infertility and birth defects
- causes cardiovascular damage and disease, increased white cell count, decreased oxyhemoglobin level, high blood pressure, tachycardia and increased risk of acute myocardial infarction (heart attack)
- causes immune system damage resulting in allergies, asthma, lupus, chronic fatigue syndrome (CFS), multiple chemical sensitivities (MCS) and neutrophil functional impairment
- causes interruption of energy function systems and blocks enzymes needed to convert porphyrins to adenosine triphosphate (ATP), causing progressive porphyrinuria, resulting in low energy, digestive problems, and porphyrins in urine
- inhibits the immune system, facilitating increased damage by bacterial, viral and fungal infections and increased antibiotic resistance
- causes significant destruction of stomach and intestine epithelial cells, resulting in damage to the stomach lining, and adversely alters bacterial populations in the intestines, causing leaky gut syndrome, accumulation of Helicobacter pylori (a suspected major factor in stomach ulcers and stomach cancer) and Candida albicans, as well as poor nutrient absorption
- forms strong bonds with, and modifies, the -SH groups of proteins, causing mitochondrial release of calcium, as well as altering the molecular function of amino acids and damaging enzymatic processes, resulting in improper cysteine regulation, inhibited glucose transfer, damaged sulphur oxidation processes and reduced glutathione availability (necessary for detoxification)

Mercury disrupts the endocrine system in animals and people, disrupting function of the pituitary gland, thyroid gland, enzyme production processes, and many hormonal functions at very low levels of exposure. Mercury (especially mercury vapour) rapidly crosses the blood-brain barrier and is stored preferentially in the pituitary gland, hypothalamus, and occipital cortex in direct proportion to the number and extent of dental amalgam surfaces/fillings. The pituitary gland controls many endocrine system functions and secretes hormones that control most bodily processes, including the immune system, reproductive systems and metabolism. Mercury blocks thyroid hormone production by occupying iodine binding sites and inhibiting hormone action, even when the measured thyroid level appears to be in the proper range. The thyroid and hypothalamus regulate body temperature and many metabolic processes, including enzymatic processes that, when inhibited, result in higher dental decay. Mercury damage thus commonly results in poor control of body temperature, in addition to many problems caused by hormonal imbalances, such as depression. Mercury also damages the blood-brain barrier and facilitates penetration of the brain by other toxic metals and substances.

Mercury causes biochemical damage at the cellular level. This results in DNA damage; inhibition of DNA and RNA synthesis; alteration of protein structure; alteration of the transport of calcium; inhibition of transport of glucose and other essential nutrients; inhibition of enzyme function; initiation of free radical formation; depletion of cellular glutathione (necessary for detoxification processes); inhibition of the glutathione peroxidase enzyme; endothelial cell damage; abnormal migration of neurons in the cerebral cortex; and immune system damage. Oxidative stress and reactive oxygen species (ROS) have been implicated as major factors in neurological disorders including stroke, Parkinson's disease and Alzheimer's. Only a few micrograms of mercury severely disturb cellular function and inhibit nerve growth.

Exposure to mercury results in metallo-protein compounds that have effects on gene expression. Some of the processes affected by such metallo-protein control of genes include cellular respiration, metabolism, enzymatic processes, metal-specific homeostasis, and adrenal stress response systems.
Metallo-protein formation also appears to have a relation to autoimmune reactions in significant numbers of people. Mercury binding with proteins causes blockage of sulphur oxidation processes and of enzymatic processes involving vitamins B6 and B12, has effects on the cytochrome-C energy processes, and has adverse effects on cellular mineral levels of calcium, magnesium, zinc, and lithium.

Mercury accumulates in the pituitary gland, ovaries, testes, and prostate gland. It has oestrogenic effects and affects the reproductive system, resulting in lowered sperm counts, defective sperm cells, damaged DNA, aberrant chromosome numbers rather than the normal 46, chromosome breaks, and lowered testosterone levels in males. It also causes menstrual disturbances and infertility in women, and increased neurological problems related to lowered levels of the neurotransmitters dopamine, serotonin, and norepinephrine. Some of the effects of depression are related to mercury reducing the level of the posterior pituitary hormone oxytocin. The pituitary glands of a group of dentists had 800 times more mercury than controls. This may explain why dentists have much higher levels of emotional problems, depression, suicide, etc. Low levels of pituitary function are associated with depression and suicidal thoughts. Amalgam fillings and nickel and gold crowns are major factors in reducing pituitary function. Supplemental oxytocin extract has been found to alleviate many of these mood problems, along with replacement of the metals in the mouth. The normalisation of pituitary function also often normalises menstrual cycle problems and endometriosis, and increases fertility.

An average amalgam filling contains over ½ gram of mercury, and the average adult has at least 5 grams of mercury in fillings. Mercury in solid form is not stable, having low vapour pressure and being subject to galvanic action with other metals in an oral environment, so that within 10 years up to half the mercury has been found to have been transferred to the body of the host. Elemental mercury vapour is more rapidly transmitted throughout the body than most other forms of mercury and has much more toxic effects on the CNS and other parts of the body than inorganic mercury, due to its much greater capacity to cross cell membranes. Mercury vapour rapidly crosses the blood-brain barrier and the placenta of pregnant women.

Some genetic types are susceptible to mercury-induced autoimmunity and some are resistant and thus much less affected. Studies found that mercury causes or accelerates various systemic conditions in a strain-dependent manner, and that lower levels of exposure adversely affect some strains but not others, including the inducing of autoimmunity. One genetic difference found in animals and humans is cellular retention differences for metals, related to the ability to excrete mercury. For example, individuals with genetic blood factor type APOE-4 do not excrete mercury readily and bioaccumulate it, resulting in susceptibility to chronic autoimmune conditions such as Alzheimer's and Parkinson's as early as age 40, whereas those with type APOE-2 readily excrete mercury and are less susceptible.

Long-term occupational exposure to low levels of mercury can induce slight cognitive deficits, fatigue, decreased stress tolerance, etc. Higher levels have been found to cause more serious neurological problems.
Occupational exposure studies have found mercury impairs the body's ability to kill Candida albicans in workers whose mercury exposure levels are within current safety limits. Another group of workers had long-lasting increases in humoral immunological stimulation of IgG, IgA, and IgM levels. Other studies found that workers exposed at high levels at least 20 years previously demonstrated significantly decreased strength, decreased coordination, increased tremor, decreased sensation, polyneuropathy, etc. Elemental mercury can affect both motor and sensory peripheral nerve conduction. Thirty percent of dentists with more than average exposure were found to have neuropathies and visuographic dysfunction. Another study found that many of the symptoms and signs of chronic candidiasis, multiple chemical sensitivity and chronic fatigue syndromes are identical to those of chronic mercurialism; the symptoms remitted after removal of amalgam combined with appropriate supplementation, giving evidence to implicate amalgam as the only underlying aetiological factor. Other studies found that mercury at levels below the current occupational safety limit causes adverse effects on mood, personality, and memory, with effects on memory at very low exposure levels.

Systemic Mercury Intake from Amalgam Fillings

The tolerable daily exposure level for mercury, according to a report for Health Canada, is 0.014 micrograms per kilogram of body weight (ug/kg), or approximately 1 ug/day for an average adult. The U.S. EPA health standard for elemental mercury exposure is 0.3 micrograms per cubic metre of air; for the average adult breathing 20 m3 of air per day, this amounts to an exposure of 6 ug/day. The EPA health guideline for methyl mercury is 0.1 ug/kg body weight per day, or 7 ug for the average adult. (A short code sketch at the end of this subsection works through these guideline figures.)

Mercury in the presence of other metals in the oral environment undergoes galvanic action, causing movement out of amalgam and into the oral mucosa and saliva. Mercury in solid form is not stable, due to its low vapour pressure, and evaporates continuously from amalgam fillings in the mouth, being transferred over a period of time to the host. The daily total exposure of mercury from fillings is from 3 to 1000 micrograms per day, with the average exposure being above 10 micrograms per day and the average uptake into the body over 5 ug/day.

A large study was carried out at the University of Tübingen Health Clinic in which the level of mercury in the saliva of 20,000 persons with amalgam fillings was measured. The level of mercury in unstimulated saliva was found to average 11.6 ug Hg/L, with the average after chewing being 3 times this level. Several subjects were found to have mercury levels over 1100 ug/L, 1% had unstimulated levels over 200 ug/L, and 10% had unstimulated saliva mercury levels of over 100 ug/L. The level of mercury in saliva has been found to be proportional to the number of amalgam fillings, and generally was higher for those with more fillings. 16% of all of those tested with 4 amalgam fillings had daily exposure from their amalgam fillings of over 17 ug per day, and exposure was even higher for those with more than 4 fillings. There is a significant correlation between exposure levels and the number of amalgam surfaces, and exposure generally increases as the number of fillings increases. Variations in mercury exposure will occur due to the composition of the amalgam; whether a person chews gum or drinks hot liquids; bruxism (grinding teeth); and oral environmental factors such as acidity.
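The guideline figures quoted above follow from simple arithmetic, shown in the sketch below. This is an illustrative calculation only: the 70 kg reference body weight is an assumption of mine (the text does not state one), the 20 m3/day breathing rate is taken from the text, and the variable names are my own.

```python
# Illustrative check of the exposure guidelines quoted above.
# Assumption (not from the text): a 70 kg reference adult.

BODY_WEIGHT_KG = 70           # assumed reference adult
AIR_BREATHED_M3_PER_DAY = 20  # stated in the text

# Health Canada tolerable daily exposure: 0.014 ug per kg body weight per day.
health_canada_ug_day = 0.014 * BODY_WEIGHT_KG          # ~0.98, i.e. ~1 ug/day

# U.S. EPA elemental mercury standard: 0.3 ug per cubic metre of air.
epa_elemental_ug_day = 0.3 * AIR_BREATHED_M3_PER_DAY   # 6 ug/day

# EPA methyl mercury guideline: 0.1 ug per kg body weight per day.
epa_methyl_ug_day = 0.1 * BODY_WEIGHT_KG               # 7 ug/day

print(f"Health Canada TDI:      {health_canada_ug_day:.1f} ug/day")
print(f"EPA elemental vapour:   {epa_elemental_ug_day:.1f} ug/day")
print(f"EPA methyl mercury RfD: {epa_methyl_ug_day:.1f} ug/day")
```

Against these thresholds, the text's claim that average amalgam-related uptake exceeds several of the guidelines is just a comparison of the 5-10+ ug/day exposure figures to the 1-7 ug/day limits computed here.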
Chewing gum or drinking hot liquids can result in 10 to 100 times the normal level of mercury exposure from amalgams during that period. The Tübingen study did not assess the significant exposure route of intraoral air via the lungs. One study that looked at this estimated a daily average burden of 20 ug of ionised mercury from amalgam fillings absorbed through the lungs, while a Norwegian study found the average level in oral air to be 0.8 ug/m3. Another study, at a Swedish university, measured intraoral air mercury levels from fillings of from 20 to 125 ug per day for persons with 18 to 82 filling surfaces. Another study found similar results, and some individuals have been found to have intraoral air mercury levels above 400 ug/m3. Most of those whose intraoral air mercury levels were measured exceeded government health guidelines for workplace exposure. The studies also determined that the number of fillings is the most important factor related to mercury level, with the age of the fillings being much less significant. As can be seen, most people with several fillings have daily exposure exceeding the Health Canada and U.S. EPA health guidelines for mercury.

The main exposure paths for mercury from amalgam fillings are: absorption by the lungs from intraoral air; vapour absorbed by saliva or swallowed; amalgam particles swallowed; and membrane, olfactory, venous, and neural path transfer of mercury absorbed by the oral mucosa, gums, etc. At least 80% of mercury vapour reaching the lungs is absorbed and enters the blood, from where it is taken to all other parts of the body. Elemental mercury swallowed in saliva can be absorbed in the digestive tract by the blood or bound in sulphydryl compounds and excreted through the faeces. The primary detoxification/excretion pathway for mercury absorbed by the body is as mercury-glutathione compounds through the liver/bile loop to faeces, but some mercury is also excreted through the kidneys in urine and in sweat. The range of mercury excreted in urine per day by those with amalgams is usually less than 15 ug, but some patients are much higher. Autopsy studies of those with chronic exposure to mercury show mercury also bioaccumulates in the brain/CNS, liver, kidneys, heart, and oral mucosa, with the half-life in the brain being over 20 years.

Elemental mercury vapour is transmitted throughout the body via the blood and readily enters cells and crosses the blood-brain barrier and the placenta of pregnant women. It crosses cellular membranes into major organs such as the heart. While mercury vapour and methyl mercury readily cross cell membranes and the blood-brain barrier, once in cells they form inorganic mercury, which does not readily cross cell membranes or the blood-brain barrier and is responsible for the majority of toxic effects. Thus inorganic mercury in the brain has a very long half-life, as it cannot easily pass back out. (A toy kinetic model at the end of this passage illustrates how such a long half-life produces slow accumulation under chronic intake.)

The average amalgam filling has approximately 0.5 grams (500,000 ug) of mercury. As much as 50% of the mercury in fillings has been found to have vaporised after 5 years, and 80% by 20 years. Mercury vapour from amalgam is the single largest source of systemic mercury intake for persons with amalgam fillings, ranging from 50 to 90% of total exposure and averaging about 80% of total systemic intake. After amalgam filling replacement, levels of mercury in the blood, urine, and faeces temporarily increase for a few days, but levels usually decline in blood and urine within 6 months by 60 to 85% of the original levels, and levels in saliva and faeces usually decline by 80 to 95%.
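The claims above about a 20-year brain half-life and slow accumulation under chronic low-level intake can be made concrete with a standard single-compartment, first-order kinetic model. The sketch below is a toy illustration under assumptions of my own - a constant daily uptake and simple first-order elimination - not a validated toxicokinetic model; the 1 ug/day uptake is only an example in the range the text quotes.

```python
import math

# Toy single-compartment model: burden' = uptake - k * burden,
# with first-order elimination rate k = ln(2) / half_life.
# Assumptions (mine, for illustration): constant 1 ug/day uptake to the
# compartment, and the 20-year half-life the text quotes for the brain.

HALF_LIFE_DAYS = 20 * 365
UPTAKE_UG_PER_DAY = 1.0

k = math.log(2) / HALF_LIFE_DAYS  # elimination rate per day

def burden_after(days: float) -> float:
    """Accumulated burden (ug) after `days` of constant uptake, from zero."""
    return (UPTAKE_UG_PER_DAY / k) * (1 - math.exp(-k * days))

# Steady state = uptake * half_life / ln(2), about 10,500 ug here.
steady_state = UPTAKE_UG_PER_DAY / k
for years in (1, 5, 20, 40):
    b = burden_after(years * 365)
    print(f"{years:>2} yr: {b:8.0f} ug ({b / steady_state:4.0%} of steady state)")
```

The point is qualitative: with a multi-decade half-life, even a tiny daily uptake keeps accumulating for decades (reaching only half of steady state after one half-life), and clearance after exposure ends is equally slow.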
Having dissimilar metals in the teeth (e.g. gold and mercury) causes galvanic action, electrical currents, and much higher mercury vapour levels and tissue levels. Average mercury levels in gum tissue near amalgam fillings are about 200 ppm, the result of mercury flow into the mucous membrane because of galvanic currents. Average mercury levels are often 1000 ppm near a gold crown over an amalgam filling, due to the higher currents when gold is in contact with amalgam. These levels are among the highest ever measured in the tissues of living organisms, exceeding the highest levels found in chronically exposed chloralkali workers, in those who died at Minamata, or in animals that died of mercury poisoning. German oral surgeons have found levels in the jaw bone under large amalgam fillings, or under gold crowns over amalgam, as high as 5760 ppm, with an average of 800 ppm. These levels are far above the FDA/EPA action level prohibiting the use of food with over 1 ppm mercury, and enormously above the U.S. Dept. of Health/EPA drinking water limit for mercury of 2 parts per billion (ppb). Amalgam manufacturers, government health agencies such as Health Canada, dental school texts, and dental materials researchers advise against having amalgam in the mouth together with other metals such as gold, but many dentists ignore the warnings.

Studies have shown mercury travels from amalgam into the tooth dentine, root tips and the gums, with levels in root tips as high as 41 ppm. Mercury and silver from fillings can be seen in the tissues as amalgam "tattoos", which have been found to accumulate in the oral mucosa as granules along collagen bundles, blood vessels, nerve sheaths, elastic fibres, membranes, muscle fibres, and the ducts of minor salivary glands. Dark granules are also present intracellularly within cells of the immune system such as macrophages, giant cells, endothelial cells and fibroblasts. In most cases there is a chronic inflammatory response to the metals, usually in the form of a foreign body granuloma.

The component mix in amalgams has also been found to be an important factor in mercury vapour emissions. The level of mercury and copper released from high-copper amalgam is as much as 50 times that of low-copper amalgams. Studies have consistently found that modern high-copper amalgams have a high negative current and a much greater release of mercury vapour than conventional silver amalgams, and are more cytotoxic. Clinics have found the increased toxicity and higher exposures to be factors in an increased incidence of chronic degenerative diseases. While the high-copper amalgams were developed to be less corrosive and less prone to marginal fractures than conventional silver amalgams, they have been found to be unstable when subjected to wear, polishing, chewing or brushing, as droplets of mercury form on the surface of the amalgam. This has also been found to be a factor in the much higher release of mercury vapour by the modern amalgams. Amalgam also releases significant amounts of silver, tin, and copper, which likewise have toxic effects, with organic tin compounds formed in the body being even more neurotoxic than mercury.

The number of amalgam surfaces has a statistically significant correlation to the mercury levels of: blood plasma, urine, oral air, saliva, oral mucosa, faeces, the pituitary gland, brain cortex, renal cortex, the liver, and the motor function areas of the brain and CNS.
Teeth are living tissue and have massive communication with the rest of the body via blood, lymph, and nerves. Mercury vapour (and bacteria in teeth) thus have paths to the rest of the body. Some mercury entering the nasal passages is absorbed directly into the olfactory lobe and brain without coming from blood. Mercury is also transported along the axons of nerve fibres. Mercury has a long half-life in the body - over 20 years in the brain - and chronic low-level intake results in a slow accumulation in body tissues.

Methyl mercury is more toxic to some body processes than inorganic mercury. Mercury from amalgam is methylated by bacteria, galvanic electric currents and Candida albicans in the mouth and intestines. Methyl mercury is 10 times more potent in causing genetic damage than any other known chemical.

The level of mercury in the tissue of the foetus, newborn and young children is directly proportional to the number of amalgam surfaces in the mother's mouth. The level of mercury in umbilical cord blood and the placenta is higher than that in the mother's blood. The saliva and faeces of children with amalgams have approximately 10 times the level of mercury of children without. A group of German children with amalgam fillings had urine mercury levels 4 times those of a control group without amalgams, and in a Norwegian group there was a significant correlation between urine mercury level and number of amalgam fillings. The level of mercury in maternal hair was significantly correlated with the level of mercury in nursing infants. The foetal mercury content after maternal inhalation of mercury vapour was found to be higher than in the mother. Mercury from amalgam in the blood of pregnant women crosses the placenta and appears in amniotic fluid and foetal blood, liver, and pituitary gland soon after placement of an amalgam.

Dental amalgams are the main source of mercury in breast milk. Milk increases the bioavailability of mercury, and mercury is often stored in breast milk and the foetus at much higher levels than in the mother's tissues. The level of mercury in breast milk was found to be significantly correlated with the number of amalgam fillings, with milk from mothers with 7 or more fillings having levels approximately 10 times those of amalgam-free mothers. The pituitary gland takes up the highest level of mercury in the foetus, which can affect development of the endocrine system.

Immune System Effects and Autoimmune Disease

Many thousands of people with symptoms of mercury toxicity have been found in tests to have high levels of mercury, and of the many thousands who have had amalgam fillings removed, most have had health problems and symptoms alleviated or greatly improved. From clinical experience, some of the symptoms of mercury sensitivity/mercury poisoning include chronic fatigue, dizziness, frequent urination, insomnia, headaches, chronic skin problems, metallic taste, gastrointestinal problems, asthma, post-nasal drip, ringing ears, chest pain, hyperventilation, diabetes, a spacey feeling, brain fog, memory loss, problems with temperature regulation, mood and behavioural problems, thyroid problems, adrenal fatigue, hormonal imbalances, reduced liver function, depression, immune and autoimmune diseases, cardiovascular problems and many types of neurological problems. Amalgam results in chronic rather than acute mercury exposure and in accumulation in body organs over time, so most health effects are of a chronic rather than acute nature.
Mercury vapour exposure at very low levels adversely affects the immune system. From animal studies it has been determined that mercury damages T-cells by damaging the mitochondria, causes destruction and loss of cell membrane integrity, inhibits the ability to secrete interleukins, causes production of superoxides and nitric oxide, and inactivates or inhibits enzyme systems involving the sulphydryl protein groups. Mercury caused adverse effects on both neutrophil and macrophage function, and T-cells were susceptible to mercury-induced cellular death. Interferon synthesis was reduced in a concentration-dependent manner by either mercury or methyl mercury, as were other immune functions. Low doses also induce aggregation of cell surface proteins and dramatic tyrosine phosphorylation of cellular proteins, related to asthma, allergic diseases such as eczema, lupus and autoimmunity. Both mercuric and methyl mercury chlorides caused a dose-dependent reduction in immune B-cell production. Mercury also inhibited B-cell and T-cell RNA and DNA synthesis. Workers occupationally exposed to mercury at levels within guidelines have been found to have impairment of the lytic activity of neutrophils and a reduced ability of neutrophils to kill invaders such as Candida. Low doses also induced autoimmunity in some species. Another effect found is a significant increase in the average blood white cell count; the increased white count usually normalises after amalgam removal. Mercury also blocks the immune function of magnesium and zinc.

Large numbers of people undergoing amalgam removal have clinically demonstrated significant improvements in immune system function and recovery, with significant improvement in immune system problems in most cases surveyed. Mercury from amalgam interferes with production of the cytokines that activate macrophages and neutrophils, disabling early control of viruses and leading to enhanced infection. Body mercury burden was found to play a role in resistant infections such as Chlamydia trachomatis and herpes-family viral infections, and it was found antibiotics could only effectively treat many cases after removal of the body mercury burden. Similar results have been found for treatment of cancer. Mercury, by its effect of weakening the immune system, contributes to increased chronic disease and cancer. Exposure to mercury vapour causes decreased zinc and methionine availability, depressed rates of methylation, and increased free radicals - all factors in increased susceptibility to cancer. Amalgam fillings have also been found to be positively associated with mouth cancer.

More Effects of Mercury

Mercury interrupts the cytochrome C oxidase system, blocking the ATP energy function. These effects, along with reductions in the red blood cells' oxygen-carrying capability, often result in fatigue and reduced energy levels, as well as neurological effects. Toxic/allergic reactions to metals such as mercury often result in lichen planus lesions in the oral mucosa or gums and play a role in the pathogenesis of periodontal disease. Removal of amalgam fillings usually results in cure of such lesions. A high percentage of patients with oral mucosal problems, along with other autoimmune problems such as CFS, have significant immune reactions to mercury, palladium, gold, and nickel. Mercury has been found to impair conversion of the thyroid T4 hormone to the active T3 form, as well as causing the autoimmune thyroiditis common to such patients.
In general, immune activation from toxins such as heavy metals can cause changes in the brain, fatigue, and severe psychological symptoms such as profound fatigue, musculoskeletal pain, sleep disturbances, and gastrointestinal and neurological problems, as are seen in CFS, fibromyalgia, and autoimmune thyroiditis. Patients with other systemic neurological or immune symptoms such as arthritis, myalgia, eczema, CFS, MS, lupus, ALS, diabetes, epilepsy, Hashimoto's thyroiditis, scleroderma, etc. often recover or improve significantly after amalgam replacement. Mercury inhibits production of insulin and is a factor in diabetes and hypoglycemia, with significant reductions in insulin need and normalisation of blood sugar after replacement of amalgam fillings.

Mercury exposure through fillings appears to be a major factor in chronic fatigue syndrome (CFS) through its effects on ATP and the immune system, its promotion of the growth of Candida albicans in the body, and the methylation of inorganic mercury by Candida to the extremely toxic methyl mercury form, which, like mercury vapour, crosses the blood-brain barrier and also damages and weakens the immune system. Both inorganic and methyl mercury have been shown in animal studies to induce autoimmune reactions and disease in susceptible types through effects on immune system T-cells.

Medical Study Findings of Health Problems Related to Amalgam Fillings

Neurological problems are among the most common and serious, and include memory loss, moodiness, depression, anger and sudden bursts of anger/rage, self-effacement, suicidal thoughts, lack of strength/force to resolve doubts or resist obsessions or compulsions, etc. Many studies of patients with major neurological diseases have found evidence that amalgam fillings may play a major role in the development of conditions such as depression, schizophrenia and memory problems, and of more serious neurological diseases such as MS, ALS, Parkinson's, and Alzheimer's. Mercury causes decreased lithium levels, which is a factor in neurological diseases such as depression and Alzheimer's; lithium protects brain cells against excess glutamate and calcium, and low levels cause abnormal brain cell balance and neurological disturbances.

Medical texts on neurology point out that chronic mercurialism is often not recognized by diagnosticians and is misdiagnosed as dementia, neurosis, functional psychosis or just "nerves". Early manifestations are likely to be subtle and diagnosis difficult: insomnia, nervousness, mild tremor, impaired judgment and coordination, decreased mental efficiency, emotional lability, headache, fatigue, loss of sexual drive, depression, etc. are often mistakenly ascribed to psychogenic causes. Very high levels of mercury are found in brain memory areas, such as the cerebral cortex and hippocampus, of patients with diseases with memory-related symptoms.

Some conditions found to be related to such toxic exposure of the foetus include autism, schizophrenia, ADD, dyslexia, eczema, etc. Prenatal and early postnatal exposure to mercury affects the level of nerve growth factor in the brain and causes brain damage and imbalances in development of the brain. Several studies found that mercury causes learning disabilities and impairment, and reduction in IQ. Mercury has an effect on the foetal nervous system at levels far below those considered toxic in adults, and background levels of mercury in mothers correlate significantly with the incidence of birth defects and stillbirths.
Mercury alters lymphocyte reactivity, affects glutamate in the CNS, and induces CFS-type symptoms including profound tiredness, musculoskeletal pain, sleep disturbances, gastrointestinal and neurological problems and fibromyalgia. Numerous studies have found that long-term chronic low doses of mercury cause neurological, memory, behaviour, sleep, and mood problems. Neurological effects have been documented at very low levels of exposure (urine Hg < 4 ug/L) - levels commonly received by those with amalgam fillings.

Mercury binds to haemoglobin oxygen-binding sites in the red blood cells, thus reducing oxygen-carrying capacity, and adversely affects the vascular response to norepinephrine and potassium. Mercury's effect on pituitary gland vasopressin is a factor in high blood pressure. Amalgam fillings have been found to be related to higher blood pressure, haemoglobin irregularities, tachycardia, chest pains, fatigue and reduced energy levels. Mercury also accumulates in the heart and damages myocardial tissues and heart valves.

Interruption of the ATP energy chemistry results in high levels of porphyrins in the urine. Mercury, lead, and other toxins all cause high levels of porphyrins, with a pattern that indicates the likely source and the extent of damage. The average level of porphyrins for those with amalgams is over 3 times that of those without, and is over 20 times normal for some severely poisoned people. The FDA has approved a test measuring porphyrins as a test for mercury poisoning.

A study funded by the Adolf Coors Foundation found that toxicity such as mercury toxicity is a significant cause of abnormal cholesterol levels - cholesterol increasing as a protective measure against metal toxicity - and that cholesterol levels usually normalize after amalgam replacement. The study also found that mercury has major adverse effects on red and white blood cells, oxygen-carrying capacity, and porphyrin levels, with most cases seeing a significant increase in oxyhemoglobin level and a reduction in porphyrin levels, and with 100% experiencing improved energy.

Patch tests for hypersensitivity to mercury have found from 2% to 44% testing positive - much higher for groups with more amalgam fillings and longer exposure than for those with less. In studies of medical and dental students, those testing positive had a significantly higher average number of amalgam fillings and higher levels of mercury in urine than those not testing positive. Of the dental students with 10 or more fillings at least 5 years old, 44% tested allergic.

People with amalgam fillings have an increased number of intestinal microorganisms resistant to mercury and to many standard antibiotics. Mercury is extremely toxic and kills many beneficial bacteria, but some forms of bacteria can alter their form to avoid being killed, making the bacteria mercury-resistant. This transformation also increases antibiotic resistance and results in adversely altered populations of bacteria in the intestines. Recent studies have found that drug-resistant strains of the bacteria causing ear infections, sinusitis, tuberculosis, and pneumonia have more than doubled since 1996. After reducing mercury burden, antibiotic resistance declines. The alteration of the intestinal bacterial populations necessary for proper digestion, along with other damage and the membrane permeability effects of mercury, are major factors in creating "leaky gut" conditions, with poor digestion and absorption of nutrients and toxic, incompletely digested compounds passing into the bloodstream.
Mercury from amalgam binds to the -SH (sulphydryl) groups of amino acids and proteins, resulting in inactivation of sulphur and blocking of enzyme function, producing sulphur metabolites with extreme toxicity that the body is unable to properly detoxify, along with a deficiency in the sulphates required for many body functions. Sulphur is essential in enzymes, hormones, nerve tissue, and red blood cells, and these take part in almost every enzymatic process in the body. Blocked or inhibited sulphur oxidation at the cellular level has been found in most people with many of the chronic degenerative diseases, including Parkinson's, Alzheimer's, ALS, lupus, rheumatoid arthritis, MCS, autism, etc. Mercury also blocks the metabolic action of manganese and the entry of calcium ions into cytoplasm. Mercury from amalgam thus has the potential to disturb all metabolic processes.

A large study of 20,000 subjects at a German university found a significant relation between the number of amalgam fillings and periodontal problems, neurological problems, and gastrointestinal problems. Allergies and hair loss were found to be 2-3 times as high in a group with a large number of amalgam fillings compared to controls. Higher levels of hormone disturbances, immune disturbances, infertility, and recurrent fungal infections were also found in the amalgam group. Clinics have also found alleviation of hair loss/alopecia after amalgam removal and detox. Another study, in Japan, found significantly higher levels of mercury in grey hair than in dark hair.

Mercury accumulates in the kidneys, with increasing levels over time; one study found levels ranging from 21 to 810 ppb. Mercury exposure has been shown to adversely affect kidney function in occupational and animal studies, and also in those with more than the average number of amalgam fillings. Inorganic mercury exposure causes cytotoxicity by generating extremely high levels of hydrogen peroxide, which is normally quenched by pyruvate and catalase. The government's toxic level for mercury in urine is 30 mcg/L, but adverse effects have been seen at lower levels, and low levels in urine often mean high mercury retention and chronic toxicity problems.

Amalgam fillings produce electrical currents which increase mercury vapour release and may have other harmful effects. These currents are measured in microamps, with some measured at over 4 microamps. The central nervous system operates on signals in the range of nanoamps, which is 1000 times less than a microamp. Negatively charged fillings or crowns push electrons into the oral cavity, since saliva is a good electrolyte, and cause higher mercury vapour losses. Patients with autoimmune conditions like MS, epilepsy, depression, etc. are often found to have many high-negative-current fillings. The Huggins total dental revision (TDR) protocol calls for the teeth with the highest negative charge to be replaced first. Other protocols for amalgam removal are available from international dental associations like IAOMT. Some studies have also found persons with chronic exposure to electromagnetic fields (EMF) to have higher levels of mercury exposure and excretion.

Since mercury is documented from studies of humans and animals to be a reproductive and developmental toxin, it can reduce reproductive function and cause birth defects and developmental problems.
Clinical evidence indicates that amalgam fillings lead to hormone imbalances that can reduce fertility, cause decreased sperm volume and motility, increase sperm abnormalities and spontaneous abortions, increase uterine fibroids/endometritis, and decrease fertility in animals and in humans. In studies of women having miscarriages or birth defects, the male partners were found typically to have low sperm counts and significantly more visually abnormal sperm. Studies have found that mercury accumulates in the ovaries and testes, inhibits enzymes necessary for sperm production, affects DNA in sperm, causes aberrant numbers of chromosomes in cells, causes chromosome breaks, etc. - all of which can cause infertility, spontaneous abortions, or birth defects. Researchers advise that pregnant women should not be exposed to mercury vapour levels above government health standards, and many governments have bans or restrictions on the use of amalgam in women of child-bearing age.

Mercury is an endocrine system-disrupting chemical in animals and people, disrupting the function of the pituitary gland, hypothalamus, thyroid gland, enzyme production processes and many hormonal functions at very low levels of exposure. The pituitary gland controls many of the body's endocrine system functions and secretes hormones that control most bodily processes, including the immune and reproductive systems. The hypothalamus regulates body temperature and many metabolic processes. Mercury damage thus commonly results in poor bodily temperature control, in addition to the many problems caused by hormonal imbalances. Mercury also damages the blood-brain barrier and facilitates penetration of the brain by other toxic metals and substances. Low levels of mercuric chloride also inhibit ATPase activity in the thyroid, with methyl mercury inhibiting ATP function at even lower levels. These effects commonly result in a reduction in thyroid hormone production and an accumulation of radiation in the thyroid. Toxic metal exposure can play a major role in thyroid cancer aetiology. No evidence has been found of any safe level of mercury in the body that does not kill cells and harm body processes (WHO).

Many studies of patients with major neurological or degenerative diseases have found evidence that amalgam fillings may play a major role in the development of conditions such as Alzheimer's, ALS, Parkinson's, ADD, etc. Mercury exposure causes high levels of oxidative stress/reactive oxygen species, which have been found to be a major factor in neurological disease. Mercury and quinones form conjugates with thiol compounds such as glutathione and cysteine and cause depletion of glutathione, which is necessary to mitigate reactive damage. One study found higher than average levels of mercury in the blood, urine, and hair of Parkinson's disease patients. Mercury has been found to accumulate preferentially in the primary motor-function-related areas, such as the brain stem, cerebellum and anterior horn motor neurons, which enervate the skeletal muscles; there is considerable indication this may be a factor in ALS development. MS patients have been found to have much higher levels of mercury in cerebrospinal fluid compared to controls. Large German studies, including studies at German universities, have found that MS patients usually have high levels of mercury body burden, with one study finding levels 300% higher than controls. Most recovered after mercury detox, with some requiring additional treatment for viruses and intestinal dysbiosis.
Studies have found mercury-related mental effects to be indistinguishable from those of MS. Mercury and methyl mercury impair or inhibit all cell functions and deplete calcium stores; this can be a major factor in bone loss of calcium (osteoporosis). Mercury (like copper) also accumulates in areas of the eyes such as the endothelial layer of the cornea and the macula, and is a major factor in chronic and degenerative eye conditions such as iritis, astigmatism, myopia, black streaks on the retina, cataracts, macula degeneration, etc. Most of these conditions have been found to improve after amalgam replacement.

Results of Removal of Amalgam Fillings

For the week following amalgam removal, body mercury levels increase significantly, depending on the protective measures taken, but within 2 weeks levels fall significantly. Chronic conditions can worsen temporarily, but usually improve if adequate precautions are taken to reduce exposure during removal. Removal of amalgam fillings results in a significant reduction in body burden and body waste product load of mercury; the total reduction in mercury levels in blood and urine is often over 80% within a few months.

There are extensively documented cases (many thousands) where removal of amalgam fillings led to cure or significant improvement of serious health problems such as: periodontal diseases, oral keratosis (pre-cancer), immune system/autoimmune problems, allergies, asthma, chronic headaches/migraines, multiple chemical sensitivities, epilepsy, blood conditions, eczema, Crohn's disease, stomach problems, lupus, dizziness/vertigo, arthritis, MS, ALS, Parkinson's/muscle tremor, Alzheimer's, muscular/joint pain/fibromyalgia, infertility, depression, schizophrenia, insomnia, anger, anxiety and mental confusion, susceptibility to infections, antibiotic-resistant infection, endometriosis, chronic fatigue syndrome, tachycardia and heart problems, memory disorders, cancer, neuropathy/paraesthesia, alopecia/hair loss, sinus problems, tinnitus, chronic eye conditions (inflammation/iritis/astigmatism/myopia/cataracts/macula degeneration), vision disturbances, psoriasis, skin conditions, urinary/prostate problems, hearing loss, candida, PMS, diabetes, etc.

With the use of chemical or natural chelation to reduce the accumulated mercury body burden, in addition to amalgam replacement, reports show over 80% of those with chronic health problems were cured or significantly improved. Those using dentists with special equipment and training in protecting the patient reported much higher success rates than those whose dentists had standard training and equipment. Interviews with a large population of Swedish patients who had amalgams removed due to health problems found that virtually all reported significant health improvements and that the improvements were permanent (study period 17 years).

Tests for Mercury Level or Toxicity

Faeces is the major path of excretion of mercury from the body, and many researchers consider faeces to be the most reliable indicator of daily exposure level to mercury. The saliva test is another good test for daily mercury exposure; it is done commonly in Europe and covers one of the largest sources of mercury exposure. There is only a weak correlation between blood or urine mercury levels and body burden or the level in a target organ. Mercury vapour passes through the blood rapidly (half-life in blood is 10 seconds) and accumulates in other parts of the body such as the brain, kidneys, liver, thyroid gland, pituitary gland, etc.
Thus a blood test measures mostly recent exposure. As damage occurs to the kidneys over time, mercury is less efficiently eliminated, so urine tests are not reliable for body burden after long-term exposure. Some researchers suggest hair offers a better indicator of mercury body burden than blood or urine, though it is still not totally reliable and may be a better indicator for organic mercury than for inorganic. A new test approved by the FDA for diagnosing damage caused by toxic metals like mercury is the fractionated porphyrin test, which measures the amount of damage as well as the likely source. Provocation challenge tests after use of chemical chelators such as DMPS or DMSA are also effective at measuring body burden, but DMPS can be dangerous to some people - especially those still having amalgam fillings or those allergic to sulphur drugs or sulphites. Another chelator, EDTA, used for clogged arteries, forms toxic compounds with mercury and can damage brain function. Experienced doctors have also found additional zinc to be useful when chelating mercury, as well as for counteracting mercury's oxidative damage: zinc induces metallothionein, which protects against oxidative damage and increases protective enzyme activities and glutathione, which tend to suppress mercury toxicity.

Note: during initial exposure to mercury, the body marshals the immune system and other measures to try to deal with the challenge, so many test indicators will be high. After prolonged exposure, the body and immune system inevitably lose the battle and the measures to combat the challenge decrease, so some test indicator scores decline. Chronic conditions are common during this phase. Also, high mercury exposure with a low hair or urine mercury level usually indicates the body is retaining mercury.

Health Effects from Dental Personnel Exposure to Mercury Vapour

It is well documented that dentists and dental personnel who work with amalgam are chronically exposed to mercury vapour, which accumulates in their bodies to much higher levels than in most non-occupationally exposed people. The adverse health effects of this exposure, including subtle neurological effects, have also been well documented; they affect most dentists and dental assistants, with measurable effects among those at even the lowest levels of exposure. Mercury levels of dental personnel average at least 2 times those of controls. Sweden, which has banned the use of mercury in fillings, is the country with the most exposure and health-effects studies regarding amalgam; urine levels in dental professionals from Swedish and European studies ranged from 0.8 to 30.1 ug/L, with study averages from 3.7 to 6.2 ug/L. Mercury excretion levels were found to have a positive correlation with the number of amalgams placed or replaced per week, the number of amalgams polished each week, and the number of fillings in the dentist's own mouth.

Autopsy studies have found high body accumulation of mercury in dental workers, with levels in the pituitary gland and thyroid over 10 times controls and levels in the renal cortex 7 times controls. Autopsies of former dental staff found levels of mercury in the pituitary gland averaging as high as 4,040 ppb, and much higher levels in the brain occipital cortex, renal cortex and thyroid. In general, dental assistants and women dental workers showed higher levels of mercury than male dentists.
The use of high-speed drills in the removal or replacement of amalgam has been found to create a high volume of mercury vapour and respirable particles, and dental masks only filter out about 40% of such particles, producing high levels of exposure to patient and dental staff. Use of water spray, high-velocity evacuation and a rubber dam reduces exposure to patient and dental staff significantly. In addition to these measures, researchers advise that all dental staff should wear face masks and a separate air supply, and that patients also be supplied with outside air. Use of such measures alone has been found to reduce exposure to patient and staff by approximately 90%.

Dentists were found to score significantly worse than a comparable control group on neurobehavioral tests of motor speed, visual scanning and visuomotor coordination, concentration, verbal memory and visual memory, and on emotional/mood tests. Several dentists have been documented to suffer from mercury poisoning due to chronic mercury exposure, showing chronic fatigue due to immune system overload and activation. Many studies have found this occurs frequently in dentists and dental staff, along with other related symptoms like inability to concentrate, chronic muscular pain, burnout, etc. A survey of over 60,000 U.S. dentists and dental assistants with chronic exposure to mercury vapour and anaesthetics found increased health problems compared to controls, including significantly higher rates of liver, kidney, and neurological diseases. Other studies reviewed found increased rates of brain cancer and allergies. Either the dentist or the dental hygienist, as well as the patient, gets high doses of mercury vapour when teeth are polished or cleaned with ultrasonic scalers on amalgam surfaces. Many homes of dentists have been found to have high levels of mercury contamination due to dentists bringing mercury home on shoes and clothes.

Scientists and Government Panels or Bodies That Have Found Amalgam Fillings to be Unsafe

A World Health Organization scientific panel concluded that there is no safe level of mercury exposure and found no threshold level below which effects were not measurable. In 1987 the Federal Dept. of Health in Germany issued an advisory warning against the use of dental amalgam in pregnant women. Most major countries other than the U.S. have similar or more extensive bans or health warnings regarding the use of amalgam, including Canada, Great Britain, France, Austria, Norway, Sweden, Japan, Australia and New Zealand. A Swedish National Mercury Amalgam Review Panel and a similar Norwegian panel found that "from a toxicological point of view, mercury is too toxic to use as a filling material"; both countries have indicated plans to ban or phase out the use of amalgam. A major amalgam manufacturer, Caulk Inc., advises that amalgam should not be used as a base for crowns or for retrograde root fillings, as is commonly done in some countries. A Swedish medical panel unanimously recommended to the government "discontinuing the use of amalgam as a dental material". The U.S. EPA has found that removed amalgam fillings are hazardous and must be sealed airtight and disposed of as hazardous waste, and most European countries require controls on dental waste amalgam emissions to sewers or air. The Legislature of the State of California passed a law that requires all dentists in the state to discuss the safety of dental materials with all patients and to post the following warning about the use of amalgam on the wall of their office:
"This office uses amalgam filling materials which contain and expose you to a chemical known to the State of California to cause birth defects and other reproductive harm". The use of mercury amalgams has been banned for children and women of child-bearing age or put on a schedule for phase out by several European countries. The use of amalgam is declining in Europe and Germany's largest producer of amalgam has ceased production. The director of the U.S. Federal program overseeing dental safety advises against using mercury amalgam for new fillings.
Ogbono Nut | Wild Mango – Irvingia gabonensis

The Ogbono nut tree, Irvingia gabonensis, also known as Wild Mango, Bush Mango and African Mango, is a small to large tree, up to 40 meters tall, native to the tropical humid forests of Africa and South-east Asia. This fruit-bearing tree is particularly prized for its fat- and protein-rich nuts, known as Ogbono, Odika, Dika and Etima nuts. The tree commonly has a straight trunk up to 100 cm in diameter, with smooth, grey to yellow-grey external bark and yellow, fibrous inner bark, and it forms a dense, spherical crown. The trees yield a hard wood, valuable in construction and for making ships' decks. Irvingia gabonensis leaves are alternate, simple and entire, 4-8 cm long and 2-4 cm wide, somewhat leathery and pinnately veined. Flowers are borne on axillary panicles up to 9 cm long; they are bisexual, small, 3-4 mm long, and yellowish white in color.

Ogbono Nut Fruit

The fruit is an edible, mango-like, ellipsoidal to cylindrical drupe, at times almost spherical, smooth and green when mature. The pulp is bright orange, soft, juicy and sweet, with a few light fibers and a single ligneous nut. The fruit is generally consumed fresh. It can also be used to prepare juice, jelly, jam and wine, and to produce a black dye for textiles. The subtly aromatic Ogbono nuts are generally dried in the sun for storage, and are sold whole or in powder form. They can be ground into a paste known variously as Dika bread or Gabon chocolate. Their high gum content enables Ogbono nuts to be used as a thickening agent for dishes such as Ogbono soup. The nuts can also be pressed for vegetable oil.

Ogbono Nut Propagation Methods

The tree is propagated by seed. Growth in young plants is very slow at the start, but it becomes fairly fast later on. Irvingia gabonensis favors moist lowland tropical forests below 1000 m altitude, with yearly rainfall of 1500-3000 mm and mean yearly temperatures of 25-32°C. Irvingia gabonensis is a member of the family Irvingiaceae, genus Irvingia. No diseases or pests have been recorded.
Certain pesticides, like Sevin Dust, are more hazardous to the environment than the CO2 or nitrogen that is already abundant in our environment. Also, some fertilizer companies are already reducing the amount of nitrogen in their formulations because it evaporates very quickly. Are you aware that the earth's atmosphere is 78% nitrogen, making it the most abundant gas on our planet? Are you aware that CO2 is used by plants for photosynthesis? Regardless, the best way to get nitrogen into the soil is through plant decomposition, by tilling decaying plant material back into the earth. Additives and pesticides are very expensive and hurt the farmer's bottom line, so farmers actually use less than you think. Also, did you know that the actual volume of pesticides used on organic farms is not recorded by the government? Yes, they do use pesticides! Rotenone is a common pesticide used on organic farms; it attacks the mitochondria of cells and has been linked to Parkinson's disease! Sevin Dust, like many other pesticides, is far more dangerous to our environment because it kills pollinating insects such as bees, and yet it is widely used by homeowners. The bee population is quickly diminishing, and in many cases it is the homeowners' fault! Rather than writing my own article in this comment section... let me just say that I prefer purchasing local produce. As for my garden, I prefer natural methods of gardening, making my own compost (adding things like egg shells is great for adding calcium to tomato plants), and using beneficial insects such as lady bugs. Always do your own research. I learned a lot from Virginia Tech Master Gardeners. Here is another place that you may want to start... - 5/27/2014 7:34:03 AM
A solid silica structure that runs along each side of the raphe in genera such as Frustulia. Round, F.E., Crawford, R.M. and Mann, D.G. (1990). The Diatoms. Biology and Morphology of the Genera. Cambridge University Press, Cambridge, 747 pp. Image Credit: Carrie Graef In Frustulia, the raphe lies between the two branches of the longitudinal rib. Image Credit: Sarah Spaulding Species in Playaensis possess a longitudinal rib on either side of the raphe.
Alse Young (1647): First American Execution for Witchcraft

It was on this date, May 26, 1647, that America carried out its first execution for the crime of witchcraft. Alse Young was arrested, tried for this capital offense in Windsor, Connecticut, and hanged at Meeting House Square in Hartford, on what is now the site of the Old State House.* There is no further record of Young's trial or the specifics of the charge, only that Alse Young was a woman, as 80% of those executed for witchcraft were, and that her execution anticipated the 1692 Salem witch trials by some 45 years. And she was followed by many women, some of whose names we know: Mary Johnson in 1648; Rebecca (and Nathaniel) Greensmith and Mary Barnes in 1663, during an outbreak of the witchcraft delusion in Hartford. During the Salem witch trials, 30 years later, 200 were accused and 19 executed: 15 of them women.

Cotton Mather (1663-1728), a Puritan minister and influential public figure of the era, stressed in his Wonders of the Invisible World (1693), written in reference to the Salem witchcraft, how urgent the fight against witches and deviltry really was:

The New Englanders are a People of God settled in those, which were once the Devil's Territories; … The Devil thus irritated, immediately try'd all sorts of methods to overturn this poor plantation: …. I believe, that never were more satanical devices used for the unsettling of any people under the sun, than what have been employ'd for the extirpation of the vine which God has here planted, … But, all those attempts of hell, have hitherto been abortive, …. Wherefore the Devil is now making one attempt more upon us; …. We have been advised by some credible Christians yet alive, that a malefactor, accused of witchcraft as well as murder, and executed in this place more than forty years ago, did then give notice of, an horrible plot against the country by witchcraft, … which if it were not seasonably discovered, would probably blow up, and pull down all the churches in the country. And we have now with horror seen the discovery of such witchcraft!

The Bible was very direct about how to treat witches: Exodus (22:18) said, "Thou shalt not suffer a witch to live." And Leviticus (20:27) said, "A man also or woman that hath a familiar spirit, or that is a wizard, shall surely be put to death: they shall stone them with stones: their blood (shall be) upon them." There is no doubt that theologians reasoned, "If the All-wise God punishes his creatures with tortures infinite in cruelty and duration, why should not his ministers, as far as they can, imitate him?" (A.D. White, Warfare of Science with Theology) Consequently, torture was a favorite method, not for finding the truth of witchcraft, because witchcraft never contained any, but for quite effectively extracting confessions, because people will say anything to get the pain to stop. Witchcraft jurisprudence itself anticipated the anti-communist purges of the 1950s in the US: to confess to witchcraft was to earn a life sentence in jail; to deny the charge often resulted in a death sentence.

The crime of witchcraft disappeared from the list of capital crimes in Connecticut after 1715 and was not thereafter prosecuted. The relatives of those executed in the Salem witch trials received compensation about then. But the stain of execution for the imaginary crime of witchcraft remains.
*A journal of then-Massachusetts Governor John Winthrop states that “One of Windsor was hanged.” The second town clerk of Windsor, Matthew Grant, confirms the execution with his 26 May 1647 diary entry, “Alse Young was hanged.” Originally published May 2003 by Ronald Bruce Meyer.
The Pill's Precursors

Eve's Herbs. John M. Riddle. 341 pp. Harvard University Press, 1997. $39.95.

Although this exhaustive accounting of abortion and women's contraception from ancient to modern times opens with a discussion of Roe v. Wade, the bulk of the book is a history of herbs that women used to control fertility and of the forces that have limited herb use. The author seeks to convince the reader that these herbs were effective as contraceptives and sometimes in terminating pregnancy. The evidence he cites is not limited to testimonials common to herbal remedies but also includes scientific experiments on animals.

There is ample evidence from the 5th century b.c. that fertility-controlling herbs were well known to the Greeks and their physicians. These herbs included pennyroyal, used now as a tea. Most modern women are unaware that pennyroyal can cause abortion. The author notes that an Egyptian papyrus from 1500 b.c. contained a recipe for preventing pregnancy. He also uses European population statistics from 400 b.c. through 1970 and fertility rates of different, far-flung communities to show how herbal methods effectively controlled fertility, and he documents the growing effort to squelch the knowledge, availability and use of these herbs during the 18th and 19th centuries.

The authorities targeted midwives for their use of herbal concoctions. They were portrayed as witches and persecuted, stripped of their usual status as wise women and primary caregivers to pregnant women. Trials became increasingly frequent as the 20th century approached. Before Roe v. Wade, securing an abortion in the United States primarily involved an illegal procedure. Afterward, legal abortions became readily available at centers staffed by licensed practitioners. Also, in Western countries, birth-control pills, abortion pills, surgical sterilization and barrier techniques combined to lower fertility rates below the level needed to replace current populations.

The author wisely avoids becoming a strong advocate for either side of several controversial issues surrounding women's rights, abortion rights and the right to life. Readers expecting passion and controversy may feel disappointed that this book does not concentrate on the ethical or moral merit of herb use for abortion and contraception but rather on their extensive use by women over most of recorded history.—Allen P. Killam, Obstetrics and Gynecology, Duke University Medical Center
Making Large Lecture Courses Interactive

Faculty members will find below a variety of different kinds of sources, from journal articles and books to brief opinion pieces from the Chronicle of Higher Education and videotapes of forums on this topic. The subject matter includes social science research on how students learn (or don't learn) in large lecture courses, chapters on lecturing from teaching handbooks, the reflections and reports of lecturers who have experimented successfully with small group or discussion strategies in large classes, and polemics both for and against the lecture as a viable teaching format. Developed by Ken Bain.

"What Students Think About and Do in College Lecture Classes." Teaching Learning Issues 53 (1984).

Brooks, David W. "Alternatives to Traditional Lecturing." Journal of Chemical Education 61.

Cashin, William E. Improving Lectures. Newsletter no. 14. Manhattan, Kansas: Center for Faculty Education and Development, Kansas State University, 1985.

Clayson, S. Hollis. Is the Lecture a Dead Teaching Form? Evanston, IL: Searle Center for Teaching Excellence, Northwestern University, April 12, 1994.

Dubrow, Heather, and James Wilkinson. "The Theory and Practice of Lectures." The Art and Craft of Teaching. Ed. M. M. Gullette. Cambridge: Harvard-Danforth Center, 1982. 25-37.

Dunn, Joe P. "Reflections of a Recovering Lectureholic." National Teaching & Learning Forum 3.

Frederick, Peter J. "The Lively Lecture - 8 Variations." College Teaching 34 (1986): 43-50.

Gleason, Maryellen. "Better Communication in Large Courses." College Teaching 34 (1986).

Gullette, Margaret Morganroth. "Leading Discussion in a Lecture Course: Some Maxims and an Exhortation." Change (1992): 32-39.

Hosley, Catherine J. "How To Get Reactions From Students In Big, Impersonal Lecture Classes." Chronicle of Higher Education, 1987, 15.

Lewis, Karron G. Taming the Pedagogical Monster: A Handbook for Large Class Instructors. Austin: Center for Teaching Effectiveness, University of Texas at Austin, 1990.

Lowman, Joseph. "Selecting and Organizing Material for Class Presentations." Mastering the Techniques of Teaching. San Francisco: Jossey-Bass, Inc., 1984. 96-118.

Lowman, Joseph. Mastering the Techniques of Teaching. 1st ed. San Francisco: Jossey-Bass, Inc.

McKeachie, Wilbert J. "Lecturing." Teaching Tips. Lexington, MA: D.C. Heath and Company.

Meredith, Gerald M. "Two Rating Indicators of Excellence in Teaching in Lecture-Format Courses." Psychological Reports 56 (1985): 52-54.

Meredith, Gerald M. "Intimacy as a Variable in Lecture-Format Courses." Psychological Reports 57 (1985): 484-486.

Merrill Library & Learning Resources Program. "The Large Class." Instructional Improvement 8.

Monk, G. Stephen. "Student Engagement and Teaching Power in Large Classes." Learning in Groups. Eds. C. Bouton and R. Y. Garth. Vol. 14. San Francisco: Jossey-Bass, Inc., 1983. 7-12.

Palmer, Stacy E. "The Art of Lecturing: A Few Simple Ideas Can Help Teachers Improve Their Skills." Chronicle of Higher Education, 1983, 19-20.

Rosenkoetter, John S. "Teaching Psychology to Large Classes: Videotapes, PSI, and Lecturing." Teaching of Psychology 11 (1984): 85-87.

Silverstein, Brett. "Teaching a Large Lecture Course in Psychology: Turning Defeat into Victory." Teaching of Psychology 9 (1982): 150-155.

Stanton, Harry E. "Small Group Teaching in the Lecture Situation." Improving College and University Teaching 26 (1978): 69-70.

Weaver, Richard L. II. "Effective Lecturing Techniques: Alternatives to Classroom Boredom." Teacher Educator 16 (1980): 2-8.

Weaver, Richard L. II. "The Small Group in Large Classes." Educational Forum 48 (1983): 65-73.

Whooley, John. "Improving the Lecture." Improving College and University Teaching 22 (1974).

Wick, John W. "Making a Big Lecture Section a Good Course." Improving College and University Teaching 22 (1974): 249-252.

Zarefsky, David. Lecturing as Communication. Evanston, IL: Searle Center for Teaching Excellence, Northwestern University, 1994.
Take on this challenging weather word scramble! Your child must use his logic and his knowledge of weather systems to figure out each word. Put your child's memory and geographic knowledge to the test with this challenging exercise, where she'll list the 50 states in alphabetical order. Your fifth grader will learn impressive words like "tenacious" and "strident" in this vocabulary builder worksheet. Learn some new words with this worksheet. Reinforce known and new words with your child using this vocabulary worksheet. Learn words like "recitation," "incorporate," and more. Words like "antagonist" and "transient" are confusing for adults and kids alike. Fifth graders will learn these words and more in this vocabulary worksheet. This vocabulary list includes words like "appreciate" and "petulant." Build your child's vocabulary with this vocabulary list. Help your fifth grader learn words like "famished" and "industrious" with this vocabulary worksheet. Learn to identify and use these words and more. Teach your fifth grader words like "ancient," "option," and "achievement" with this vocabulary worksheet. Learn the meanings and spellings of these words and more. Encourage your fifth grader to grow his vocabulary with this word-focused worksheet. Kids will learn new words, then write them into sentences.
Find great books for preschool, elementary, and middle school children and teens along with ideas of ways to teach with them in the classroom across the curriculum. Welcome to our newsletter. If you'd like to have each issue delivered to your email address you can sign up for a subscription.

In this issue I've gathered together reviews of several new books. There's a new collection of Edgar Allan Poe stories with essays by contemporary mystery writers, a very silly picture book, a picture book biography of Pablo Neruda, and a nonfiction book by David Macaulay on the human body. If you use any of these with your students I'd love to hear about their reactions.

Connelly, Michael, ed. In the Shadow of the Master: Classic Tales by Edgar Allan Poe. (2009, Morrow. ISBN 9780061690396.) Fiction. Short stories. Three stars (out of 3). Gr 6-12.

No exploration of short stories is complete without reading Poe. If you are looking for an anthology of his stories I highly recommend this one. This anthology includes essays by various mystery writers like Stephen King and Sue Grafton about how the reading of Poe has affected them. The experience of Poe as required reading in school comes up again and again. The "broccoli" that teenagers are required to eat. The essay writers often compare their early experiences of reading Poe with later readings when they were able to discover greater meaning. These essays are a great follow-up after reading the stories but are also a good jumping-off place for discussions about required reading and about revisiting texts later, and how changes in the reader affect later readings. It's a good time to talk about what the reader brings to a story in general. In the collection of Poe stories anthologized here you'll find old standards such as The Raven and The Pit and the Pendulum and lesser-known stories like The Descent into the Maelstrom. Reading these stories in the context of the essays creates a sense of joining a community of readers sharing these stories that so many of us have been affected by. For students it can be their first experience of sharing a readers' community that extends beyond the walls of the school.

Lechner, Jack. Mary Had a Little Lamp. (2008, Bloomsbury. ISBN 9781599901695.) Picture Book. 3 stars. Gr PreK-12.

This book got me right in my funny bone. What a delight it was to find in the pile of new releases. Mary has a little lamp, a gooseneck desk lamp, in fact. And yes, she takes it everywhere she goes. She loves the cool feel of it against her face and she loves the light it gives when she plugs in the cord that she drags it around by. Her parents: "We just don't get it! Why a lamp?" her worried parents said. "We told her she could have a dog. She wanted this instead!" The illustrations are bold and outrageous. A perfect match to this wry poem. The faces of people and animals are particularly expressive. To the surprise of all, Mary heads off to summer camp without her lamp and has a fabulous time. We think perhaps she's finally "normal" until we read the surprise ending. I highly recommend this book for all ages.

Ray, Deborah Kogan. To Go Singing Through the World: The Childhood of Pablo Neruda. (2006, Farrar, Straus and Giroux. ISBN 0374376271.) Picture Book. 3 stars. Gr 4-6.

Beautiful. It's just too predictable to use the word "lyrical" to describe this biography of Pablo Neruda's early years. But there you are, I've said it. Ray weaves short passages of Neruda's prose and poetry into her narrative. The context makes the poetry more accessible.
For instance, when Ray describes the tiny outpost of Temuco, Chile, a new mill town, where Neruda grew up, it helps us understand what Neruda is talking about when the book segues into a clip of his poetry: "From axes and rain, it grew up, that town of wood recently carved, like a new star stained with resin." In turn, Ray's writing is strengthened by the integration of Neruda's words. Born in 1904, Neruda lost his mother as an infant but became very close to his stepmother. Tortured by shyness and embarrassment about his stutter, Pablo retreats more and more into his solitude. He finds comfort and a sense of belonging in books and writing. Pablo Neruda went on to become the most celebrated literary figure in Latin America, a political activist, diplomat and senator. He won the Nobel Prize for Literature in 1971. The book speaks to many areas of the curriculum: poetry, finding one's voice, the outsider, biographies, South America, the 20th century, writing and the power of an individual.

Macaulay, David. The Way We Work: Getting to Know the Amazing Human Body. (2008, Houghton. ISBN 9780618233786.) Nonfiction. 3 stars. Gr 6-12.

Macaulay has turned his able hand from The Way Things Work to "The Way We Work". Well written, with fascinating drawings, the book breaks human anatomy and physiology into seven chapters: Building Life (cell structure), Air Traffic Control (respiration), Let's Eat (digestion), Who's in Charge Here (nervous system), Battle Stations (immune system), Moving On (skeletal and muscular systems), and Extending the Line (reproduction). Most of the 336 pages are covered with Macaulay's illustrations of body parts and processes. There are humorous touches like the "MOM" tattoo on the cross-section diagram of a vaccination needle going into muscle tissue and the diving board in the illustration of the various types of cells that are suspended in the "pool" of plasma that makes up our blood. Many of the drawings have that kind of tongue-in-cheek portrayal of what the mostly straight-faced text is describing. Tiny tourists watch through binoculars on a deck overlooking the back of the throat as the tongue guides food down. Tiny angels with guide wires hold up the large intestine where it goes across the top. The distribution of oxygen is depicted as an amusement park ride. This is a book to curl up with while reading the text, exploring the pictures and extending one's understanding of our body. As Macaulay says in the introduction, "Each of us owns and inhabits an exceptional example of biological engineering and one that deserves to be understood and celebrated." This title is especially appropriate for middle school and high school students. Younger students might need help with the density of information. A good extension of or substitute for textbooks and a good source for report writing. Use this book in health, biology and art classes.

That's it for today. Happy reading!
European ministers to adopt targets for reversing destruction of nature and landscape by 2010

Geneva/Nairobi, 9 May 2003 - Ministers and senior officials from 55 countries will meet in Kiev from 21-23 May for the 5th Pan-European Ministerial Conference "Environment for Europe", the region's highest-level environmental forum. Key items on the agenda include protecting the largest remaining wilderness in Europe (outside of Russia) by adopting the Framework Convention on the Protection and Sustainable Development of the Carpathians. Another highlight will be a decision to formally adopt the goal of halting the degradation of the region's biological and landscape diversity by the year 2010, together with nine specific, measurable targets for ensuring that this overall goal is achieved.

"By setting clear targets that can be tracked and evaluated over the next several years, European leaders will demonstrate their commitment to achieving an environmental renaissance in this complex and dynamic region," said Klaus Toepfer, Executive Director of the United Nations Environment Programme. UNEP, together with the Council of Europe, services the Pan-European Biological and Landscape Diversity Strategy (PEBLDS), through which governments and civil society organizations developed the proposed goal and targets. The PEBLDS promotes the regional implementation of the 1992 Convention on Biological Diversity (CBD), which was negotiated under UNEP auspices.

The proposed new Europe-wide targets for stabilizing biodiversity by 2010 would require:

· taking effective actions by the year 2008 to prevent human activities from damaging forests;

· finalizing an inventory of all high-value natural areas in agricultural ecosystems by 2006 and ensuring that a substantial proportion of these areas are under biodiversity-sensitive management by 2008;

· integrating biodiversity concerns into all financial subsidy and incentive schemes for agriculture in Europe by 2008;

· ensuring the early development of a Pan-European Ecological Network by identifying and mapping all core areas of high ecological value, as well as restoration areas, wildlife corridors and buffer zones, by 2006, and then adequately conserving all core areas by 2008;

· implementing an agreed strategy on invasive alien species in at least half of the region's countries by 2008; and

· increasing substantially public and private financial investments in biodiversity via partnerships with the finance and business sectors, establishing a coherent European programme on biodiversity monitoring and indicators, and implementing national communication, education and public awareness plans in at least half the region's countries, all by 2008.

Sir Brian Unwin, President of the European Centre for Nature Conservation (ECNC) and Honorary President of the European Investment Bank, said, "I strongly welcome the clear biodiversity targets that leaders in Government and the economic sectors will discuss in Kiev. Europe now has a chance to demonstrate that it really means business in putting biodiversity, nature and landscape high on the political and economic agenda. But realization of this objective will require a much closer and contributory partnership between governments and NGOs and the business and banking sectors."

The goal being considered by the Ministers would represent a stronger commitment than last year's global agreement at the World Summit on Sustainable Development to reduce significantly the current rate of loss of biodiversity by 2010.
The Pan-European Biological and Landscape Diversity Strategy (PEBLDS) engages all European member countries of the Environment for Europe process, serviced by the United Nations Economic Commission for Europe, stretching across the continent from Iceland to Kyrgyzstan. The European biodiversity targets are expected to be adopted on World Biodiversity Day (22 May), which was established by the United Nations General Assembly to promote the goals of the CBD.

Note to journalists: For information on accreditation, please see www.kyiv-2003.info. For information on biodiversity issues on the Kiev agenda, please contact Mr. Eric Falt, UNEP Spokesman/Director of the Division of Communications and Public Information, on Tel: +254-20-623292, Mobile: +254-733-682656, E-mail: email@example.com, or Mr. Michael Williams, Chief, Information Unit on Conventions, UNEP Regional Office for Europe, on Tel: +41-22-917-8242, Mobile: +41-79-409-1528, E-mail: firstname.lastname@example.org, or Ms. Agnes Bruszik, European Centre for Nature Conservation, Central and Eastern European Regional Unit, on Tel: +36-1-355-3699, E-mail: email@example.com.

A press conference on the adoption and signature of the Framework Convention on the Protection and Sustainable Development of the Carpathians will take place on Thursday, 22 May at 13:20 at the Press Center of the Kyiv Conference, with the participation of HE Minister Shevchuk, Ukraine; Prof. Klaus Toepfer, Executive Director, UNEP; Claude Martin, WWF International; and HE Minister Matteoli, Italy.

A press briefing on financing the conservation and sustainable use of biodiversity in Europe will take place on Thursday, 22 May at 15:40 at the UNEP/ECNC/IUCN joint biodiversity stand in the Conference's exhibition area, directly after the Ministerial debate on biodiversity. Sir Brian Unwin, President of ECNC and Honorary President of the European Investment Bank, will chair the press briefing, and high-level representatives of a number of governments and international finance institutions will provide information on their activities and plans in the field of banking and biodiversity, which is of major importance for halting the decline of Europe's biodiversity by 2010.

UNEP Information Note 2003/06
We have received a lot of calls from readers with questions on tomatoes and the drought conditions we are experiencing in Lucas County. Some of the most obvious issues we see with tomatoes during this extreme weather are leaf curling, blossom end rot, growth cracks, and sunscald.

Tomato physiological leaf curl is a symptom of water stress in which the leaf curls in on itself. It is usually caused by lack of moisture, and even after watering, the curl is irreversible. In severe conditions the entire plant may exhibit leaf curl, and growth or yield may suffer. Some varieties exhibit these symptoms more than others.

Blossom end rot appears when a plant is experiencing a lack of calcium uptake. The bottom of the fruit develops a black or tan lesion when the fruit is 1/3 to 1/2 grown. While this is unsightly, the major problem is that it weakens the skin of the fruit, allowing other fungi or bacteria to grow. These pathogens may result in further decay of the fruit. This defect is a result of soil moisture fluctuations, alternating between rain, high heat, and drought conditions, which prevent the plant from taking up the necessary calcium. One way to manage this disorder is to maintain a consistent level of soil moisture.

Sunscald is common in plants that suffer leaf loss, whether from a leaf spot disease or feeding insects, or in plants that have been over-pruned or otherwise exposed to sun. Sunscald manifests itself as a pale yellow or white spot on the side of the fruit that receives the most sun. The pale yellow to white patch may become a flattened grayish area, and the surface dries out and becomes almost papery. These affected areas then become prone to infection from fungi or bacteria entering the fruit. Management calls for treating diseases that affect the leaves and reducing direct sun exposure.

Growth cracks appear in response to rapid fruit growth. Some of these growth spurts may be a result of overabundant rain (unlikely this year in Lucas County), high temperatures, or water arriving suddenly after drought conditions.

Most of the country is suffering from the worst drought conditions since the 1950s. Following are some tips to help keep your vegetable gardens from becoming victims of drought. As we mentioned in our articles last year, when planning your gardens it's important to make water easily accessible, so you don't have to haul it out to the "back 40." Some options include drip irrigation or a soaker hose (a water hose with punctured holes). You will use up to 60 percent less water than spraying your plants from the top with the garden hose or by using sprinklers. This technique also ensures that the water is reaching the roots of the plant. Another option is to bury gallon jugs up to the half-way mark (with holes punctured in the side and bottom) between the rows of your plants. When you fill the jug, water will be released at a slower pace, maintaining a more even application of water with little loss to evaporation. If you have not mulched your garden at this point, mulching vegetable plants with just one to two inches of mulch will help to maintain moisture, requiring less watering.

Critical times for irrigation by plant include:

● Broccoli, Cabbage, Cauliflower, and Lettuce -- during head development

● Corn -- silking and tasseling, ear development

● Cucumbers, Eggplant, Peppers & Melons -- fruit development

● Tomatoes -- early flowering, fruit set, and enlargement.

There will be a Food Preservation for Local Produce presentation Aug. 6 from 7 to 8 p.m.
at the Sylvania Branch Library Meeting Room, 6749 Monroe St., Sylvania. It is free and no registration is required.
The seedling stem (hypocotyl) and seed leaves (cotyledons) of common ragweed are green and often splotched with purple. Seed leaves are about ¼ inch (6 mm) long, spoon-shaped or nearly round, somewhat thickened, and have no visible veins. Leaf stalks (petioles) are nearly as long as the seed leaves. The distinct ragweed shape is evident in the first pair of true leaves. These leaves have one or two deep clefts in each margin, forming lobes that are rounded or slightly pointed at the tips. Short, whitish hairs cover the leaves and stem. Hairs are most dense on the lower leaf surfaces.

Common ragweed is an annual broadleaved weed and a member of the composite or daisy family. It has a shallow, fibrous root system and grows 2 to 4 feet (60 to 120 cm) high. Its stems vary from unbranched to bushy. Stems may be hairless, but usually they are densely covered with stiff erect hairs about 1/8 inch (3 mm) long. Mature leaves are 6 to 12 inches (15 to 30 cm) long and 4 to 6 inches (10 to 15 cm) wide and are deeply indented. On the second and subsequent leaf pairs, the veins are visible as depressions on the upper surface and as ridges on the lower surface.

2. Fernlike mature leaves. 3. Stems are usually covered with velvety hairs.

Male and female flowers of common ragweed are in separate flower heads on the same plant (monoecious habit). The female flower heads are green, stemless, and inconspicuous. They are borne singly or in small clusters in the crooks (axils) of the upper leaves. The male heads are more clustered; 10 to 100 flowers are arranged in tight spikes at the tops of stems and branches. Common ragweed flowers in August and early September, producing huge amounts of dry, dusty pollen that is dispersed mainly by wind.

Common ragweed reproduces by seeds that are about 1/8 inch (3 mm) long and have several short spiny projections near one end. Seed production begins in late August and continues through September. Although each female flower produces only one seed, a plant that emerges in mid-May can easily produce 30,000 to 62,000 seeds. Seeds are dispersed by water (through rain-wash channels or gullies), birds, burrowing animals, and humans. During summer and fall, wind plays a minor role in dispersal, but in winter it may roll the seeds for long distances over the surface of crusted snow.

4. Male and female flowers of mature common ragweed. 5. Common ragweed (right) and giant ragweed (left).

Buried seeds can survive in the soil for thirty-nine years or more, remaining viable until conditions are favorable for germination. Soil temperature is the most important factor in germination. Optimum soil temperature fluctuates between 50° and 80° F (10° to 27° C). Optimum soil moisture for seed germination is 14 to 22 percent. Seeds usually begin germinating in May. By the end of the first week in June, germination is 90 percent complete.

Common ragweed is extremely competitive, partly because it can accumulate large quantities of trace elements. Corn studies show that common ragweed generally absorbs much more boron, copper, magnesium, zinc, tin, gallium, vanadium, bismuth, nickel, chromium, potassium, and calcium than do corn leaves harvested at tassel stage. Ragweed grows well on soils containing enough zinc to be toxic to other plants. When corn and ragweed were grown in soil with a heavy concentration of zinc, ragweed absorbed about seven times more zinc than corn did. A severe ragweed problem causes extreme nutritional deficiencies in crops.
Common ragweed is widespread on arable land and is found in cultivated fields, gardens, vacant lots, and waste places, and along roadsides and fence rows. It can grow in clay, silt, and sand mixtures but prefers heavier, moist soils with a pH between 6.0 and 7.0. Ragweed is a typical after-harvest cover in grain and hay fields. It is also abundant in cereal crops and cultivated row crops. Peak growth occurs from mid-July to mid-August. Seed production begins in late August and continues through September.

Giant ragweed (Ambrosia trifida) resembles common ragweed, but the two species differ in size and leaf shape. Giant ragweed seed leaves are more than one inch (2.5 cm) long; common ragweed seed leaves are ¼ inch (0.6 cm) long. The first true leaves of giant ragweed are not deeply indented, whereas those of common ragweed are fernlike. Giant ragweed has large, three-lobed (occasionally five-lobed) leaves that are opposite each other on the stem. Leaves of mature common ragweed are alternately arranged.

Western ragweed (Ambrosia psilostachya) prefers dry prairies and plains. It is common throughout the western, midwestern, and eastern United States except for the Pacific northwest corner, Maine, and the Great Lakes region. This weed grows 1 to 7 feet (0.3 to 2.5 m) tall and is characterized by its bushy, dense growth habit and hairy stems. Western ragweed can reproduce by creeping rootstocks as well as seeds, whereas common ragweed is strictly a shallow-rooted annual.

Skeletonleaf bursage (Ambrosia tomentosa) is a perennial that reproduces by deep creeping rhizomes and seeds. It grows 1 to 2 feet (30 to 60 cm) tall. The burrlike fruit surrounding the female flower bears one to three seeds that are covered with sharp spines when mature. Common ragweed has no burrlike structure or creeping rhizomes.

A native of North America, common ragweed is found throughout the United States except for the northernmost Great Lakes region and northern Maine. It has since spread and is now common throughout Europe, Asia, and South America. The genus name Ambrosia means "food of the gods." While it may be fit for the gods, it causes nausea and sore mouths in livestock. Fortunately, the bitter taste of ragweed makes livestock poisoning infrequent. Ragweed produces huge amounts of pollen in the fall, afflicting millions of people who have allergies. Destroying the plants before they flower lessens the severity of allergic reactions. Common ragweed is also known as Roman wormweed, annual ragweed, wild tansy, and hogweed.

Ragweed can tolerate much abuse, including mowing, trampling, and grazing. Two-inch (5 cm) plants grow back if cut above the seed leaves, and plants cut in midsummer grow new stems and flower only ten days later than uncut plants. Therefore, proper control measures, including both cultivation and herbicide treatment, are important. The longevity of common ragweed seeds enables the weed to counteract the effects of cultivation and plowing. Stirring or plowing the soil decreases but does not eradicate the ragweed population. However, proper cultivation and/or chemical application can kill most of the ragweed seedlings in row crops. Most preemergence herbicides are effective in controlling common ragweed in corn and soybeans. Postemergence applications of Banvel in corn and Basagran in soybeans are recommended for cleaning up escape weeds. For control in small grains, Bifenox (Modown 4F) is often recommended for preemergence control, and MCPA or 2,4-D for postemergence control.
The exact treatment, however, depends on the cropping system. For specific recommendations, consult your county extension agent or the most recent Weed Control Manual and Herbicide Guide, available through Meister Publishing Company, 37841 Euclid Avenue, Willoughby, Ohio 44094. Follow label instructions for all herbicides and observe restrictions on grazing and harvesting procedures. Prepared by W. Thomas Lanini, Extension weed specialist, and Betsy Ann Wertz, agricultural writer. Weed Identification 8
By Calvin Miller

Tuesday, January 04, 2011

Adverbs are supposed to modify verbs, adjectives or other adverbs. Generally they do not so much modify as nullify, mystify, stultify and nearly always dissatisfy. The easiest way to make an adverb is to take an adjective and add -ly to the end of it. You thereby hybridize briskness with something less moving and more confusing. I have always seen -ly words as court words. Lawyers use them a lot. They are waxy and escapist. I recently heard a politician (and a lawyer) use the word allegedly. I thought to myself: Exactly what does that ambiguous word really say? It is like the words comfortably, notably, seemingly, apparently, surprisingly, etc. Such words are fuzz balls in speech, sermons in particular. I also never have believed much in adjectives; they rate only slightly higher than adverbs with me. As adverbs tend to end in -ly, adjectives tend to end in -al, -tive or -ory. Let us take the word expository, for instance. This is a very key -ory adjective that, when applied to preaching, supposedly supports its stronger noun form, exposition. Therefore, the word expository should have something to do with exposing the truth of God's Word; but in many cases the adjective form is so pleasing because it gilds the bearer with an affirmation for just about any preaching style. Expository is a darling adjective ending that just about everybody claims. In Evangelical circles, to say, "I believe in expository preaching!" is the equivalent of saying the Apostles' Creed in an Episcopal gathering. On the other hand, to say, "I don't believe in expository preaching" is the equivalent of saying, "I am a member of the A.C.L.U.," in a Baptist gathering. So just about every preacher I know claims to be an expository preacher, even if the statement is not sincere, even if his or her preaching style is to read a verse of Scripture and immediately leave the verse to major on opinion or irrelevant illustrations. Topical, on the other hand, is an -al adjective despised by the -ory people. The topical preachers see the -al ending as their particular ending, the one that makes the Word relevant, while the -ory-ending people see their word as courageous and honest. The extremist -ories sometimes see the word expository as a command to go through the Bible verse by verse, ignoring the seasons of the sermon. Advent disappears, because they started preaching through the Book of Daniel in September, and by December you're just now getting down to the prophetic fireworks of the book. You secretly would like to rebuke the choir for doing its Christmas music just when you're getting into Gog and Magog. Easter fares a little better than Advent, because most preachers (whether expository or topical) tend to celebrate the Resurrection; but Lent as a word has very little -ory respect because it comes from a word that only means "spring" and therefore has very little to commend it biblically. Once Easter is out of the way, the -ories can get back to their two-year focus on Leviticus. D.A. Carson tried to help bridge this wide -ory versus -al gap by proposing yet another adjective: topositional. This coinage has a lot going for it. It allows a merger of the two adjectives into a new one, which has always appealed to me. The topositional preacher is one who tries to bridge the -ory and -al divide with a bit of common sense.
The topositional preacher is free to consult a lectionary, figure out how to preach to the seasons of our lives, meet the specific needs of the congregation, and develop sermons with expository force that keep the Bible central while also paying attention to the specific needs of the church during the times and seasons of the church year. There are plenty of other -al adjectives that have come to modify sermons: premillennial, liturgical, liberal, traditional, not to mention a host of -ic adjectives (Calvinistic, classic, frenetic, prosaic, poetic, impolitic, lunatic). Overall, good sermons, maybe even great ones, come from good, sensible preachers who avoid the pigeonholes each of these adjectives denotes. Perhaps we are only mature as preachers when our yea is "yea" and our nay is "nay." Truth is a good noun to work on, as are evangelism, baptism, worship and integrity. These are great substantives on which to build the piers of ministry and preaching (two other good nouns). When these firm words come to define our lives, maybe we can play around less with the adjectives and wily adverbs. Ever notice how nounsy and verbsy the fourth gospel is? "In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through Him all things were made; without Him nothing was made that has been made." So the text continues for nine verses with only three adjectives; but then, those were the delightful days when the founders of our faith apparently were stuck on the big ideas: ideas so big that only nouns would suffice for preachers who were out to change the world.
Copyright © University of Cambridge. All rights reserved.

A Dicey Paradox (from http://nrich.maths.org/)

Four fair dice are marked on their six faces, using the mathematical constants $e$, $\pi$ and $\phi$, as follows:

Die A: 4 4 4 4 0 0
Die B: $\pi$ $\pi$ $\pi$ $\pi$ $\pi$ $\pi$ (where $\pi$ is approximately 3.142)
Die C: $e$ $e$ $e$ $e$ 7 7 (where $e$ is approximately 2.718)
Die D: 5 5 5 $\phi$ $\phi$ $\phi$ (where $\phi$ is approximately 1.618)

The game is that we each have one die, we throw the dice once, and the highest number wins. I invite you to choose first ANY one of the dice. Then I can always choose another one so that I will have a better chance of winning than you. You may think this is unfair and decide you want to play with the die I chose. In that case I can always choose another one so that I still have a better chance of winning than you.

Investigate the probabilities and explain the choices I make in all possible cases. Does it make any difference if the dice are marked with 3 instead of $\pi$, 2 instead of $e$ and 1 instead of $\phi$?
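One way to investigate, before reasoning it out by hand, is to enumerate all the equally likely outcomes for each pair of dice. The following Python sketch is not part of the original problem; the die labels A-D match the list above, and the function name is our own. It counts, for each matchup, the fraction of the 36 face pairs won by the first die:

```python
from itertools import product

# Approximate values of the constants marked on the faces.
PI, E, PHI = 3.142, 2.718, 1.618

dice = {
    "A": [4, 4, 4, 4, 0, 0],
    "B": [PI] * 6,
    "C": [E, E, E, E, 7, 7],
    "D": [5, 5, 5, PHI, PHI, PHI],
}

def win_probability(x, y):
    """Probability that one roll of die x beats one roll of die y."""
    wins = sum(1 for a, b in product(dice[x], dice[y]) if a > b)
    return wins / 36.0  # 6 x 6 equally likely face pairs; no ties are possible here

for x, y in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
    print(f"P({x} beats {y}) = {win_probability(x, y):.3f}")
```

Running this shows that each die in the cycle beats the next (A beats B, B beats C, C beats D, and D beats A) with probability 24/36 = 2/3: the "better than" relation among these dice is non-transitive, which is why the second chooser can always pick a die that beats yours.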
1.) An experimental model for a suspension bridge is built in the shape of a parabolic arch. In one section, a cable runs from the top of one tower down to the roadway, just touching it there, and up again to the top of a second tower. The towers stand 80 inches apart. At a point between the towers, 28 inches along the road from the base of one tower, the cable is 1.44 inches above the roadway. Find the height of the towers.
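One possible setup (a sketch of the standard approach; the coordinate choice is ours, not part of the original problem): place the origin at the lowest point of the cable, midway between the towers, so the cable is the parabola $y = ax^2$ and the towers sit at $x = \pm 40$. The given point lies $40 - 28 = 12$ inches from the center, so

$$1.44 = a(12)^2 \quad\Rightarrow\quad a = \frac{1.44}{144} = 0.01,$$

and the height of the cable at a tower is

$$y(40) = 0.01 \times 40^2 = 16 \text{ inches}.$$

Under this reading of the problem, the towers are 16 inches tall.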
AMES, Iowa – Students are back in school, and now is the time for parents to develop routines to help their children succeed academically. Kimberly Greder, an associate professor and Iowa State University Extension and Outreach family life specialist, says parental involvement, more than income or social status, is a predictor of student achievement. Creating a home environment that encourages learning is the first step toward guaranteeing success. Greder says parents need to set high, but reasonable, expectations for their children. Those expectations should apply not only to school achievements, but also to their future careers. Parents should also be involved in their children's education at school and in the community.

"Involvement means many things, including asking your children regularly about their school day and homework," Greder said. "Make sure your children have a regular place and time to study. Visit with teachers and school counselors to understand how your child is doing in school and what you can do at home to help them succeed."

These steps are not only important for overall academic success, but can also help students who may be at risk for dropping out of school. The dropout rate for students ages 16 to 24 was 6.6 percent in 2012, the most recent statistics available from the National Center for Education Statistics. Latino students had the highest dropout rate at 12.7 percent, followed by black students at 7.5 percent. Greder says youth who are most at risk of failing a grade or dropping out of school commonly have parents with low levels of education and low income, are a racial or ethnic minority, and live in a neighborhood that experiences high poverty.

Signs of students who are at risk of dropping out include:

- High rate of absenteeism, truancy or frequent tardiness
- Limited or no extracurricular participation
- Lack of identification with school, which may include feelings of not belonging
- Poor grades, which includes failing in one or more school subjects or grade levels
- Low achievement scores in reading or mathematics for two years or more

Greder suggests parents take these proactive steps to avoid problems in school or potential dropout:

- Regularly talk with your child about his or her school day
- Encourage reading at home and be a role model by reading regularly
- Talk to your child's teachers and school counselor for updates on grades and behavior, and identify resources available to help your child at school
- Watch who your child hangs out with and make sure they are doing healthy activities
- Get your child involved in activities or sports to develop leadership skills and positive communication and conflict resolution skills.

Greder recommends a program such as 4-H that helps youth develop skills to help them at school and throughout life. Iowa State University Extension and Outreach also offers a program that helps Latino youth who are at risk of not completing school to graduate from high school and pursue higher education. You can learn more about this program, Juntos: Together for a Better Education, at: http://www.extension.iastate.edu/humansciences/juntos.
University Students' Perceptions of Pedagogical Documentation: A Qualitative Research Study

- ECU Author/Contributor (non-ECU co-authors, if there are any, appear on document): Nicole Mitchell (Creator)
- East Carolina University (ECU)
- Web Site: http://www.ecu.edu/lib/

Abstract: Ten university students attended a two-week study abroad tour of the early childhood centers in Pistoia, Italy. Using a qualitative design, students participated in reflective writing activities in order to assess their understandings and perceptions regarding pedagogical documentation practices in early childhood education. Pedagogical documentation is one aspect of Reggio-inspired social constructivist practice that is becoming increasingly important in educational practice and teacher education programs as a form of curriculum development and student assessment. Findings from this study indicate that studying abroad increased students' understanding and influenced their perceptions of pedagogical documentation. These findings have significant implications for the field of early childhood educational practice and for teacher education programs embracing a social constructivist approach.

- Date: 2010
- Early Childhood Education

Related resource: University Students' Perceptions of Pedagogical Documentation: A Qualitative Research Study, http://thescholarship.ecu.edu/bitstream/handle/10342/3189/Mitchell_ecu_0600M_10276.pdf (the described resource references, cites, or otherwise points to the related resource).
1883 – City Hall, Albany, New York

Designed by famed architect Henry Hobson Richardson, City Hall was completed in 1883. Albany City Hall has been acclaimed by critics as one of the most beautiful buildings in America and was added to the National Register of Historic Places in 1972. The architect, H.H. Richardson, included in his design a magnificent tower standing 202 feet tall, crowned by a 50-square-foot chamber open to the city below. William Gorham Rice first suggested a carillon for Albany in 1918, as a monument to the soldiers who had given their lives in World War I. A campaign to raise money for the carillon began in 1926, and within a few months over 25,000 citizens had contributed $45,000. The John Taylor Company of Loughborough, England was awarded the contract to build the carillon. The nine-year project culminated in September 1927, when Jef Denyn of Belgium played the opening recital on the first municipal carillon in the United States. During Albany's Tricentennial in 1986, the carillon was restored and enlarged.
Environmentally friendly playgrounds are becoming increasingly popular and prominent. As the "green" label has become a standard line in marketing throughout all kinds of business sectors, the same is evident among playground equipment manufacturers. A few examples of more sustainable playground elements include recycled tires in safety surfacing, recycled plastic benches and playground equipment recycling programs. Another common practice in recent years has been replacing asphalt surfaces with grass and natural surroundings.

"There are plenty of new opportunities to transform decaying asphalt playgrounds or vacant lots into natural play areas," Richard Louv, author of "Last Child in the Woods: Saving Our Children from Nature-Deficit Disorder," wrote in a 2007 New York Times opinion article. "Researchers at the University of Illinois, exploring people's relationship to nature, have discovered that green outdoor spaces relieve the symptoms of attention deficit disorders, improve the quality of interaction between children and adults and, in urban play areas, reduce crime."

One of the first schools to move the green playground concept into actuality was the Tule Elk Park Child Development Center, according to an article in Edutopia, the Internet publishing arm of The George Lucas Educational Foundation. "This sixty-year-old San Francisco school in the city's Marina District went green in the early 1990s, with 20,000 square feet of blacktop removed and replaced by an educational Eden complete with native plantings, shady rest areas, and a nature preserve for the three Bs: birds, butterflies, and bugs," the Edutopia article reports. However, Edutopia cautions that "the Tule Elk outdoor redo cost a half-million dollars more than it did a decade ago, far beyond the means of most public schools today."

A section of the article titled "follow the money trail" observes how Sherman Elementary School, also in San Francisco, obtained its green playground. "Sherman parents stretched available dollars by doing their own site preparation, mulching, grading, paving, and laying down a permeable cover," the Edutopia article states. "Even the project's architect, Jeff Miller, besides providing a spectacular landscape plan, donated his own sweat equity by running a Bobcat grader during Sherman's green-schoolyard weekends."

Another important aspect of eco-friendly play equipment is the minimization of toxic substances used in manufacturing that can obviously pose a danger to kids who frequent a particular playground. "Before 2003, nearly all of the wood used for playground equipment was treated with chromium copper arsenate (CCA) to ensure weather resistance," according to 1-800-Recycling.com. "The arsenic in the finish leached into the soil and was even present in the children who played on this equipment. Eco-friendly playgrounds do a double-duty job of protecting children and the environment from harmful materials like CCA."
Hummingbirds didn’t get their name from their singing voices. Instead, their name comes from the fact that they create a humming sound when they fly. Hummingbirds can fly in any direction — forward, backward, up, down, side-to-side and even upside down — and they do so by flapping their wings at an incredible rate of between 60-200 times per second. When they feed on the nectar of flowers, they have to hover above the flower. They do this by flapping their wings in a figure-8 pattern. Their long and slender bills allow them to reach nectar from deep inside long flowers. They also have long tongues that can lick up nectar at a rate of about 13 licks per second. The next time you have an ice cream cone, try licking it 13 times in only one second! If nectar isn’t available, they’ll also eat insects, tree sap and pollen. Hummingbirds have to eat often, because their fast breathing and heart rate, along with a high body temperature, uses lots of energy. At least 12 species, though, spend their summers in North America. Hummingbirds are a delight to most bird lovers in the United States. Hummingbird feeders are a common sight in many yards. It can be lots of fun to watch hummingbirds hover around feeders waiting for their turn to sip the artificial nectar (sugar water) inside. Here some fun hummingbird facts you can share with your friends and family: - Hummingbirds are the smallest birds. - Hummingbirds can eat up to twice their body weight in nectar every day. - Hummingbirds can perch on feeders, but their feet are not used for walking or hopping. - Hummingbirds can fly up to 60 miles per hour.
Known to most of the rest of the world as football, or "fútbol," the beautiful game is almost exclusively referred to as soccer in the United States, but many Americans may be surprised to learn that our outlier moniker actually originated across the pond.

Games played by kicking, hitting, throwing or carrying a ball have been around for thousands of years, but in the mid-to-late-19th century many sports—such as baseball, soccer, and American football—codified their rulebooks into the forms we recognize today. Modern soccer was born in 1863, when representatives from several English schools and clubs got together to standardize a single set of rules for their matches. They dubbed their new organization the Football Association, and their version of the game became known as "Association Football." The word association was used to distinguish their specific sport from other popular games of the day such as "rugby football."

The word soccer comes from a slang abbreviation of the word association, which British players of the day adapted as "assoc," "assoccer" and eventually soccer or soccer football. (The habit of adding –er to nicknames in British vernacular is frequently attributed to Oxford students of that period, and can be found in other sporting slang such as "rugger" for rugby.)

The parallel names soccer and football (or the combined soccer football) were used more or less interchangeably to refer to association football until well into the 20th century, at which point football emerged as the dominant name in most parts of the world. However, in countries where another football variety was already popular—such as America and Australia—the name soccer stuck around.
The Dharma Heruka

According to Buddha-Dharma there are six realms of incarnation within Dukkha-Samsara:

- Naraka-Gati: Infernal suffering ad infinitum.
- Preta-Gati: Eternal famine of hungry ghosts.
- Tiryagyoni-Gati: Bestial existence, wild as well as domestic.
- Manusya-Gati: Human condition; Nirvana close but elusive.
- Ashura-Gati: Jealous demigods locked in perpetual combat.
- Deva-Gati: Heavenly beings lost through blissful absorption.

Ideally, humans should be capable of breaking this whole vicious cycle, but since these realms are literal as well as allegorical, it appears our world borrows most heavily from Ashura-Gati. Tantric doctrines also stress that the chief demoniac Ashuras were pacified by Siddhartha-Gautama (the historical Buddha) and enlisted as ferocious defenders of the Dharma. Consequently, current circumstances call for a new breed of spiritual warrior to transcend even the Devas as they elevate humanity upon self-sacrificial shoulders.

Enter The Dharma Heruka . . .

Image: Samvara Mandala
Building the Broadcast Band

Thomas H. White -- June 7, 2008

The history of the AM broadcast band (mediumwave) in the United States spans eighty years. This is a review of its first decade -- how it was established, initially evolved, suffered through a chaotic period when government regulation collapsed, and finally was reconstructed by the newly formed Federal Radio Commission, along lines that are still visible today.

Guglielmo Marconi's pioneering wireless work, begun in the late 1800s, developed an important principle which more than twenty years later would help determine which wavelengths would be available for broadcasting. Marconi's most significant early discovery was of the "groundwave" radio signal. This was a key development, which made long-range signaling using electromagnetic radiation practical for the first time.

Prior to Marconi, all electromagnetic radiation was thought to act similarly. Like light, it was believed to normally travel through the air in a straight line until absorbed or reflected. What Marconi stumbled across was that, for longer wavelengths with a properly constructed antenna, some of the radio waves, instead of just "going through space", actually "traveled along the ground", following the contour of the Earth. Thus, the Earth could be used as a guide, carrying signals over the horizon to distant points. Moreover, it turned out the ocean was an even better conductor than soil for transporting radio waves to distant points. It was found that the longer the radio wavelength, the better the Earth acts as a conductor, and the greater the range for a transmission of a given power. For this and other early work, Marconi shared the 1909 Nobel Prize for physics. And for 25 years following his pioneering work the groundwave signal was the most important factor in determining the desirability of a given radio wavelength.

United States Government Regulations

In the United States the use of wireless initially was unregulated -- anyone could operate a radio transmitter anywhere, at any time, on any wavelength. And most utilized the longwave signals that traveled so well across land and sea. Naturally severe interference occurred with everyone trying to use the same wavelengths. Eventually it was decided to do something about this, and because the individuals involved were the United States government, the action took the form of An Act to Regulate Radio Communication, passed on August 13, 1912.

A year earlier a Radio Service had been established in the Department of Commerce and Labor's Bureau of Navigation. It was initially charged with making sure ships carried wireless equipment, as required by a June, 1910 act. With the passage of the 1912 Act, the job of licencing stations and operators was added to the Radio Service's duties. The country was divided into nine radio inspection districts, with a district headquarters for a Radio Inspector set up at a major port within each district.

Initially radio was dominated by ship-to-ship and ship-to-shore stations, plus amateurs who comprised the bulk of the land stations. As far as government control goes the 1912 Act was fairly liberal, since some, particularly the Navy, had wanted to nationalize radio altogether. Unfortunately, the Act's language wasn't always very clear, and was geared toward two-way communication between stations that were permitted, and even expected, to use various wavelengths of their own choosing. Fourteen years later these flaws would help cause a breakdown in the regulation of broadcast stations.
The 1912 Act essentially divided the radio spectrum into four parts. Following the standard set by the Service Regulations of the 1912 London International Radiotelegraph Convention, a choice band of wavelengths, from 600 to 1600 meters (500 to 187.5 khz), was appropriated primarily for government use. This band was selected due to the superior groundwave coverage these wavelengths provided.

Two additional bands, available for commercial use, were designated on either side of the government band. The first group, consisting of wavelengths greater than 1600 meters (frequencies less than 187.5 khz), actually had groundwave coverage superior to that of the government band. Here were found the huge transoceanic stations. The other commercial band ranged from 600 meters to 200 meters (500 khz to 1500 khz). Groundwave coverage provided by these wavelengths rapidly diminished as the wavelength decreased. This band was used by commercial stations with more limited service areas, and for other special purposes, such as 300 and 220 meters (1000 and 1365 khz), set aside because ship antennas were too short for effective use on longer wavelengths.

The final "band" was really a single wavelength -- 200 meters (1500 khz). Although they were not mentioned by name, this wavelength was assigned to amateur stations. Because of its poor groundwave coverage, it was considered to be all but useless, and was far removed from the wavelengths amateurs had used prior to 1912. Still, this limited allocation was better than being completely eliminated, which some, again particularly in the Navy, had favored.

The Act also allowed individual amateurs to receive "special" licences to use longer wavelengths, and a number were issued within the 200 to 600 meter band, in order to support communication between amateurs doing "relay" work. (According to the Bureau of Navigation's September 28, 1912 edition of Regulations Governing Radio Communication, "...a special license will be granted only if some substantial benefit to the art or to commerce apart from individual amusement seems probable".) Still, the Act was a major setback for amateurs, and severely restricted their activities.
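Viewed as a lookup from wavelength to service, the 1912 division is easy to make concrete. The following sketch is purely illustrative -- the band labels and edges follow the description above, the handling of the exact 600 meter boundary is a simplification, and the khz figures use the era's rounded 300,000 km/s speed of light:

```python
C_KM_S = 300_000  # rounded speed of light; km/s divided by meters gives khz directly

def band_1912(wavelength_m: float) -> str:
    """Classify a wavelength under the 1912 Act's four-part division (simplified)."""
    if wavelength_m > 1600:
        return "commercial longwave (huge transoceanic stations)"
    if wavelength_m >= 600:  # treating exactly 600 meters as government territory
        return "government band (600 to 1600 meters)"
    if wavelength_m > 200:
        return "commercial band (limited service areas, special purposes)"
    if wavelength_m == 200:
        return "amateur (the lone 200 meter wavelength)"
    return "below 200 meters (considered all but useless in 1912)"

for wl in (2500, 800, 360, 200):
    print(f"{wl} m = {C_KM_S / wl:.1f} khz: {band_1912(wl)}")
```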
The Rise of Voice Broadcasting

All early radio work used telegraphic signaling, in most cases using spark transmitters. However, following the example of the wire telegraph, which would lead to the telephone, many worked to transmit sound by radio. As this work progressed hundreds, perhaps thousands, of experimental and publicity broadcasts were made. Some were even conducted on regular schedules. However, the first technologies used -- high-frequency spark, alternator and arc transmitters -- turned out to be dead-ends in the attempt to provide reliable, high quality, and cost effective voice service. Only with the development of vacuum tube continuous wave transmitters, just before the start of World War I, did broadcasting become practical.

During the war all radio equipment -- both sending and receiving -- was either shut down or taken over by the United States government, so broadcasting experimentation ceased. However, the new vacuum tube transmitters were perfected under government supervision. In late 1919, with the end of the wartime restrictions on transmitting, numerous commercial, experimental, government and amateur stations renewed dabbling with broadcasting, using the new vacuum tube transmitter designs. By its September, 1920 issue, QST magazine would note that "it is the rare evening that the human voice and strains of music do not come in over the air".

The Westinghouse Stations

Of all the players involved with broadcasting experimentation and development, it was the Westinghouse Electric and Manufacturing Company, headquartered in East Pittsburgh, Pennsylvania, which would finally spark the transformation of radio broadcasting from an experiment into a national institution. Westinghouse was a relative newcomer to radio work. Its post-war efforts arose out of wartime contracts, combined with the broadcasts of Westinghouse engineer Frank Conrad's experimental station, 8XK. Westinghouse was to become the first concern to have the vision, commitment, financial stability, and clout to propel broadcasting into the national consciousness.

Previously the person most associated with broadcasting had been Lee DeForest, who was behind a number of efforts by various companies on both coasts, beginning before the war. However, these activities always seemed to eventually evaporate. In particular, DeForest had a knack for getting stations shut down for violating regulations. With a well established firm like Westinghouse there was no doubt their broadcast activities were a stable and on-going service, funded in part by profits from the sale of Westinghouse radios to the general public. In contrast, with the DeForest efforts there was always the nagging suspicion that a station's main purpose was to promote the sale of watered stock, or that the company responsible, along with the broadcasts, might soon disappear, as had so many of the previous efforts. By 1921, when Westinghouse's work began to bear fruit, DeForest had left radio research, and was concentrating on work on a sound-on-film system for talking movies.

Westinghouse inaugurated its new broadcast service from East Pittsburgh with presidential election returns on November 2, 1920. Most accounts simplify things by crediting this historic broadcast to KDKA, operating on 360 meters. Actually, either due to a delay in the delivery of KDKA's Limited Commercial licence, or, more likely, indecision about the proper classification for the station's entertainment offerings, the election night broadcast went out under the temporarily assigned Special Amateur call of "8ZZ". Moreover, it wasn't until the fall of 1921 that KDKA moved to 360 meters.

Westinghouse's broadcast was hardly unique, as a number of other stations sent out election returns at the same time, and some had also broadcast results during previous elections. Nor were there historic numbers of listeners to the broadcast, since contemporary estimates put the audience at about 100 receivers, and it attracted little attention outside of the immediate Pittsburgh area. However, Westinghouse differentiated itself from the others which had made broadcasts by launching a regular daily schedule, with plans to establish additional stations if the Pittsburgh station proved successful.

Westinghouse understandably sought good coverage for KDKA and its later broadcast stations. However, the commercial longwave band beyond 1600 meters was too congested to be usable, while the 600 to 1600 meter band was reserved for government stations. Thus, KDKA's home would have to be somewhere within the 200 to 600 meter band -- the only wavelengths remaining after earlier radio settlers had claimed the longer wavelengths with their superior groundwave coverage.
Information is sketchy, but contemporary reports state that the election night broadcast, using the callsign 8ZZ, was transmitted on a wavelength of 550 meters (545 kilohertz), while later publicity places KDKA's broadcasts on 330 meters (909 khz). There is evidence of shifting around, as some later reports list one or more of the Westinghouse stations on 375 meters (800 khz).

With the success of KDKA, the fall of 1921 saw the establishment of three additional Westinghouse stations -- WJZ Newark, NJ (now WABC, New York), WBZ Springfield, MA (now in Boston), and KYW Chicago, IL (now in Philadelphia, PA). At this time Westinghouse officials lobbied for a special wavelength for their stations, and after negotiating with Commerce officials, 360 meters (833 khz) was selected. (Unlike DeForest, Westinghouse seems to have had good relations with government regulators.) Louis R. Krumm of Westinghouse later claimed credit for proposing 360 meters as the standard. The first station to receive a license that explicitly specified 360 meters was WBZ, on September 15, 1921. Licences for 360 meters for WJZ, KDKA, and KYW soon followed.

Establishment of a Broadcast Service

Westinghouse apparently thought only its stations would be assigned to 360 meters. However, the Commerce Department had no intention of giving Westinghouse a wavelength monopoly. Officials began assigning 360 meters to broadcast stations that other companies set up beginning in the fall of 1921. Unwittingly, Westinghouse's suggestion for itself instead became the seed wavelength which would flower into the broadcast band.

By late 1921 enthusiasm for broadcasting had started to develop nationwide, and the Bureau of Navigation decided to formally designate standards and wavelengths for a specific broadcast service. Moreover, in addition to entertainment broadcasts, it saw the need to provide for broadcasts of official government reports. On December 1, 1921 two wavelengths were formally set aside for broadcasting, set up as a service category within the already existing "Limited Commercial" class of stations. A clause was added to the Limited Commercial regulations, reading: "Licences of this class are required for all transmitting radio stations used for broadcasting news, concerts, lectures, and such matter. A wave length of 360 meters is authorized for such service, and a wave length of 485 meters is authorized for broadcasting crop reports and weather services, provided the use of such wave lengths does not interfere with ship to shore or ship to ship service".

Thus, broadcasting was formally introduced using just two wavelengths -- 360 and 485 -- in the 200 to 600 meter band. However, it would rapidly expand, until it ended up occupying almost all of this band, plus some of the "useless" territory beyond 200 meters. In addition, it would also drive out the ship-to-shore and ship-to-ship services it initially was required to protect. At this time there were few limitations on who could get a broadcast station licence. Generally all you needed was the desire, the equipment, and American citizenship -- plus an on-duty technician holding at least a commercial second-grade operator's licence.

"Crop Reports and Weather Services"

Having a separate wavelength -- 485 meters -- for government market and weather reports made theoretical sense, but ultimately proved impractical. After the Navy Department, the Agriculture Department had been the government agency most involved in pioneering radio work.
In particular, it wanted to speed weather and market information to isolated farmers, at that time dependent on mailed daily newspapers. (The August, 1913 Monthly Catalogue of United States Documents noted that the Weather Bureau had begun a daily radiotelegraphic "broadcast" of weather reports, which it explained as follows: "'Broadcast', as the term is used in the Radio Service, means that the message is fired out into the illimitable ether to be picked up and made use of by anybody who has the will and the apparatus to possess himself thereof".)

Beginning with international conventions preceding the 1912 Act, it was the practice to set aside certain wavelengths for special purposes. So, it was natural to set aside a special wavelength for broadcasting market reports and weather forecasts. Then a radio could be tuned to a single wavelength and receive service from a number of stations. If the reports had instead gone out on 360 meters, farmers would have risked having distant reports drowned out by nearby stations broadcasting at the same time. The 485 meter wavelength -- with its better groundwave coverage -- was probably seen as the more important development, and a greater public service, than the mere entertainment being sent out on 360. On many occasions the Bureau of Navigation's Radio Service Bulletin listed stations and schedules of weather and market broadcasts, but it never featured the latest listing of stations carrying the Chase and Sanborn Hour.

Any broadcast station could get 360 just for the asking, and most did. However, before the Bureau of Navigation would issue an authorization for 485 meters the station first had to submit a written authorization from the Chief of the Bureau of Markets and Crop Estimates or the Chief of the Weather Bureau. (In its 1922 annual report, the Agriculture Department reported it was limiting 485 authorizations to just two stations per community.) Although the number of broadcast stations authorized to use 485 meters rose from 15 to 137 in the year ending March, 1923, there were few problems with interference. The two Bureaus strictly regulated dissemination of government reports. They also controlled the schedules for the broadcasts, so that stations sending out reports on 485 meters would not interfere with each other.

From the government's point of view the dual-wavelength system worked pretty well. For example, in late 1922 the Weather Bureau office in Springfield, Illinois announced that, using a good receiver, a daily schedule of thirteen weather and market reports, from seven different broadcast stations, could be heard in central Illinois on 485 meters. Unfortunately, individual stations were not as impressed, especially since most concentrated on the entertainment side of their offerings. Credo Fitch Harris, in "Microphone Memoirs", a history of the "Horse and Buggy Days" of WHAS in Louisville, Kentucky, wrote:

"What logic gave rise to that mandate to tune a transmitter suddenly from its normal operation of 360 meters to 485 for the weather reports, and then quickly back to 360 for the continuance of a program, has never been explained and it still remains one of the most profound departmental enigmas. Practically none but farmers yearned passionately for news of tomorrow's weather, and crystal sets were incapable of serving distant areas. There were a few, though quite exceptional, instances of longer range receivers -- using earphones, of course.
"These were homemade affairs built from published diagrams and strung out from mother's parlor table to the kitchen, but so imperfect and confusing to tune that usually we had sent the forecast on 485, and were back again on 360, before the tyro had emerged from his wilderness of tangled wires, knobs, rheostats and other gadgets. The rulings were so patently absurd that the chief of the Louisville Meteorological Bureau personally appealed to Washington and had it changed. Parenthetically, for fifteen years I have tried to discover the father of it. None will confess."

In defense of the Weather and Market Bureaus, it's doubtful they expected a station to jump back and forth between 360 and 485 meters like WHAS did. Most likely they expected the station to set aside, and publicize, a fixed period each day for the broadcasts on 485, after which it would sign off. Then, after a decent interval, it would start up operations on 360. In any event, as reviewed later the split wavelength operations ended in May of 1923, not because of the intervention of the Louisville Meteorological Bureau, but as a result of the expansion of the frequencies allocated to broadcasting. (The concept of broadcast frequencies reserved exclusively for public weather reports continues into the present, via the NOAA Weather Radio frequencies.)

"News, Concerts, Lectures, and Like Matter"

The government, viewing broadcasting as a public service, may have thought that 485 meters was the more important development. However, the general public saw 485 meters as only a sideshow. The main attraction was the entertainment offered on 360 meters. However, in contrast to the carefully controlled activities on 485 meters, the situation on 360 meters eventually became badly congested, especially in the larger cities. In the year ending March, 1923 the number of stations authorized for 360 meters jumped from 65 to 524. Moreover, it was up to the stations themselves to come up with equitable timesharing agreements when more than one station was located in the same area. Although most stations only wanted to broadcast a few hours per day or week, most coveted the prime early evening hours.

In the New York City area, Westinghouse thought that WJZ, which began broadcasting in October, 1921, was going to be the only station there on 360 meters. Certainly it didn't see a need for additional ones. However, by the middle of 1922 nine more stations had been licenced for 360 meters in the region, requiring a complicated and hard-fought timesharing agreement for the New York City area. Other cities had similar problems. San Francisco had been an early broadcast center, with a number of experimental stations operating on various wavelengths, some of which pre-dated KDKA. However, when the new policies required them to be converted to broadcast stations, they congregated on 360 meters, requiring a timesharing agreement. In a few cases talks came to an impasse, and two stations would start to transmit at the same time, drowning each other out. Officials at the Commerce Department normally refused to get involved in these disputes. Eventually the stations, which looked pretty silly, would bow to public pressure and work out some sort of compromise. (No doubt it also was difficult to lure talent with the opportunity to participate in "broadcasts" that were completely drowned out by another station.)

Meters and Kilohertz

The initial broadcast service allocations referred to the "wavelengths" that stations would use.
This practice dated back to early radio work, when the length of the antenna had a strong influence on the wavelength of the radio signals that were transmitted and received. However, for technical reasons, beginning in 1923 the Bureau of Navigation switched to specifying a station's "frequency", as measured in "kilocycles per second" (later recast as "kilohertz"). Frequency and wavelength are reciprocals -- to convert one to the other you just divide the value into the speed of light.

So, how many kilohertz is 360 meters? Suddenly the simple division is not so simple, because the speed of light was only roughly known in the early 1920s. In some early Department of Commerce references 360 meters was stated to be 834 khz. In other cases the rounded figure of 300,000 kilometers/second was used for the speed of light, so depending on how many decimal places were calculated the answer became 833 or 833.3 or 833.333. Sometimes a more precise estimate, 299,820, was used for the speed of light, which gives a result of 832.8 khz. And if you use the even more precise modern estimate of 299,792.458, the answer becomes 832.757 khz. (485 meters is equivalent to either 618 or 619 khz, depending on the value used for the speed of light.)

All this leads to a question -- if you could go back to 1922 with a modern radio with a digital frequency readout, and you wanted the radio tuned to the exact frequency equivalent for a station operating on 360 meters, what would you punch in? The following excerpt from "Microphone Memoirs" gives a clue:

"The way a transmitter was complacently assumed to be kept on its required 360 in those days could be amusing now, or horrifying. A government inspector arrived every four or five months to 'measure' us. In front of the main panel was a large aluminum disk with a center knob, devised by the manufacturer to vary its emitted frequency. The supervisor would gravely and thoughtfully turn that knob back and forth, watching his meter betimes. He would then take a pencil and make a thin mark on the disk's circumference, announcing solemnly: '360'. Another mark: '485 for the weather'. If those pencil strokes escaped being rubbed off by an over-zealous janitor some early morning, we probably retained an accuracy of five or ten meters, above or under par. Or if they remained long enough for the supervisor's next visit, it was interesting to observe that he invariably rubbed them out himself and put on new ones."

A ten-meter swing each way for a station at 360 meters translates to a frequency drift from roughly 810 to 857 khz. Obviously WHAS' setup wasn't very precise. But its transmitter was no homebrew concoction -- it was an expensive top-of-the-line 500 watt Western Electric, the best that money could buy. Government regulators would struggle for a decade with keeping stations on their assigned frequencies. [Kilohertz to Meters Conversion Charts]
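In modern terms the conversion is just f (khz) = c (km/s) / wavelength (m), so the entire ambiguity reduces to which value of c is plugged in. A minimal sketch of the arithmetic -- the three speed-of-light figures are the ones quoted above, and the drift calculation mirrors the WHAS anecdote:

```python
def meters_to_khz(wavelength_m: float, c_km_s: float) -> float:
    """Convert wavelength in meters to frequency in khz (c given in km/s)."""
    return c_km_s / wavelength_m

# The three speed-of-light estimates mentioned in the text.
for label, c in (("rounded 1920s figure", 300_000.0),
                 ("more precise 1920s figure", 299_820.0),
                 ("modern figure", 299_792.458)):
    print(f"360 meters, {label}: {meters_to_khz(360, c):.3f} khz")

# WHAS's estimated ten-meter swing each way around 360 meters:
print(f"370 m = {meters_to_khz(370, 300_000.0):.0f} khz, "
      f"350 m = {meters_to_khz(350, 300_000.0):.0f} khz")
```

Run, this prints 833.333, 832.833 and 832.757 khz for the three estimates, and a drift range of 811 to 857 khz for the ten-meter swing.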
By the end of 1921, 29 broadcast station authorizations had been issued for 360 and 485 meters. In early 1922 the broadcasting bandwagon rapidly gained momentum. On board, in addition to formally recognized broadcast stations, were government, experimental, technical and training school, plus regular and special amateur stations, each operating on their own wavelengths.

Government stations were outside the control of the Bureau of Navigation, so nothing could be done about them. In any event, many of their broadcasts were speeches by elected officials, so it probably wouldn't have been wise to try. However, the rest were required to conform to the new regulations, and convert to formal broadcast stations, if they wanted to continue broadcasting to the general public. Broadcasts by amateur stations were explicitly prohibited beginning in January, 1922. The Bureau of Navigation regarded most of the broadcasts coming from these stations as frivolous -- in most cases the best they could offer were scratchy phonograph records. Since most people already had phonographs there didn't seem to be a pressing public need to fill the airwaves with recorded songs. (According to the June 30, 1929 Annual Report of the Chief of the Radio Division of the Department of Commerce: "During the early days the programs of a majority of stations consisted almost entirely of phonograph records. The announcers had favorite records which they repeated numerous times during a program. The Secretary of Commerce foresaw the danger of the station losing public interest if a change were not made in the programs.") Amateur broadcasts were said to only be "temporarily" banned, pending new regulations. Eighty years later amateurs are still waiting for the ban to expire. In the meantime, some amateur stations were converted into broadcast stations, helping to swell the broadcasting ranks.

First National Radio Conference

By early 1922 it was clear that broadcasting was an important, and probably permanent, development. It was also beginning to tax the ingenuity of its regulators. In order to receive advice on a number of pressing issues, Commerce Secretary Herbert Hoover convened a Conference on Radio Telephony, composed of representatives of various government agencies and radio groups. The conference met in Washington from February 27th to March 2nd, and again from April 17th to the 19th.

The resulting conference report proposed that major portions of the 200 to 600 meter band be set aside for broadcasting. In fact, it suggested separate bands for Government and Public, Private and Toll, and City and State Public broadcasting stations. It favored a total ban on "direct" advertising, and even suggested rules governing broadcasting by private detective agencies. The report also favored legislation strengthening the Commerce Secretary's regulatory authority.

Secretary Hoover, while lauding the efforts of the conference, moved cautiously, partly because Congress failed to pass any new legislation. Only a single new wavelength, 400 meters (750 khz), was added, as a second entertainment wavelength. This was designated the "Class B" wavelength, with 360 meters now referred to as the "Class A" entertainment wavelength. Although 400 meters was envisioned for the use of "better quality" stations, in order to avoid the appearance of censorship only technical requirements had to be met in order to be assigned to the new wavelength. The maximum power permitted was 1000 watts, and "mechanically reproduced" programs were prohibited. As on 360 meters, stations in the same locality had to devise timesharing agreements.

Class B Stations on 400 Meters

In most cases there are about a dozen claimants when you try to identify "the first station" in one category or another. Surprisingly, there seems to be universal agreement that the first Class B station was KSD, the Saint Louis Post Dispatch station in Saint Louis, Missouri, beginning in late September, 1922 (now KTRS-550). Eventually around thirty stations nationwide qualified to use 400 meters.
Although most stations that met the new standards welcomed the chance to move to the less congested 400 meter wavelength, for some it caused problems. The March, 1923 edition of Radio News carried the following report: "One big broadcasting station after trying out the Class B licence on 400 meters for a short time has returned to the 360 wave. The Department of Commerce has just relicenced WHAS, The Louisville Courier Journal, on 360 meters. That paper believes the 360-meter wavelength is better suited for broadcasting, and more popular with the fans".

In fact, the order to move to 400 meters had caused an odd crisis at WHAS. As recorded in "Microphone Memoirs", the following exchange took place between station manager Harris and his technician:

"'Now what?' I asked. 'Can you put us on 400?'

"'I can try,' he said. 'When the supervisor measured us last September he marked 360 and 485, but the 485 got rubbed off. Let's see. The 400 meter change would be -- ' (out came the slide rule). 'Well, it would be about a third up from where we are to where 485 is if 485 was there, which it isn't. We can't move a third up to nowhere. Maybe I can guess it, within about ten or fifteen meters.'"

This technical problem, plus fear that their listeners would find it as hard to retune their sets to 400 meters as WHAS did, prompted Harris to get permission to stay on 360 meters.

Station Wavelength Assignments

With the addition of 400 meters, it was now possible for a broadcast station to be licenced to 360-only, 400-only, 485-only, 360/485, or 400/485, where 360 and 400 were Class A and B entertainment wavelengths and 485 continued as the Market and Weather wavelength. Below is a chart reviewing the authorizations on these wavelengths, compiled from official station lists issued for selected dates from March 10, 1922 to March 1, 1923:

[Table: "Station Wavelength Assignments" and "Wavelength Totals" -- the figures were not preserved in this copy.]

(Links to on-line copies of these station lists are available at Early Radio Station Lists Issued by the U.S. Government.)

Dawn of the Skywave

Because the stations on 400 meters had superior equipment, they did a better job of staying on their assigned wavelengths. Surprisingly, in some cases this resulted in more interference between stations. A letter from Murfreesboro, Tennessee, appearing in the February, 1923 issue of Radio News, in part complained: "Can't you start some kind of a campaign among your thousands of Radio fans and readers to get Washington to do something about this wave-length question? Since all the good stations have gone to 400 meters it is worse than ever, as they are square on 400 meters and all come in together... while before they were scattered below and over 360 meters".

This letter reflects a new problem which was being encountered during nighttime hours. It was the result of the development of better radio receivers, combined with the existence of long-ignored "skywave" radio signals. Until the early twenties, most radio receivers used by both the public and commercial companies had been primitive. The majority were crystal sets, limited to picking up strong signals, which in practice usually meant only groundwave signals. The spread, in the early twenties, of receiving sets using vacuum tube amplification meant radios were now thousands of times more sensitive. The wavelengths assigned to broadcast stations had relatively poor groundwave coverage, and the stations used relatively low power, with few rated at more than 500 watts.
So, considering only the groundwave signal, stations could be packed fairly close together on the same wavelength without unduly interfering with each other. However, with the introduction of the better receivers, at night during the prime listening hours people were beginning to receive stations from far beyond the range of the groundwave signal. This would have profound effects on how to deal with interference between stations operating on the same wavelength.

At this point it's valuable to return to Marconi's original work. Like many scientific discoveries, his discovery of the groundwave signal both advanced and hindered the art, because it led to a single-minded pursuit of good groundwave coverage. Huge spark stations of tremendous power were developed, using giant antennas. By later standards these early stations were absurdly overpowered -- in fact they were so powerful that their signals were probably traveling around the world more than once. However, because receivers were so insensitive, these transmitting behemoths were needed in order to insure quality service.

Forgotten in the "cult of the groundwave" was the fact that not all of a station's signal is groundwave -- some of it does indeed travel "through the air". Originally it was thought that these "skywave" signals merely fled into the cosmos, never to be heard again. However, soon there was evidence that something strange was happening, especially at night. Somehow, some of the signals were coming back to Earth at distant points. English physicist Oliver Heaviside did pioneering work on the subject, and found evidence that high above the Earth there is an encircling layer of charged particles. This was originally called the Heaviside Layer, but is now known as the ionosphere, and is the cause of the reflected signals.

At first it was mainly viewed as a curiosity, responsible for "freak" reception. Unlike the groundwave signal, which is unaffected by the sun, and has the same strength day and night, the strength of the reflected skywave signal is variable, and usually was too weak to be readily detected by the primitive receivers then in use. Also, on the wavelengths then in use there normally wasn't any skywave signal during daylight hours, so daytime reception was completely dependent on the groundwave signal. In fact the skywave signal was seen mainly as a nuisance, since it interacted with the groundwave signal, causing fading.

With the introduction of broadcasting, information about skywave signals suddenly became important. However, a full understanding of what was taking place did not exist in the early twenties. It was obvious the sun was involved, since in most cases skywave signals appeared only at night. Eventually it was determined that the ionosphere is composed of layers, each with distinctive characteristics. What became known as the "E" and "F" layers are responsible for reflecting radio signals back to Earth. (Unlike groundwave signals, the strength of reflected skywave signals is essentially the same across the entire 200 meter to 600 meter band.) Due to the ionizing effect of the sun, these reflecting layers actually are more concentrated, thus more effective at reflecting radio signals, in daylight hours than at night. Therefore, in theory skywave signals should be even stronger during the daytime than at night. However, it turned out that an inner "D Layer" also existed.
And the D Layer absorbs signals in the wavelengths that happened to be assigned for broadcasting, blocking them before they have a chance to reach the reflective outer layers. But unlike the E and F Layers, the D Layer only exists during daylight hours, which is why skywave signals disappear during the day but return at night. An analogy is that, when talking about the wavelengths assigned to broadcast stations, the E and F Layers act as a mirror reflecting signals back to Earth, while the D Layer is a curtain drawn in front of the mirroring layers during daylight hours.

(It is popularly believed that old "Amos and Andy" shows are winging their way through the cosmos. Unfortunately for old radio buffs on Alpha Centauri, in most cases these signals actually were snuffed out by the absorbing D and reflecting E and F layers a fraction of a second after they left the radio station. In the mid-twenties amateurs began experimenting with frequencies higher than the traditional 1500 khz. As expected, the higher they got the worse the groundwave signal. However, unknown to the amateurs, when you get above a certain frequency the D layer no longer absorbs the signals, but they continue to be reflected back to Earth. Thus, they stumbled upon the shortwave frequencies, which have almost no groundwave capabilities -- thus are "worthless" under the old view -- but also have globe-spanning skywave coverage, sometimes even better during the day than at night. As you continue to go up in frequency, you eventually reach frequencies which pass through the entire ionosphere, both day and night. Therefore, unlike AM band and shortwave signals, FM and TV signals are indeed spreading throughout the cosmos.)

The greater nighttime coverage on broadcast wavelengths meant it was now possible, at night, for stations to interfere with each other over great distances. In some cases this meant, as reported in the Murfreesboro letter, hearing more than one program at the same time. However, there was an even worse problem. When two stations are close in frequency their signals interact, creating a piercing "heterodyne" tone, which was estimated to extend ten times as far as the audio interference. (For example, if one station were on 833 khz, and the other on 830 khz, the resulting heterodyne tone would be 3 khz, which is the difference between the two station frequencies.) If stations stay within about .05 kilohertz of each other the tone disappears. However, as seen by the earlier WHAS quote on frequency control, with early 1920s technology any such convergence would have only been a fleeting coincidence. (At this time many stations drifted in frequency both in response to what was being transmitted and whenever their antennas swung in the wind. The "flattop" antennas in use at this time had stronger skywave signals, and weaker groundwave, than the modern "vertical" antennas that supplanted the flattops beginning in the 1930s.)

Until the development of affordable precise frequency control, plus directional antennas suitable for use on the broadcast band frequencies -- both a full decade away -- the only tools for preventing heterodyning on a common wavelength were wide separation of stations, timesharing, plus reduced nighttime powers and daytime-only operation.
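The beat arithmetic itself is simple -- the heterodyne tone equals the difference between the two carrier frequencies -- which makes plain why the loose frequency control of the day guaranteed whistles. A small sketch; the 0.05 khz audibility threshold and the 833/830 khz pair are the figures quoted above, while the second pairing is invented for illustration:

```python
def heterodyne_khz(f1_khz: float, f2_khz: float) -> float:
    """The heterodyne (beat) tone equals the spacing between the two carriers."""
    return abs(f1_khz - f2_khz)

TONE_FLOOR_KHZ = 0.05  # per the text, closer than about 0.05 khz the tone disappears

for f1, f2 in ((833.0, 830.0), (833.0, 833.04)):
    beat = heterodyne_khz(f1, f2)
    verdict = "audible whistle" if beat > TONE_FLOOR_KHZ else "no audible tone"
    print(f"{f1} khz vs {f2} khz -> {beat:.2f} khz beat: {verdict}")
```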
Second National Radio Conference

By early 1923 it had become clear that a major overhaul of the broadcast service was needed. The most critical problem was that two entertainment wavelengths were not nearly enough. Ideally each station should be given its own wavelength, but that was impractical. Secretary Hoover convened a second conference of government and industry representatives, beginning on March 20th. Once more the conference proposed increasing the number of broadcast frequencies. This time the Commerce Department acted quickly, announcing in early April a sweeping expansion of the broadcast allocation. Over a period of time broadcasting was to be assigned, in 10 khz steps, all the frequencies from 550 to 1350 khz (545 to 222 meters). Stations would still be divided into Class A and B, but this now would refer to two bands of frequencies. Class A stations would be limited to 500 watts, while Class B's would use 500 to 1000 watts of power. Although a few new Class A stations were assigned to the new frequencies beginning in April, the full plan did not start to go into effect until noon on May 15th.

Under the plan, none of the multitude of stations operating on 360 meters would be forced to change to a new frequency -- they could stay on 360 meters, as "Class C" stations, if they wished. However, no new stations would be assigned to 360 meters, and it was hoped that all the current 360 meter residents would soon voluntarily switch to the new, less congested, Class A and Class B frequencies. Once the stations on 360 meters disappeared, the new band would consist of 50 Class B frequencies running from 550 to 1040 khz, plus 31 Class A frequencies, from 1050 to 1350 khz.

The Class A frequencies consisted of lower power stations -- some using as little as 5 watts -- which were located relatively close together. The initial plan specified that about two-thirds of the frequencies could be used in all nine of the radio inspection districts, while the rest would be used in at most three assigned districts. Under this setup, nighttime heterodynes were unavoidable on the Class A frequencies. The upper limit of 1350 khz available for Class A stations apparently was set by the existing ship wavelength at 220 meters (1365 khz).

There were more Class B frequencies available than stations qualified to use them, which was a good thing since a number of the frequencies were not immediately usable. The clump of Class C stations on 833 khz was pretty shaky in the frequency control department, so initially no Class B stations were assigned from 810 to 860 khz, giving the Class C's a little wobbling room. Also, 1000 khz (300 meters) was an international ship frequency, so broadcasters stayed clear of 980 through 1040 until the ships could be reallocated to other frequencies. The old Class B entertainment wavelength at 400 meters became just another Class B frequency, now known as 750 khz. (Ironically, this frequency was assigned to WHAS, which apparently had finally figured out how to tune its transmitter to 400 meters.)

The separate Market and Weather wavelength on 485 meters disappeared. To the relief of stations like WHAS, broadcasters now sent out their entire program on their one assigned frequency. However, the government still maintained strict control over the use of official government reports and forecasts. The handful of stations which had no entertainment offerings, and thus were licenced only for 485 meters, were moved to 360 meters.
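The channel arithmetic of the new plan can be checked by simple enumeration. A quick sketch, assuming plain 10 khz steps as described above; the held-open ranges are the temporary gaps just mentioned:

```python
# May 15, 1923 plan: channels fall on 10 khz steps.
class_b = list(range(550, 1041, 10))   # 550 through 1040 khz
class_a = list(range(1050, 1351, 10))  # 1050 through 1350 khz
print(len(class_b), "Class B and", len(class_a), "Class A frequencies")  # 50 and 31

# Initially withheld from Class B assignment:
wobble_room = [f for f in class_b if 810 <= f <= 860]  # room for drifting 833 khz Class C's
ship_guard = [f for f in class_b if 980 <= f <= 1040]  # protecting 1000 khz ship traffic
print("held open at first:", wobble_room + ship_guard)
```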
The Commerce Department made a special effort to assign the showcase Class B frequencies equitably. The United States was divided into five zones, and each zone was assigned at least ten Class B frequencies. Because of the relatively low powers then in use, Zones 1 and 5, on opposite coasts, were far enough apart to permit simultaneous use without nighttime heterodyning interference. However, all the other zones required exclusive use of their frequencies to avoid heterodyning problems. Below is a review of the fifty Class B frequencies and their zone assignments (frequency in khz, followed by zone number), as initially announced by the Bureau of Navigation:

550-3    630-4    710-5    790-1    870-2    950-3    1030-4
560-5    640-1,5  720-2    800-3    880-4    960-5    1040-1
570-4    650-3    730-4    810-5    890-1    970-2
580-2    660-1,5  740-1    820-2    900-3    980-4
590-1,5  670-2    750-3    830-4    910-5    990-1
600-3    680-4    760-1,5  840-1    920-2    1000-3
610-1,5  690-1    770-2    850-3    930-4    1010-5
620-2    700-3    780-4    860-5    940-1    1020-2

Within each zone, frequencies were assigned for use by specific localities. Commerce was careful to state that frequencies were allocated to jurisdictions, not to individual stations. But they obviously had taken a close look at the 400 meter roster when deciding the initial allocations. One standard was that there be a minimum 50 khz separation between stations in a given locality. This was viewed as the smallest spacing that an average radio could discriminate when near two stations. There was also a minimum 20 khz spacing within zones.

The final step was to assign stations to the new frequencies. Since there were more frequency assignments than qualified stations, some Class B frequencies were reserved for later use within specific zones. In some of the more congested cities frequencies were shared by two or three stations. Below is a review of the initial May 15th Class B allocation, plus the stations that were assigned to them by the end of July, 1923. Seventy-seven years later many of these stations are among the most prominent in the nation. Others, with owners who couldn't afford the expense, later became lesser stations or were deleted altogether. In fact, three stations, WDT (Ship Owners Radio Service), WGM (Atlanta Constitution) and KFDB (Mercantile Trust Company), would be deleted before the end of 1923. Amazingly, given all the changes in the succeeding seven decades, three stations have continuously stayed on the frequencies they received under the May 15, 1923 plan: WMAQ-670 Chicago (now WSCR), KFI-640 Los Angeles, and KSD-550 Saint Louis (now KTRS).
Allocations announced for May 15, 1923, with station assignments as of July 31, 1923 ("---" indicates no station had yet been assigned):

| Zone | Location | Freq. (khz) | Station assignments as of July 31, 1923 |
|------|----------|-------------|------------------------------------------|
| 1 | Springfield/Wellesley Hills, MA | 890 | WBZ Springfield, MA |
| 1 | Schenectady/Troy, NY | 790 | WGY Schenectady, NY & WHAZ Troy, NY |
| 1 | New York, NY/Newark, NJ | 660 | WJZ Newark, NJ |
| 1 | New York, NY/Newark, NJ | 610 | WBAY/WEAF New York, NY |
| 1 | New York, NY/Newark, NJ | 740 | WJY/WOR New York & WDT Stapleton, NY |
| 1 | Philadelphia, PA | 590 | WOO/WIP Philadelphia, PA |
| 1 | Philadelphia, PA | 760 | WFI/WDAR Philadelphia, PA |
| 1 | Washington, DC | 690 | NAA Arlington, VA |
| 1 | Reserved | 640 | WRC/WCAP Washington, DC |
| 1 | Reserved: 840, 940, 990, 1040 | | |
| 2 | Pittsburgh, PA | 920 | KDKA East Pittsburgh, PA |
| 2 | Chicago, IL | 670 | WMAQ/WJAZ Chicago, IL |
| 2 | Davenport/Des Moines, IA | 620 | WOC Davenport, IA |
| 2 | Detroit/Dearborn, MI | 580 | WWJ/WCX Detroit, MI |
| 2 | Cleveland/Toledo, OH | 770 | WBAV Columbus, OH & WJAX Cleveland, OH |
| 2 | Cincinnati, OH | 970 | WLW/WSAI Cincinnati, OH |
| 2 | Madison, WI/Minneapolis, MN | 720 | WLAG Minneapolis, MN |
| 2 | Reserved | 870 | KYW Chicago/WCBD Zion, IL |
| 2 | Reserved: 820, 1020 | | |
| 3 | Atlanta, GA | 700 | WSB/WGM Atlanta, GA |
| 3 | Louisville, KY | 750 | WHAS Louisville, KY |
| 3 | Memphis, TN | 600 | WMC Memphis, TN |
| 3 | Saint Louis, MO | 550 | KSD Saint Louis, MO |
| 3 | Reserved | 650 | WCAE Pittsburgh, PA |
| 3 | Reserved: 800, 850, 900, 950, 1000 | | |
| 4 | Lincoln, NE | 880 | --- |
| 4 | Kansas City, MO | 730 | WDAF/WHB Kansas City, MO |
| 4 | Jefferson City, MO | 680 | WOS Jefferson City, MO |
| 4 | Dallas/Fort Worth, TX | 630 | WFAA Dallas/WBAP Fort Worth |
| 4 | San Antonio, TX | 780 | WOAI San Antonio, TX |
| 4 | Denver, CO | 930 | --- |
| 4 | Omaha, NE | 570 | WOAW Omaha, NE |
| 4 | Reserved: 830, 980, 1030 | | |
| 5 | Seattle, WA | 610 | KGW Portland, OR |
| 5 | Portland, OR | 660 | KDZE Seattle, WA |
| 5 | Salt Lake City, UT | 960 | --- |
| 5 | San Francisco, CA | 590 | KFDB San Francisco, CA |
| 5 | San Francisco, CA | 710 | KPO San Francisco, CA |
| 5 | Los Angeles, CA | 640 | KFI Los Angeles, CA |
| 5 | Los Angeles, CA | 760 | KHJ Los Angeles, CA |
| 5 | San Diego, CA | 560 | --- |
| 5 | Reserved: 810, 860, 910, 1010 | | |
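Both spacing standards -- at least 50 khz between stations serving the same locality, and at least 20 khz between assignments within a zone -- can be verified mechanically against the table above. A sketch seeded with the Zone 1 entries (the data and thresholds come from the text; the helper function is my own):

```python
from itertools import combinations

def min_spacing_khz(freqs):
    """Smallest gap, in khz, between any two frequencies in a group."""
    return min(abs(a - b) for a, b in combinations(freqs, 2))

# Zone 1 assignments from the May 15, 1923 table, including reserved frequencies.
new_york_area = [660, 610, 740]  # one locality: the 50 khz local minimum applies
zone_1 = [890, 790, 660, 610, 740, 590, 760, 690, 640, 840, 940, 990, 1040]

assert min_spacing_khz(new_york_area) >= 50  # 610 and 660 sit exactly 50 khz apart
assert min_spacing_khz(zone_1) >= 20         # 590 and 610 sit exactly 20 khz apart
print("Zone 1 satisfies both the 50 khz local and 20 khz zone spacing rules")
```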
The Commerce Department made a tentative step in establishing frequency control standards by "suggesting" that stations stay within 2 khz of their assigned frequencies. This did nothing to reduce heterodyning interference between stations on the same frequency, but at least it would keep stations from drifting into neighboring frequencies. In spite of the suggestion, there would continue to be reports of stations straying far beyond the 2 khz standard.

Although stations were now being assigned in neat 10 khz frequency steps, the public generally clung to the older, and less precise, wavelength nomenclature, usually stated to the nearest meter or tenth of a meter for the corresponding frequency. It would be more than a decade before wavelength references completely disappeared in the United States, and many in Europe (where AM stations are now allocated in 9 khz steps) still use the older terminology.

Continued Expansion and the Third National Radio Conference

In the year following the May 15, 1923 reallocation the number of Class C stations on 360 meters declined, so the gap of unused Class B frequencies around 833 khz also shrank. Also, with the reduction, and then elimination, of ship transmissions on 300 meters, Class B stations were assigned to the frequencies around 1000 khz. However, problems continued, including a shortage of Class A frequencies. Hoover announced a third industry conference, beginning October 6, 1924. One of the conference recommendations was to increase the number of Class A frequencies.

Under the May 15th allocation amateurs had gotten a little more breathing room, as Special Amateurs were permitted to move below the traditional 1500 khz (200 meters) to 1350 khz (222 meters). However, this expansion would prove short-lived in the face of broadcasting's appetite for additional frequencies. In July, 1924 the lower limit for amateurs had been shifted back to 1500 khz. Then, following the recommendations of the Third Conference, starting in November, 1924 Class A broadcast stations were assigned to fifteen additional frequencies from 1360 to 1500. Not that very many stations wanted to go there. Along with low powers, poor groundwave coverage, and interference from the nearby amateurs, these stations were faced with the fact that many radios didn't tune this high.

Following the conference Class B stations were allowed to experiment with powers of up to 5 kilowatts, to be attained in 500 watt steps. (RCA's proposal that stations be allowed to use up to 50 kilowatts was met with shock and a promise to study the matter further.)

By April, 1925 the elimination of the Class C stations on 360 meters was essentially complete, and the Class B stations filled in the freed-up frequencies. Thus, from the initial footholds at 360 and 485 meters, broadcasting had expanded in both directions, and now occupied all but the first 50 khz of the 200 to 600 meter band. (Broadcasting's low-end expansion ended at 550 khz due to the need to protect 500 khz -- 600 meters -- from interference. 500 khz was -- and remained so until December 31, 1999 -- an international distress frequency.) The three Class A frequencies adjacent to the Class B band had been converted to Class B use, so the broadcast frequencies now consisted of 53 Class B (550 to 1070) plus 43 Class A (1080 to 1500), for a total of 96.

Class B Complexities

Throughout the mid-twenties there was a tremendous turnover of stations. However, whenever one disappeared another popped up to take its place. The overall number of stations fluctuated between 500 and 600. However, powers steadily increased, along with the resulting interference, especially at night.

A major problem developed because of a lack of Class B frequencies. Although Class B radio stations were expensive to operate (and generally there was no direct financial return, as commercial sponsorship was only just beginning to appear) the prestige was great enough that more and more companies wanted one. The crush was exacerbated when the United States, realizing that an entire country was located to its north, informally set aside six Class B frequencies -- 690, 730, 840, 910, 1010, 1030 -- for exclusive Canadian use. (Recognition that other countries, such as Mexico, also existed would not come until 1940 with the NARBA agreements.) As a partial solution, some Class B stations were placed on Class A frequencies, but this didn't do much to satisfy their owners. In 1925 the Commerce Department had experimented with shrinking the spacing between the Class B frequencies from 10 khz to 7.5 khz, but this proved unsuccessful.

Finally, in October, 1925, the Commerce Department announced it would generally cease licencing new stations, because the broadcast frequencies were filled beyond capacity. Secretary Hoover knew the embargo was on shaky legal ground. For years he had pleaded with Congress to pass a new law, giving him clearer control of radio. However, the two branches of Congress had never come to an agreement, so radio remained under the increasingly creaky control of the 1912 Act.
Secretary Hoover knew the embargo was on shaky legal ground. For years he had pleaded with Congress to pass a new law, giving him clearer control of radio. However, the two branches of Congress had never come to an agreement, so radio remained under the increasingly creaky control of the 1912 Act.

Moreover, station licencing was not the only area of legal challenge. The Zenith Radio Corporation operated WJAZ, a Class B station in Chicago, which it thought of as a showcase for the firm. Unfortunately, due to the Class B frequency shortage the station was assigned a grand total of two hours per week of air time, on 930 khz. Zenith found its showcase wasn't very visible. So it moved to 910 khz, which had been one of the exclusive Canadian frequencies, and challenged Secretary Hoover to do something about it. Ironically, Zenith had no intention of diminishing Hoover's overall regulatory powers. It only claimed it had found a small loophole which permitted frequency shifts for a handful of stations which, like WJAZ, had been granted "Developmental" licences. However, earlier challenges had not been favorable to the Commerce Department, and the effects of the WJAZ case instead would be sweeping.

The Commerce Department challenged Zenith's move, and the case ended up in Federal Court in Chicago. In his April 16, 1926 decision, Judge James H. Wilkerson sided with WJAZ on its right to choose its own frequency. However, Wilkerson's ruling mainly addressed the legality of WJAZ's frequency shift, and did not delineate exactly what Hoover could and could not do. The Commerce Department debated whether it should appeal the WJAZ ruling. In the meantime, everyone looked to Congress to pass a new law to stabilize the situation.

Congress promptly dropped the ball. Although both branches passed new laws, they were significantly different, and Congress adjourned in early July before the differences could be worked out in committee. Congress would return in session on December 8th, after the elections. Until then Hoover was on his own.

Hoover's next step was to ask Acting Attorney General William J. Donovan for advice on what powers he held under the 1912 Act. Donovan had a difficult task in trying to make sense of the Act and how it related to broadcasting. The bill's language was obscure at times, and some important sections were widely removed from each other, so that their exact relationship was unclear. The Act was oriented toward regulating two-way communication, and allowed stations a great degree of flexibility.

A key problem was in frequency assignments. The Act stated that stations were to be assigned a "normal wavelength", but they also were allowed to use additional wavelengths of their own choosing, as long as they fell outside of the 600 to 1600 meter government band. In fact, in keeping with standard practice, the first few broadcast licences were actually issued stating that the station's "normal" wavelength was 600 meters -- not that any broadcast station actually ever used this wavelength. Thus, their broadcast authorizations for 360 and 485 meters fell under the category of "additional" wavelengths. These early authorizations, following guidelines set by the Act, also required the stations be capable of communicating with ships on 300 meters, when needed. Not that it ever was.

The Act was also ambiguous about whether the Commerce Department could withhold licences from qualified applicants, or could regulate powers and hours of operation outside of the 600 to 1600 meter government band. Given the ambiguity of the Act, opinions ranged between the extremes that Hoover either had complete authority to regulate broadcasting, or virtually none at all. Donovan released an opinion on July 8, 1926.
It wasn't legally binding, but it did give the Commerce Department an idea whether it should pursue an appeal of the WJAZ case. As it turned out, Donovan's opinion matched Hoover's worst fears. In Donovan's opinion, except for the government band Hoover not only had to issue licences to all upon request, but he also had no right to restrict frequencies used, hours of operation, or powers. Broadcasting had become a free-for-all. The only thing Hoover could do was ask stations for restraint and try to keep track of things until a new law was passed. Just before the breakdown of regulation Canada had complained that its six exclusive Class B frequencies were not enough. In the "wave jumping" by U.S. stations that followed, it would watch this number drop to zero.

A Little Bit of Anarchy

Because of the new state of affairs, the station list appearing in the December 31, 1926 issue of the Radio Service Bulletin included the following rueful disclaimer: "The power and wavelengths given in this table were compiled from applications for licenses furnished the department by the owners of the stations. Since the department does not make assignments in either respect, this list is not necessarily in conformity with wavelengths or power actually used".

Although the first few months saw relatively few changes, eventually a torrent of new stations and frequency changes developed. In an eight-month period around 200 new stations flooded the airwaves. Many stations jumped from Class A to Class B frequencies. Some broke new ground, such as WOBB in Chicago, which was reported to be on 540 khz, a step below the former 550 khz lower boundary of the broadcast frequencies. WHAP in New York City decided there was just enough room between WJZ-660 and WOR-740 for another Class B station, and it settled on the unorthodox new frequency of 697 khz. Another station that headed for a split frequency was KFKB in Milford, Kansas, which began operating on 695 khz. KFKB was owned by J. R. Brinkley, M.D., the infamous "Goat Gland" doctor. His later stations, on the other side of the Mexican border, would continue this affection for split frequencies.

New York and Chicago were worst hit by the increase in stations and congestion, but the effects were felt nationwide, especially with an increase in nighttime heterodynes. In the West, one group of stations staged a novel demonstration in support of the restoration of government controls. According to the June, 1927 Radio Broadcast: "Between the hours of eight and nine February 11, KFI, and ten other Pacific Coast stations presented what they termed an Interference Hour. The stations were paired off and so changed their wavelengths as to interfere seriously with one another. After an hour of squeals, howls, indistinguishable announcements, and distorted music, the stipulated wavelengths were resumed, following which pleas were made from each of the stations in support of the radio bill before the senate".

In some cases it's hard to determine exactly what frequency a station was operating at, because many were still reporting station wavelengths rather than frequencies. Thus, when KEX in Portland announced it was operating on 447 meters, it probably was specifying the nearest whole-meter equivalent for 670 khz. However, the Commerce Department dutifully divided 447 into 299,820, and reported that KEX was now operating at exactly 670.7 kilohertz (the arithmetic is reproduced in the sketch below).
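The meters-versus-kilohertz bookkeeping is easy to reproduce: the conversion of the day divided 299,820 (the speed of light in kilometers per second, as then rounded) by the wavelength in meters. The KEX case, in an illustrative C sketch that is not part of the original text:

    #include <stdio.h>

    #define SPEED_OF_LIGHT_KM_S 299820.0  /* the 1920s conversion constant */

    int main(void) {
        /* What the Commerce Department computed from the announced wavelength. */
        printf("447 meters -> %.1f khz\n", SPEED_OF_LIGHT_KM_S / 447.0);    /* 670.7 */
        /* What KEX probably meant: 670 khz, quoted to the nearest whole meter. */
        printf("670 khz    -> %.1f meters\n", SPEED_OF_LIGHT_KM_S / 670.0); /* 447.5 */
        return 0;
    }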
Stations turned to the courts to clear things up. Eventually the courts would have stabilized the situation, as a series of rulings generally gave established stations priority and relief from interference from newcomers. However, these rulings were getting dangerously close to giving stations property rights to their radio frequencies, something the government desperately wanted to avoid.

Congress reconvened in December, and work slowly began on the radio crisis. Although all agreed that something needed to be done, a controversy broke out over whether to strengthen the powers of the Commerce Department or to form an independent commission, modeled after the Interstate Commerce Commission. Finally, on February 23, 1927, President Coolidge signed the newly passed Radio Act of 1927. A compromise, it set up a temporary independent Federal Radio Commission, which would have one year to settle the radio mess. After that most of its powers would revert to the Commerce Department. Most of the provisions of the 1927 law were based on the recommendations made by the various Radio Conferences beginning in 1922.

The United States was divided into five regions, and five commissioners -- one to represent each region -- were appointed. Two promptly died. (Credo Harris of WHAS turned down the offer of a Commission appointment). It was a high-pressure assignment -- radio broadcasting, although only six years old, was seen as a national resource. With the chaos, radio sales had declined, and there was a sense that radio was being wasted. The whole country was watching.

Initial FRC Work

The FRC had to act carefully -- every decision was a potential court case. There were a total of 732 broadcasting stations when it took over, far more than could comfortably fit into the broadcast frequencies. The Commission was given the power to delete stations not found to be in the public "Convenience, Interest, or Necessity", but that didn't give it the right to arbitrarily delete stations in bulk. However, it did halt new station grants, except in a few underserved regions of the country.

The Radio Act of 1927 explicitly protected stations from deletion for 60 days following the enactment of the new legislation. When this ban expired on April 25, 1927, the FRC made no move to start culling the broadcasting ranks. Instead, all existing stations were given "temporary" operating extensions. A series of 30 to 60 day extensions followed, eventually dragging out for more than a year. Ultimately stations would be required to formally apply for licences, which would give the FRC a chance to winnow the ones that didn't meet standards. But first the standards had to be developed. Until then, it was hoped that time would see an attrition in the number of stations.

Meanwhile, information was collected on the stations, and various technical tests and studies were conducted in order to get an idea of what could be done with all of them. Although it was strongly hinted that the broadcast band would be extended by adding 50 broadcast frequencies from 1510 to 2000 khz, in the end the frequencies assigned to broadcasting remained unchanged. (The International Radio Convention of 1927, which met in Washington, DC, specifically set aside 550 to 1500 khz for broadcasting purposes). Among the first actions the FRC did take was to clear out the Canadian frequencies and get all stations back to 10 khz frequencies from 550 to 1500. This produced something roughly like the old Class A and Class B bands, but with a lot of shoehorning in of extra stations.
Although they had done nothing illegal, most "wave jumpers" and stations that had popped up in the preceding few months did relatively poorly under the reassignments. Every few weeks or months new refinements were announced, and stations were shuffled to new spots on the radio dial. The commissioners made visits to the regions they represented, to consult with station owners and evaluate the situation. On their return, stations within the region were juggled once again. WEBC in Superior, WI was allowed to increase its power from 250 to 1000 watts "in order to make certain that President Coolidge would have good radio reception at his summer home".

Although the initial standards were fairly generous, the overall trend was to reduce interference by reducing the number of stations broadcasting simultaneously. This meant an increase in the number of stations forced to share time, or limited to daytime-only operation. The Commission made a special effort to clear the key frequencies of 600 to 1000 khz of "heterodyne and other interference", in order to give the listening public an island of better reception while the band was being reconstructed. The FRC applied pressure to get recalcitrant stations to cooperate, proclaiming: "Broadcasters who are parties to placing annoying interference, instead of programs, on their respective channels are not looked upon as serving public interest, convenience, or necessity. Instead of creating good will for themselves certain radio stations have become extremely unpopular due either to blanketing or heterodyning interference, complaining letters indicate". It added: "Regarding divisions of time requested, the commission feels a distinct service is rendered to any station which is encouraged to broadcast fewer hours under clear reception conditions rather than full time with its signals at most points utterly valueless".

However, the clearing effort met with only limited success. The FRC set a new standard that stations would have to stay within 0.5 khz of their assigned frequencies. But this was still about twenty times the limit needed to avoid heterodyning other stations on the same frequency. And even this liberal standard proved difficult for most stations to meet. The key objective in the evolving FRC reallocation came to be the reduction of heterodyne interference, especially during the prime nighttime hours.

It became clear the FRC was not going to finish its task in the year allocated by the 1927 Act. On March 28, 1928 Congress approved a one-year extension for the FRC, until March 16, 1929. Many wondered why the process was taking so long. Radio Broadcast informed its readers that, contrary to popular belief, "The Commission is not incompetent; it is impotent".

The FRC did move aggressively against one class of stations that was a particular annoyance. The Department of Commerce had licenced "portable" stations, usually to transmitter manufacturers, who could move the stations from place to place for demonstrations. The FRC decided it wasn't required to regulate moving targets, so in April, 1927 it restricted portable licences to two frequencies -- 1470 and 1490 -- and announced that eventually all would be eliminated. As of early 1928 there were still about a dozen portable stations, but all were gone by July 1, 1928. Not all were deleted, however. A few were allowed to become permanent stations in underserved areas of the country.
In March and April, 1928 the FRC, along with industry engineers, worked to finalize the new broadcast band structure, choosing from among a number of plans submitted by various public and industry representatives. However, in addition to technical concerns, there was also a political one. The legislation continuing the FRC included a clause that came to be known as the Davis Amendment, which required that station allocations be made equitably among the states. The commissioners were divided over whether the provisions of the Davis Amendment could be instituted over time or had to be implemented immediately.

Finally the FRC started to pull everything together. All stations had been required to formally apply for licences by January 15, 1928. The FRC reviewed the applications, identifying stations which appeared to fall short of meeting the new Convenience, Interest, or Necessity standard. On May 11, 1928 the FRC issued General Order 32. It targeted 164 stations that the FRC felt had failed to meet the new public standard. Hearings would be held July 9, 1928, with the stations to be deleted on August 1st if they were unable to sway the Commission. Most of the stations contested their fate, and a majority survived, with the FRC actually complimenting the work of some of the challenged stations. Figures vary, but between fifty and ninety stations eventually disappeared -- many by default or by surrendering their licences rather than by formal deletion -- and many of the survivors had their powers and hours of operation reduced.

Some of the deleted stations had been found to be no longer operational. Others had served as little more than platforms for their owners, used to fill the airwaves with personal opinions and attacks. Perhaps the oddest case was KFQA, licenced to The Principia in Saint Louis, Missouri. The FRC reported that "During the hearing, held on July 9, the representative of the station urged that all the applicant wanted was to maintain a licence from the commission but did not care about the transmitter". In other words, they wanted a licence, but didn't want to actually operate a station, preferring to broadcast through KWK's facilities. In deleting KFQA, the FRC noted: "This case is a good illustration for a direct application of the principle previously announced by the commission that it is not in the public interest, convenience, nor necessity to continue to licence a station which is not putting its transmitter to any use". (A year later KFQA got its wish, and it became a special callsign for KMOX when broadcasting Principia programming).

New Broadcasting Structure

With the broadcasting ranks now reduced to about 585 stations, the FRC finally announced the long-awaited restructuring of the broadcast band. On August 30, 1928, General Order 40 described the new setup. It had taken more than a year for the FRC to come up with a definitive broadcasting reorganization, which was scheduled to take effect at 3:00 AM on November 11th. The Commission itself reported significant disagreement among the commissioners, and the best the final plan could muster was a four-to-one vote in its favor. The holdout was Commissioner Ira E. Robinson, who reportedly felt the commission was acting rashly and had favored high-powered stations to the detriment of the low-powered ones. Nor could Robinson be called a "good loser".
After the new plan was announced, he released the following statement: "Having opposed and voted against the plan and the allocations made thereunder, I deem it unethical and improper to take part in hearings for the modification of same".

Using legal language best described as "tortured", it was formally announced "That a band of frequencies extending from 550 to 1500 kilocycles, both inclusive, be, and the same is hereby, assigned to and for the use of broadcasting stations, said band of frequencies being hereinafter referred to as the broadcast band".

The new plan organized the broadcast band in a more complicated manner than the previous Class B/Class A setup. Most noticeably, instead of two adjacent groupings, blocks of high and low power frequencies were placed at various locations within the band. Also, stations were now divided into three categories, which in time would become known as "Clear", "Regional", and "Local". Six of the 96 frequencies were off-limits for United States stations, as 690, 730, 840, 910, 960, 1030 were set aside exclusively for Canadian use.

The United States was divided into five zones, and forty frequencies -- eight per zone -- from within the range of 640 through 1190 khz were assigned for the primary use of individual zones. These "Clear Channel" frequencies were the successors to the old Class B authorizations, and stations on them would eventually have powers up to 50 kilowatts. Forty regional frequencies were allocated, for stations using a maximum of 1000 watts, to be used concurrently in two to five zones. These were the successors to the old Class A band. Four additional regional frequencies were permitted to use a maximum of 5 kilowatts, as an incentive to get stations to accept the unpopular high-end frequencies of 1460 to 1490. (These frequencies would eventually be converted to Clear channels.) The final six frequencies effectively marked the reappearance of the old Class C 360-meter wavelength. These were to be used by "local" stations nationwide, with a 100 watt power limit.

The overall structure of the November 11th reallocation has been modified over the years, but today's AM band strongly reflects this historic restructuring. Following is the frequency setup that took effect on November 11, 1928, from 550 to 1500 khz. Numbers in parentheses are the zones assigned dominant use of individual Clear Channel frequencies (a short illustrative sketch of this table follows the list):

550 - 630: REGIONAL
640 (5), 650 (3), 660 (1), 670 (4), 680 (5): CLEAR
690: CANADA (exclusive)
700 (2), 710 (1), 720 (4): CLEAR
730: CANADA (exclusive)
740 (3), 750 (2), 760 (1), 770 (4): CLEAR
780: REGIONAL
790 (5), 800 (3), 810 (4), 820 (2), 830 (5): CLEAR
840: CANADA (exclusive)
850 (3), 860 (1), 870 (4): CLEAR
880 - 900: REGIONAL
910: CANADA (exclusive)
920 - 950: REGIONAL
960: CANADA (exclusive)
970 (5), 980 (2), 990 (1), 1000 (4): CLEAR
1010: REGIONAL
1020 (2): CLEAR
1030: CANADA (exclusive)
1040 (3), 1050 (5), 1060 (1), 1070 (2), 1080 (3), 1090 (4), 1100 (1), 1110 (2): CLEAR
1120: REGIONAL
1130 (5), 1140 (3), 1150 (1), 1160 (4), 1170 (2), 1180 (5), 1190 (3): CLEAR
1200 - 1210: LOCAL
1220 - 1300: REGIONAL
1310: LOCAL
1320 - 1360: REGIONAL
1370: LOCAL
1380 - 1410: REGIONAL
1420: LOCAL
1430 - 1450: REGIONAL
1460 - 1490: REGIONAL (high power)
1500: LOCAL
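Because the plan is a fixed table, it can be captured in a handful of lines. The following C sketch is purely illustrative -- the ranges are simply transcribed from the list above -- and classifies a frequency by category:

    #include <stdio.h>

    /* The November 11, 1928 band plan as a lookup table
       (inclusive khz ranges, 10 khz channel spacing). */
    struct band { int lo, hi; const char *category; };

    static const struct band plan[] = {
        { 550,  630, "Regional"}, { 640,  680, "Clear"},
        { 690,  690, "Canada"},   { 700,  720, "Clear"},
        { 730,  730, "Canada"},   { 740,  770, "Clear"},
        { 780,  780, "Regional"}, { 790,  830, "Clear"},
        { 840,  840, "Canada"},   { 850,  870, "Clear"},
        { 880,  900, "Regional"}, { 910,  910, "Canada"},
        { 920,  950, "Regional"}, { 960,  960, "Canada"},
        { 970, 1000, "Clear"},    {1010, 1010, "Regional"},
        {1020, 1020, "Clear"},    {1030, 1030, "Canada"},
        {1040, 1110, "Clear"},    {1120, 1120, "Regional"},
        {1130, 1190, "Clear"},    {1200, 1210, "Local"},
        {1220, 1300, "Regional"}, {1310, 1310, "Local"},
        {1320, 1360, "Regional"}, {1370, 1370, "Local"},
        {1380, 1410, "Regional"}, {1420, 1420, "Local"},
        {1430, 1450, "Regional"}, {1460, 1490, "Regional (high power)"},
        {1500, 1500, "Local"},
    };

    static const char *classify(int khz) {
        for (int i = 0; i < (int)(sizeof plan / sizeof plan[0]); i++)
            if (khz >= plan[i].lo && khz <= plan[i].hi)
                return plan[i].category;
        return "outside the broadcast band";
    }

    int main(void) {
        /* Spot checks, plus a count of the channels from 550 to 1500. */
        printf("660 khz:  %s\n", classify(660));   /* Clear */
        printf("910 khz:  %s\n", classify(910));   /* Canada */
        printf("1200 khz: %s\n", classify(1200));  /* Local */
        int total = 0;
        for (int f = 550; f <= 1500; f += 10) total++;
        printf("channels from 550 to 1500: %d\n", total);  /* 96 */
        return 0;
    }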
Radio Broadcast cautiously hailed the new plan. It noted that "We hesitate to praise any constructive step announced by the Commission because, up to this time, it has always reversed itself before promised reforms have been put into operation. It proposed to eliminate all stations persistently wandering from their channels, but backwatered before the echo of its brave statements had died out. It called a host of stations before it to prove they were operating in the public interest, necessity and convenience, and with great fanfare to the effect that they would be weeded out, but the actual result of the hearings was negligible. From past evidence, we cannot avoid fearing a complete reversal of form and a repudiation of the meritorious broadcast allocation plan".

In spite of the fears of Radio Broadcast, the FRC moved forward. Its next hurdle was to assign stations to frequencies for their November 11th debut. There were still signs of tentativeness, as assignments were announced September 10th but then modified on three occasions in October. The Commission also made an unsuccessful effort to rationalize network operations. Chains had started to gain prominence, and the Commission was worried all its hard work would be devalued if all the strongest stations ended up carrying the same programs. However, the FRC eventually gave up its effort to reduce network broadcast duplication, and announced that instead the issue would ultimately be part of a comprehensive review of chain programming.

Effects of the November 11, 1928 Allocation

By all accounts the November 11, 1928 allocation was successful in greatly reducing interference. And the FRC was proud of how few stations it had had to delete along the way. However, many stations were unhappy with the new allocation, and some headed to the courts for relief. Most were unsuccessful.

Because of its emphasis on reducing heterodyne interference, the Commission had adopted a very conservative approach, assigning low powers and limited frequency slots. And although they hadn't been deleted, scores of stations had in effect been given death sentences. On the regional frequencies the FRC limited the number of stations operating concurrently to two to five nationwide. And in major population areas the states were over-represented under the guidelines of the Davis Amendment. Thus, in major metropolitan areas, particularly New York and Chicago, the FRC in some cases required four, and occasionally five, stations to share the same frequency. It was impossible for a station to survive economically on a ration of a quarter or a fifth of a broadcast day, especially with the coming of the Depression in late 1929.

Fierce legal battles broke out, as stations used the FRC and the courts to wrest broadcast hours from -- or kill off -- the stations they were partnered with. Some of these legal battles lasted years, gained legendary status within the broadcast industry, and were credited with financing the college educations of numerous legal counsels' children. (Ironically, many educational stations were paired with commercial stations, which often led to the demise of the educational stations. This was one of the main reasons educational channels were set aside when the FM band was created.)

The final timesharing agreement in the New York City area wasn't consolidated until 1985, when WNYM (now WWRV) bought out WPOW to gain full-time status on 1330 kilohertz. The final timesharing arrangement dating back to November 11, 1928 -- WEDC/WCRW/WSBC on 1240 khz in Chicago, IL -- lasted eleven more years: the owners of WSBC purchased WCRW, which stopped broadcasting in July, 1996, then bought out WEDC, which made its final broadcast June 12, 1997, ending 68½ years of time-sharing.
Consequences and Conclusions

The November 11, 1928 reallocation was a major achievement, as government regulators finally regained control over the broadcast band, lost a year and a half earlier. But there was still plenty of work to be done. The Commission had to refine the equalization of station grants, as required by the Davis Amendment. The early thirties saw the development of "vertical" antennas, which replaced the old "flattop" antennas. The new antennas had better groundwave coverage, at the expense of reduced nighttime skywave service. They also could be set up as directional antennas, which, combined with better frequency control that finally eliminated audible heterodyning, allowed closer placement of stations with less interference.

Despite the FRC's "temporary" status, and court challenges by disgruntled stations over its constitutionality, the Radio Commission survived until 1934, when it was replaced by the Federal Communications Commission. (In contrast, Radio Broadcast expired in 1930). In the early forties the North American Regional Broadcasting Agreements extended the broadcast band to 1600 khz. However, the overall November 11, 1928 structure remained intact. The lower frequencies were unaffected, and in most cases where stations were moved to a new frequency, all the stations on a given frequency moved to a new dial position as a group.

After World War II there was an easing of interference standards, and thousands of stations were added to the AM band. Still, even today, on many frequencies there is a core group of pioneer stations that have shared a common frequency since 1928. One change has been an increase in power limits -- to 50,000 watts on the old Clear and Regional frequencies (now known as Class A and B respectively), and from 100 to 1000 watts on the Local frequencies, now known as Class C.

It's an overused phrase, but the best description of the November 11, 1928 reallocation is that it "brought order out of chaos". And nearly seventy years later this historic work still provides the underpinning for the AM broadcast band. From its tentative beginnings on 360 and 485 meters, and through its descent into chaos, broadcasting had finally been given a stable and secure foundation.

|Mid-1921      ||Ship || || ||Relay || || ||Ship || ||Ship ||Amateur|| |
|Dec. 1, 1921  ||Ship ||M/W ||Relay ||Ent. ||Ship ||Ship ||Amateur|
|Late Sep 1922 ||Ship ||M/W ||Relay ||B ||A ||Ship ||Ship ||Amateur|
|May 15, 1923  ||Ship || ||===Class B==== ||=C= ||Class B ||Ship ||=====Class A===== ||Amateur|
|Nov 1924      ||Ship ||===Class B==== ||C ||==Class B== ||==========Class A========== ||Amateur|
|April 1925    ||Ship ||===============Class B=============== ||======Class A====== ||Amateur|
|7/1926-3/1927 ||Ship ||Anarchy ||Amateur|
|Nov 11, 1928  ||Ship || ||FRC Reorganized Band: '''''''''|||||[|||[||||'|||||[|||''[''''[||||'|[||||||||'|||||||**'''''''''*'''''*''''*'''!!!!* || |
|Kilohertz =>  ||500 ||540 ||550 ||619 ||666 ||750 ||833 ||870 ||990 ||1000 ||1050 ||1060 ||1070 ||1350 ||1365 ||1500 ||>1500|
|Meters ==>    ||600 || ||485 ||450 ||400 ||360 || ||300 || ||220 ||200 ||<200|

The above chart is a general overview of the evolution of the broadcast band, and of selected wavelength and frequency allocations from 1921 to 1928. Wavelengths are listed horizontally, with the kilohertz equivalents alongside. Individual wavelength assignments are marked with a single entry, explained below. Bands of frequencies are marked with double lines.
The entries include:

M/W: "Market & Weather" (485 meters/619 khz) -- broadcasting wavelength used from December, 1921 to May 15, 1923 for official government reports, including market reports and weather forecasts. Discontinued after the May 15, 1923 expansion.

"Ent.", A, C: Entertainment wavelength (360 meters/833 khz) -- broadcasting wavelength used for entertainment offerings beginning in September, 1921 and formally assigned December 1, 1921. In September, 1922, with the creation of the "Class B" entertainment wavelength, 360 meters became known as the "Class A" entertainment wavelength. On May 15, 1923, with the creation of "Class A" and "Class B" frequency bands, it became known as the "Class C" wavelength. It quietly disappeared in mid-1925 when the final holdouts were moved to Class A and B frequencies.

B: Entertainment wavelength (400 meters/750 khz) -- created late September, 1922 for better quality stations. Expanded to a band of Class B frequencies on May 15, 1923.

Ship: International ship wavelengths. 300 meters and 220 meters were quickly absorbed by the expanding broadcast band, while 600 meters (500 khz) was an international distress frequency, and thus a barrier to any expansion of the AM band to lower frequencies.

Relay: Special Amateur Relay (450 meters/666 khz) -- one of the wavelengths set aside for relay work by Special Amateurs. Special Amateur work was moved to the 1350 to 1500 khz band in the May 15, 1923 reallocation, and later discontinued altogether.

Amateur: Standard amateur wavelengths.

FRC Reorganization: Graphical representation of the 96 frequency assignments, from 550 to 1500 kilohertz, under the November 11, 1928 plan. The following symbols are used:

' = Regional (40)
| = U.S. Clear (40)
[ = Canadian-only (6)
* = Local (6)
! = High-power Regional (4)

Following are the major sources for this work:

DeSoto, Clinton B. "Two Hundred Meters and Down". The American Radio Relay League, Inc., 1936.
Harris, Credo Fitch. "Microphone Memoirs". The Bobbs-Merrill Company, 1937.
Pejza, Father Jack. "A Beginner's Guide To The Ionosphere". DX Monitor, International Radio Club of America, March 25, 1972.
"Commercial and Government Radio Stations of the United States". Annual list issued as of June 30th for 1920 through 1931 by the Department of Commerce.
QST. Selected issues from 1920 to 1922.
Radio Broadcast. Selected issues from 1922 to 1927.
"Radio Communications Laws of the United States and the International Radiotelegraphic Convention". August 15, 1919 edition. Issued by the Bureau of Navigation, Department of Commerce.
Radio News. Selected issues from 1920 to 1927, especially "The Development of Radiophone Broadcasting" by L. R. Krumm, September, 1922, p. 467.
Radio Service Bulletin. Issued monthly, beginning in January, 1915, by the Bureau of Navigation, Department of Commerce. Continued in various formats until 1952. Included occasional broadcast station lists plus changes in regulations, including FRC General Orders.
"Regulations Governing Radio Communication". September 28, 1912, February 20, 1913, and July 1, 1913 editions. Issued by the Bureau of Navigation, Department of Commerce.
"Report of the Federal Radio Commission". Annual reports, 1927 through 1933.
Artistic Costume Designing was written by Louis Lipson in 1940 and revised in 1941. Mr. Lipson was the founder and director of Lipson's School of Costume Designing in Los Angeles. It is designed as a home study course, similar to many drawing courses that were popular at the time. Not much more information is available about him, but the book proves that he was well versed in costume design: he knew materials, he knew drawing and he knew the tricks of the trade. He was also sympathetic to the aspiring student; here are a few of his notes from the introduction:

"This book is compiled for that purpose: to fulfill the desires of those who are unable to come to school or cannot afford to spend two or three years, or even one year, paying tuition. Also, some have to work for a living, and have no time to spend in travelling many miles to and from school in order to carry out their plans. The Home Study can be of service in leading many successful men and women, as well as helping others to decide what they would like best to do in the field of Costume Designing.

Certain people have a gift for originality, but do not know how to use their ideas by putting them on paper. To have ideas and not be able to materialize them can be likened to a rushing mountain stream; potentialities unharnessed, and therefore of no practical use. Many successful people, doing the work they love, have arrived at that success by developing their latent talents. This book gives each and every boy and girl, man and woman the opportunity to discover and develop hidden potential talent."

These last illustrations are from a different book by Mr. Lipson about fashion illustration.

9 Heads: A Guide to Drawing Fashion
A Manual on Figure Drawing and Fashion Designing

Here is another book available at Amazon.com: Lipson's Textbook of Practical Costume Designing: Create with Confidence.

Other books at Amazon.com:
Fashion Illustration by Fashion Designers
Fashion Illustration for Designers (2nd Edition)
French Fashion Illustrations of the Twenties: 634 Cuts from La Vie Parisienne (Dover Pictorial Archive Series)
Character Costume Figure Drawing: Step-by-Step Drawing Methods for Theatre Costume Designers
Project Runway Fashion and Figure Drawing Set
100 Years of Fashion Illustration
Big Book of Fashion Illustration: A Sourcebook of Contemporary Illustration
Contemporary Fashion Illustration Techniques
Essential Fashion Illustration: Poses (Essential Fashion Illustrations) (New Illustration Series)
Fashion Illustration Today
New Fashion Figure Templates: Over 250 Templates
Jack Richeson Signature Manikin - 12 Inch Wooden Female
A Guide to Better Figure Drawing by Cecile Hardy
Line of Action - Drawing from the Model - Creatively Clothing the Figure
Drawing The Glamour Girl
Everything you need to understand or teach White Noise by Don DeLillo.

White Noise by Don DeLillo, a National Book Award winner, is about Jack Gladney, his wife Babette, and their obsession with their own deaths. They have four children from previous relationships and marriages between the two of them, but they are now one family. They live relatively normal lives until the airborne toxin Nyodene Derivative infects their town and they must be evacuated. Eventually, the Gladneys are allowed to return to their home and attempt to resume their normal lives, but the incident has increased Jack and Babette's obsession with and fear of dying.

White Noise Lesson Plans contain 116 pages of teaching material.
Rhetorical Figures

These examples illustrate the use of a few prominent rhetorical figures to show how such figures create effects we commonly perceive, often without recognizing them.

Parallelism is the use of similar structures in two or more clauses. Abraham Lincoln's Gettysburg Address uses parallelism throughout. For one of many examples in the very short text, Lincoln says, "The world will little note, nor long remember what we say here." The implied full sentence -- "The world will little note [what we say here], nor long remember what we say here" -- becomes more elegant and concise because Lincoln uses parallelism to help his listeners hear the sentence's full meaning in spite of the missing words of the first clause. The words Lincoln eliminates in the quotation above are an example of ellipsis, the omission of a word or short phrase that can be understood in context.

Parallelism is also a common method of producing antithesis, which occurs when contrasting elements are juxtaposed. Returning to the Gettysburg Address, we can find many examples of antithesis, from simple ones such as "The brave men, living and dead" (juxtaposing "living" and "dead") to more subtle ones such as the contrast between "say" and "did" in this sentence: "The world will little note nor long remember what we say here, but it can never forget what they did here." Note how that sentence combines parallelism, ellipsis, and antithesis.

Anaphora is the repetition of the opening word or group of words at the beginning of a group of lines, clauses, or sentences. Whitman uses anaphora frequently, as in this passage from "Crossing Brooklyn Ferry":

Just as you feel when you look on the river and sky, so I felt;
Just as any of you is one of a living crowd, I was one of a crowd;
Just as you are refresh'd by the gladness of the river and the bright flow, I was refresh'd;
Just as you stand and lean on the rail, yet hurry with the swift current, I stood, yet was hurried;
Just as you look on the numberless masts of ships, and the thick-stem'd pipes of steamboats, I look'd.

Aposiopesis, which literally means "falling silent", is the technique of breaking off suddenly to convey some kind of emotion. Lord Byron uses aposiopesis in Don Juan to comic effect, as in this joke about what students really learn in college:

[. . .] if I had an only son to put
To school (as God be praised that I have none),
'T is not with Donna Inez I would shut
Him up to learn his catechism alone,
No -- no -- I 'd send him out betimes to college,
For there it was I pick'd up my own knowledge.

For there one learns -- 't is not for me to boast,
Though I acquired -- but I pass over that,
As well as all the Greek I since have lost:
I say that there 's the place -- but 'Verbum sat.'
I think I pick'd up too, as well as most,
Knowledge of matters -- but no matter what --
I never married -- but, I think, I know
That sons should not be educated so.

The humor of the latter stanza comes from the narrator's constant aposiopesis and the reader's awareness of the kind of learning to which the narrator refers. There are many other rhetorical figures that speakers and writers use routinely, knowingly or not. To get a sense of their abundance, see Gideon Burton's Silva Rhetoricae.
Polychaete worms have populated the oceans for millions of years. Today they are the focus of study on cryptic species, which shows that apparently identical animals may be entirely different species. Researchers at the University of Gothenburg, Sweden, have now found new worm species in the Kattegat and Skagerrak.

Polychaetes belong to a group of segmented worms that display enormous diversity. It turns out that there may be significantly more of these worms than researchers had imagined. Many of the worm species have been identified morphologically, that is to say, on the basis of their appearance. New molecular techniques show that many worms that have been assumed to belong to the same family are not as closely related as had been thought.

The research scientist Jenny Eklöf at the Department of Zoology works in the rapidly advancing field of research which studies what are known as cryptic species, that is to say, animals that are identical in appearance but genetically entirely different. The focus once more is on polychaetes, where Eklöf and her colleagues show that the Scandinavian species Paranaitis wahlbergi is in fact two separate species. The researchers have named the new species, which has been encountered off Sweden, Norway and Scotland, Paranaitis katoi.

Single species in fact two

The researchers have also found, in the worm group Notophyllum foliosum, that what has been regarded as a single species is in fact two. The two species live in the same geographical area but are found at different depths: below and above the 100-metre limit. The new species is found in deep water and has been given the name Notophyllum crypticum. Eklöf has also found a polychaete not previously encountered in European waters: Axiokebuita, a genus that usually lives in the Antarctic and also in eastern Canada.
For practically their entire adult lives, women hear about menopause and its symptoms as something in the distant future. Surprisingly, what they should know is that the menopause process starts a lot sooner than most people think. The first stage of the menopause process is premenopause, the beginning of women's reproductive lives. In the following sections, women will find detailed information about what premenopause really means for their lives, as well as the causes, symptoms, and possible treatments while in premenopause.

Premenopause is the first of the four stages in the menopause process. It starts when a woman enters the reproductive years, and finishes with the first signs that menopause is getting closer. The beginning of premenopause can be identified with the first menstrual cycle. In contrast, the end is not as clear, as it manifests variably in the late 30s or 40s with the first discomforts of menopause, such as hot flashes, mood swings, etc.

Differences: premenopause and menopause

Confusion might arise with these terms, and having a clear notion of both is vital to get a thorough knowledge of what happens within women's bodies. The difference is as follows:

Premenopause. The first stage of the process, a time in which a woman is fully fertile and menopause symptoms haven't manifested yet.
Menopause. The total cessation of menstrual cycles for 12 months or more.

Bear in mind that not all women go through the menopause process at the same age. Fortunately, tests have been developed to help women identify whether they remain in premenopause or have already moved on to the next stage. For more information on premenopause, click on the following link about premenopause, or keep reading to find out about the causes of premenopause.

Premenopause marks the fertile years, and so covers a large proportion of a woman's life. Because experiences of premenopause can be so variable, there is often some confusion surrounding this time in a woman's life. Premenopause is an often misunderstood time, and many women know that they experience changes during various times of the month but are unaware of the reasons why. This article answers some of the frequently asked questions about premenopause and provides more detail about the role of hormones during the premenopausal period.

Hormones are at the very heart of the causes of premenopause. Natural hormones like estrogen and progesterone begin to fluctuate during premenopause, leading to the symptoms that so many women report.

Hormonal causes. Occurring gradually in harmony with the rhythm of a woman's body, these are natural fluctuations in hormones that accompany the menstrual cycle and can lead to unpleasant symptoms.
External causes. These include prolonged physical or emotional stress, diets rich in refined carbohydrates, and frequent exposure to certain toxins.

Click on the following link to learn more about premenopause causes, or continue reading below to discover the symptoms of premenopause.

Since hormone levels are generally stable during premenopause, the symptoms women experience are not usually as noticeable as the ones from menopause. At most, there could be some upsetting problems during the menstrual cycle. These upsetting problems are also referred to as premenstrual syndrome (PMS). For more information about the upsetting manifestations of premenopause, click on symptoms of premenopause.
If not, continue reading to discover how women can treat the unpleasant signs and symptoms of premenopause.

Premenopause is the first stage of menopause and lasts for a large proportion of a woman's life -- her whole reproductive life -- and some wonder if certain symptoms are normal for this stage. This article discusses premenopause in more detail and the symptoms that are commonly experienced.

The treatments for premenopause vary widely, but each falls into one of the three following categories, divided by intensity: lifestyle changes, alternative medicine, and medications.

Lifestyle changes. These can involve something as simple as eating more carbohydrates or as demanding as sticking to an exercise regimen. When women present any discomfort, experts recommend first taking a look at each woman's habits. Usually, some simple lifestyle changes can have a great impact on overall well-being. A healthy diet, as well as reducing smoking and alcohol intake, could be some of the measures to take.

Alternative medicine. While acupuncture, massage, and aromatherapy are some of the alternative possibilities, herbal supplements lead the alternative remedies. They can be classified in two groups: phytoestrogenic supplements, which add plant-derived estrogen to women's bodies, and hormone-regulating herbal supplements, which stimulate natural hormone production.

Medications. Prescribed medications, like hormone replacement therapy (HRT), are the most popular form of treatment for premenopause in the United States. However, in recent years, discussion of HRT has focused on the risk it poses of producing serious side effects.

Each of these different treatment levels has its own merits and drawbacks. It is often recommended that women first test the waters of the mildest option (lifestyle changes) and gradually move on to more intense treatments if necessary. Please click on the following link to learn more about the different options for premenopause treatments.
A finger fracture is a break in any of the bones in a finger. Each finger consists of three bones called phalanges; the thumb has only two phalanges. A finger fracture is caused by trauma to the finger. A risk factor is something that increases your chance of getting an injury.

The doctor will ask about your symptoms, your physical activity, and how the injury occurred. The injured finger will be examined. The doctor may order x-rays of the finger to determine which bones are broken and the type of fracture.

Treatment will depend on the severity of the injury. The doctor will put the bones back into place. This is usually done without surgery. However, if your fracture is severe, you may need pins, screws, or small plates to hold the bones in place. Each of these will require surgery. Pins may only require minor surgery, performed under local anesthesia.

Your finger will be put in a splint or cast to hold your finger motionless and to protect it. You will need to wear the splint or cast as long as your doctor recommends (usually 3-6 weeks). Your doctor may order x-rays during the healing time to ensure that the bones have not shifted position.

When your doctor decides you are ready, start exercises. This is as important as the surgery performed. In certain situations, you may be referred to a physical therapist to assist you with these exercises. If you are diagnosed with a finger fracture, follow your doctor's instructions.

For more information:
American Academy of Orthopaedic Surgeons: http://www.aaos.org/
American Orthopaedic Society for Sports Medicine: http://www.sportsmed.org/tabs/index.aspx
Canadian Orthopaedic Association: http://www.coa-aco.org/
Canadian Orthopaedic Foundation: http://www.canorth.org/
Geneticists reported Wednesday that they had crossed a threshold long considered off-limits: they have made changes in human DNA that can be passed down from one generation to the next. The researchers at Oregon Health & Science University in Portland say they took the step to try to prevent women from giving birth to babies with genetic diseases. But the research is raising a host of ethical, social and moral questions.

"That kind of genetic engineering has been ruled off-limits," says Marcy Darnovsky of the Center for Genetics and Society. "And it's a very bright line that has been observed by scientists around the world."

There have been lots of reasons for that line. One big one is purely practical, says Dartmouth bioethicist Ronald Green. "If we make mistakes, we'll effectively be introducing a new genetic disease into the human population -- for generation after generation," Green says.

But beyond the risks, Green says, taking that step has long raised more far-reaching fears. It's the kind of technology that could be used to try to create genetically superior humans. "It could easily move into the realm of gene enhancement," Green says. "Higher IQ. Improved physical appearance. Athletic ability. That's a worry to some people -- to many people."

But in this week's issue of the scientific journal Nature, Shoukhrat Mitalipov of Oregon Health & Science University and colleagues report that they have crossed that line. They have figured out a way to change the DNA in a human egg. Mitalipov says his team is trying to prevent some rare but horrible disorders: genetic conditions caused by defects in a certain kind of DNA known as mitochondrial DNA, which only mothers pass down to their kids. "They are caused by mutations in this mitochondrial DNA, which is pretty small -- only encodes 37 genes," Mitalipov says.

So Mitalipov's team figured out a way to pluck these little packets of defective mitochondrial DNA out of eggs and replace them with healthy genes from eggs donated by other women. They fertilized the transplanted eggs in the laboratory and showed they could create healthy embryos. "What we showed is that the faulty genes, which are usually passed through the woman's egg, can be safely replaced. And that way, the egg still retains its capacity to be fertilized by sperm and develop," he says.

The researchers haven't taken the next step yet: they haven't tried to make babies out of these modified embryos. But they have made baby monkeys this way, increasing their confidence it would work. And some other doctors hope so, too. Mary Herbert of Newcastle University is part of a team that has prompted a national debate in England by doing similar research. She also hopes to help women who have gone through the trauma of giving birth to a baby with one of these genetic conditions. "In severe cases, the child will die in the first days of life, or they might live, you know, a few years and then die," Herbert says. "It's like a game of Russian roulette."

But the work raises a long list of questions. One is about creating embryos in the laboratory for research and destroying them, which some consider immoral. Another is about the safety of the women donating the eggs. And, of course, it's far from clear that the resulting babies will be healthy. But even if they are, there are still more questions. One is about the very genetic identity of any babies made this way.
They'd inherit DNA from three separate people instead of the usual two: from the father's sperm, from the egg of the woman whose egg was fixed, and from the egg of the woman who donated some of her DNA to fix the problem. "So yes, we're going to have to, perhaps, get used to the fact that people can have three genetic parents in the future," Dartmouth bioethicist Green says.

But beyond that, the move raises those early fears about manipulating DNA to create a brave new world of genetic haves and have-nots, according to Darnovsky. "Socially, what this would mean is we would be moving toward a world in which some people -- and it would be people who could afford these procedures -- would have either real or perceived genetic advantage," she says.

Despite the concerns, Mitalipov and Herbert say the real benefits of preventing genetic diseases outweigh such hypothetical risks. Herbert is awaiting a decision by the British government on whether she can proceed to the next step in her research. Mitalipov has already asked the Food and Drug Administration if he can try to make a healthy baby by genetically altering human eggs.
A communicator is an opaque object with a number of attributes, together with simple rules that govern its creation, use and destruction. The communicator specifies a communication domain which can be used for point-to-point communications.

An intracommunicator is used for communicating within a single group of processes; we call such communication intra-group communication. An intracommunicator has two fixed attributes: the process group and the topology describing the logical layout of the processes in the group. Process topologies are the subject of a later chapter. Intracommunicators are also used for collective operations within a group of processes.

An intercommunicator is used for point-to-point communication between two disjoint groups of processes. We call such communication inter-group communication. The fixed attributes of an intercommunicator are the two groups. No topology is associated with an intercommunicator.

In addition to fixed attributes, a communicator may also have user-defined attributes which are associated with the communicator using MPI's caching mechanism, as described in a later section. The table below summarizes the differences between intracommunicators and intercommunicators:

    Attribute                 Intracommunicator   Intercommunicator
    Number of groups          1                   2
    Topology                  yes                 no
    Collective operations     yes                 no
    Point-to-point            yes                 yes

Intracommunicator operations and intercommunicator operations are each discussed in their own later sections.
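A minimal usage sketch may make this concrete. The following C program (illustrative only, not drawn from this text) uses the standard MPI calls to create, use, and free a new intracommunicator:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int world_rank, world_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        /* Split the world group into two subgroups; each resulting
           intracommunicator defines its own intra-group
           communication domain. */
        int color = world_rank % 2;
        MPI_Comm half;
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &half);

        int half_rank;
        MPI_Comm_rank(half, &half_rank);
        printf("world rank %d of %d -> rank %d in subgroup %d\n",
               world_rank, world_size, half_rank, color);

        MPI_Comm_free(&half);   /* destruction follows simple rules too */
        MPI_Finalize();
        return 0;
    }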
11-19-2010 (Dayton, OH) - Thanksgiving is a time to give thanks, express gratitude, and enjoy a holiday meal with family and friends. It's also a time when the number of cooking fires is almost three times the daily average. In fact, cooking fires are the number one cause of home fires and injuries in the United States.

Turkey, stuffing, pumpkin pie and all of the trimmings call for a lot of preparation and cooking. But when family, friends, and especially children gather in the kitchen, it's very easy to get distracted and forget about what's on the stove. Unattended cooking is the leading cause of kitchen fires.

Each year, there are approximately 102,408 emergency room visits due to fire- or burn-related injuries among children ages 0-14, and contact with a hot surface or flame causes the greatest number of burns in children.

Safe Kids Greater Dayton and The Children's Medical Center of Dayton offer these safety tips to help you prevent a fire and keep the Thanksgiving holiday a memorable tradition.

Prevent Cooking Fires
- Never leave hot food or appliances unattended while cooking. If you are frying, grilling or broiling food, stay in the kitchen. If you are baking, boiling, or simmering food, check it frequently.
- Always be alert when you are cooking. If you are under the influence of medication or alcohol, avoid using the stove or stovetop.
- Keep anything that can catch on fire at least 3 feet from the stove, toaster oven, or other heat source.
- Keep the stovetop, burners, and oven clean.
- Do not wear loose-fitting clothes when you are cooking, as they may catch fire from the stovetop.

Prevent Burns and Scalds
- To prevent hot food or liquid spills, use the stove's back burner and/or turn pot handles away from the stove's edge.
- Keep appliance cords coiled, away from counter edges and out of children's reach, especially if the appliances contain hot foods or liquids.
- Use oven mitts or potholders when carrying hot food.
- Open hot containers from the microwave slowly and away from your face.
- Never use a wet oven mitt, as it presents a scald danger if the moisture in the mitt is heated.

Keep Your Kids Safe
- Create a 3-foot kid-free zone around the stove. Young children should stay more than 3 feet from any place where there is hot food, drinks, pans or trays.
- Never hold a child while cooking, carrying or drinking hot foods or liquids.
- Keep hot foods and items away from the edge of counters and tables.
- Do not use a tablecloth or placemat if very young children are in the home.
- When children are old enough, teach them to cook safely, always with help from an adult.

For more information, contact:
Marketing Communications Department

We believe there are 18 ways we're just right for our region's kids! Learn more and share your story at justrightforkids.org.
Breastfeeding and Illness

Over the years, far too many women have been wrongly told they had to stop breastfeeding. The decision about continuing breastfeeding when the mother takes a drug, for example, involves far more than whether the baby will get any of it in the milk. It also requires weighing the risks of not breastfeeding - for the mother, the baby and the family, as well as society. And there are plenty of risks in not breastfeeding, so the question essentially boils down to this: does the addition of a small amount of medication to the mother's milk make breastfeeding more hazardous than formula feeding? The answer is almost never. Breastfeeding with a little drug in the milk is almost always safer. In other words, being careful means continuing breastfeeding, not stopping.

The same considerations apply when the mother or the baby is sick. Remember that stopping breastfeeding for a week, or even for days, may result in permanent weaning, as the baby may then not take the breast again. It should also be remembered that some babies refuse to take a bottle completely, so that the advice to stop is not only wrong but often impractical as well. On top of that, it is easy to advise the mother to pump her milk while the baby is not breastfeeding, but this is not always easy in practice and the mother may end up painfully engorged.

Illness in the Mother

Very few maternal illnesses require the mother to stop breastfeeding. This is particularly true of infections, and infections are the most common type of illness for which mothers are told they must stop. Viruses cause most infections, and most viral infections are most contagious before the mother even has an idea she is sick. By the time the mother has a fever (or runny nose, diarrhoea, cough, rash, vomiting, etc.), she has probably already passed on the infection to the baby. However, breastfeeding protects the baby against infection, and the mother should continue breastfeeding in order to protect the baby. If the baby does get sick, which is possible, he is likely to get less sick than if breastfeeding had stopped. But often mothers are pleasantly surprised that their babies do not get sick at all: the baby was protected by the mother's continuing breastfeeding. Bacterial infections (such as "strep throat") are also not of concern, for the very same reasons.

See the previous information sheet, Breastfeeding and Medications, with regard to continuing breastfeeding while taking medication.

HIV (new recommendations)

WHO now recommends that all mothers, regardless of their HIV status, practice exclusive breastfeeding - which means no other liquids or food are given - in the first six months. It is recommended that both HIV-positive mothers and their infants take antiretroviral drugs throughout the period of breastfeeding and until the infant is 12 months old. This means that the child can benefit from breastfeeding with very little risk of becoming infected with HIV. With the provision of antiretroviral drugs, breastfeeding is made dramatically safer and the "balance of risks" between breastfeeding and replacement feeding is fundamentally changed. A major additional benefit of this recommendation is that the health of a greater proportion of HIV-infected mothers is also protected.

Antibodies in the Milk

Some mothers have what are called "autoimmune diseases", such as idiopathic thrombocytopenic purpura, autoimmune thyroid disease, autoimmune hemolytic anemia and many others. These illnesses are characterized by antibodies produced by the mother against her own tissues. Some mothers have been told that because antibodies get into the milk, they should not breastfeed, as they will cause illness in the baby. This is incredible nonsense. The mother should breastfeed. The vast majority of the antibodies in the milk are of the type called secretory IgA. Autoimmune diseases are not caused by secretory IgA, and even if they were, the baby does not absorb secretory IgA. There is no issue. Continue breastfeeding.

- Mastitis (breast infection) is not a reason to stop breastfeeding. In fact, the breast is likely to heal more rapidly if the mother continues breastfeeding on the affected side. (See the information sheet Blocked Ducts and Mastitis.)

- Breast abscess is not a reason to stop breastfeeding. Make sure the surgeon does not make an incision that follows the line of the areola (the line between the dark part of the breast and the lighter part); such an incision may decrease the milk supply considerably. An incision that resembles a spoke on a bicycle wheel (the nipple being the centre of the wheel) is less damaging to milk-making tissue. These days a breast abscess does not usually require surgery at all; repeated needle aspiration, or placement of a catheter to drain the abscess, often makes surgery unnecessary.

- Surgery does not require stopping breastfeeding. Is the surgery truly necessary now, while you are breastfeeding? Are you sure that other treatment approaches are not possible? Does that lump have to be taken out now, not a year from now? Could a needle biopsy be enough? If you do need the surgery now, again make sure the surgeon does not make an incision that follows the line of the areola. You can continue breastfeeding immediately after the surgery is over, as soon as you are awake and up to it. If, for some reason, you do have to stop on the affected side, do not stop on the other; some surgeons do not know that you can dry up on one side only. You do not have to stop breastfeeding because you are having a general anaesthetic. You can breastfeed as soon as you are awake and up to it.

- Mammograms are more difficult to read if the mother is breastfeeding, but they can still be useful. Once again, how long must a mother wait for her breast to no longer be considered lactating? Evaluation of a lump that requires more than history and physical examination can be done by other means besides a mammogram (for example, ultrasound or needle biopsy). Discuss the options with your doctor, and let him or her know that breastfeeding is important to you.

Pregnancy

There is no reason that you cannot continue breastfeeding if you become pregnant. There is no evidence that breastfeeding while pregnant does any harm to you, to the baby in your womb, or to the one who is nursing. If you wish to stop, though, do so slowly; pregnancy is associated with a decreased milk supply, and the baby may stop on his own.

Illness in the Baby

Breastfeeding rarely needs to be discontinued for infant illness. Through breastfeeding, the mother is able to comfort the sick child, and, by breastfeeding, the child is able to comfort the mother.

- Diarrhoea and vomiting. Intestinal infections are rare in exclusively breastfed babies (though loose bowel movements are very common and normal in exclusively breastfed babies). The best treatment for this condition is to continue breastfeeding: the baby will get better more quickly while breastfeeding. In the vast majority of situations the baby will do well with breastfeeding alone and will not require additional fluids, such as so-called oral electrolyte solutions, except in extraordinary cases.

- Respiratory illness. There is a medical myth that milk should not be given to children with respiratory infections. Whether or not this is true for milk, it is definitely not true for breastmilk.

- Jaundice. Exclusively breastfed babies are commonly jaundiced, even to 3 months of age, though usually the yellow colour of the skin is barely noticeable. Rather than being a problem, this is normal. (There are causes of jaundice that are not normal, but these, except in very rare cases, do not require stopping breastfeeding.) If breastfeeding is going well, jaundice does not require the mother to stop breastfeeding. If breastfeeding is not going well, fixing the breastfeeding will fix the problem, whereas stopping breastfeeding even for a short time may completely undo the breastfeeding. Stopping breastfeeding is not an answer, not a solution, not a good idea. (See the information sheet Breastfeeding and Jaundice.)

A sick baby does not need breastfeeding less, he needs it more!!

If the question you have is not discussed above, do not assume that you must stop breastfeeding. Do not stop. Get more information. Mothers have been told they must stop breastfeeding for reasons too silly to discuss.

Questions? First look at the websites nbci.ca or drjacknewman.com. If the information you need is not there, go to Contact Us and give us the information listed there in your email. Information is also available in Dr. Jack Newman's Guide to Breastfeeding (called The Ultimate Breastfeeding Book of Answers in the USA); our DVD, Dr. Jack Newman's Visual Guide to Breastfeeding (available in French or with subtitles in Spanish, Portuguese and Italian); The Latch Book and Other Keys to Breastfeeding Success; the L-eat Latch and Transfer Tool; and the GamePlan for Protecting and Supporting Breastfeeding in the First 24 Hours of Life and Beyond.

To make an appointment online with our clinic, please visit www.nbci.ca. If you do not have easy access to email or the internet, you may phone (416) 498-0002.

Breastfeeding and Illness (You Should Continue Breastfeeding (2)) 2009©
Written and revised by Jack Newman, MD, FRCPC, 2014©
Revised by Edith Kernerman, IBCLC, 2009©
There are two key questions in treatment planning for a patient with a fractured ankle:
- Is it displaced?
- If not, is it stable?

Is it displaced?

"Displacement" refers to the position of the talus in the mortise, not the position of the malleolar fragments. An intact deep deltoid ligament will keep the talus safe in the mortise.

What counts as displacement? Displacement refers to the position of the talus in the mortise, NOT minor displacements of the fibula. The discussion of the basic science of ankle stability showed why fibular displacement in itself is of little significance:
- an intact deep deltoid ligament will keep the talus in the mortise in the face of fibular displacement
- apparent fibular displacements do not normally represent mortise incongruity

Must the talus be exactly congruent? Several older papers recommended no more than 4mm of medial clear space (MCS), or no more than 2mm greater than the superior clear space. Recent studies suggest that a 5mm MCS is compatible with an intact deep deltoid ligament. None of these measurements were made on standing films, which may be a more accurate assessment of congruity.

Is it stable?

Stability can be defined in different ways. Ideally, a stable fracture would be one that can be treated without splintage, with early rehabilitation and with minimal inconvenience to the patient. In some literature "stable" seems to have the less ambitious meaning of "does not need internal fixation", although this might be more a statement about orthopaedic culture than about fracture biology! Often stability has been defined in terms of the results of tests - and this can, obviously, be useful - but care needs to be taken that the test result can be shown to make a difference to management. The work on stress testing is open to this critique.

Until recently, most studies have proceeded as though "stable fractures" or "supination-external rotation stage 2 fractures" are self-apparent, or can be defined in terms of:
- an undisplaced, isolated, lateral malleolus fracture (Fox (2005) explicitly discounted fractures above the syndesmosis)
- no medial tenderness, bruising or swelling (Fox also required a low-energy fracture with an intact soft tissue envelope)

In 2004 McConnell and then Egol reported the results of stress radiography of undisplaced lateral malleolar fractures, with markedly different results. Both used an MCS >4mm as a positive stress test. 63% of McConnell's patients had a negative stress test, compared with only 35% of Egol's. Both series reported that physical signs were only moderately predictive of the result of the stress test. In Egol's series, 30% had a positive stress test but no medial symptoms or signs. Twenty of these patients were treated in ankle braces and two displaced, although only one was symptomatic. Patients in McConnell's series with negative tests were treated in ankle braces and all united uneventfully.

The above studies used a manually applied external rotation stress test. An alternative technique is the gravity stress test, originally described by Michelson: the patient lies in the lateral position and the ankle is allowed to hang down freely over a support. Schock et al (2006) reported that this was less uncomfortable and possibly more sensitive than the manual stress test. 50% had an abnormal stress test, although no additional imaging or clinical results were reported.

Weber (2010) reported 56 patients with undisplaced lateral malleolar fractures. Medial symptoms and signs were not reported. The patients were initially splinted with partial weight bearing, and had unprotected standing mortise/lateral X-rays at 3-10 days post fracture. 5/56 (9%) displaced and were fixed. The others had a variety of splintage. There was no late subluxation, two delayed unions, and two patients developed chronic pain syndromes. The mean AOFAS score was 96.1/100. Weber commented that "...relevant instability is largely overestimated by manual or gravity stress radiographs."

Imaging the deep deltoid

Egol and Koval have moved on to investigate the significance of positive stress radiographs with MR imaging of the deep deltoid ligament. Their most recent report found that 91% of patients with positive stress radiographs had only partial deltoid tears and were successfully treated in a walking boot. Overall, this would suggest an incidence of complete tears of 6% (0.09 x 0.65). At the same meeting (OTA 2006), Zeni reported ultrasonography of the deep deltoid ligament in patients with undisplaced ankle fractures. Fourteen patients had normal ligaments, 8 had partial tears and 5 (18.5%) had complete tears. Physical signs were moderately predictive of any tear, but the absence of medial signs was 100% predictive of the absence of a complete tear.

Fox et al (2005) took a different, more pragmatic, approach. Stable fractures were diagnosed on the basis of medial symptoms and signs. 63% of undisplaced fractures fell into this category, and were successfully treated in ankle braces. The remainder, mainly on the basis of medial signs but a few on fracture morphology, were viewed as potentially unstable and were treated in below-knee walking (BKW) casts. Two fractures (1% of undisplaced fractures, 2% of potentially unstable fractures) displaced.

Akhtar (2009) described the potentially unstable fractures in more detail and with larger numbers. Over 10 years, 153 potentially unstable fractures were treated in BKW casts, with a standing X-ray at one week. Unlike Weber's series, the one-week weight-bearing image was taken in cast. 3/153 (2%) fractures displaced, all in the first week. This supports the view that even potentially unstable fractures are usually, in practice, stable.

This issue has been discussed in some detail because:
- it is crucial to modernising the management of ankle fractures
- it is once again an area of active study where conclusions may have to be modified further

Currently, our assessment of the evidence on displacement and stability is:
- displacement is to be judged on talar position in the mortise, not fragment position
- either 4mm or 5mm of medial clear space would be justifiable as a starting point to define displacement, but 5mm is attracting increasing supporting evidence
- medial signs are probably adequate to identify important deep deltoid injuries
- stress radiography probably overestimates the prevalence of important deep deltoid injuries
- the prevalence of important deep deltoid injuries seems to lie between 1% and 20% of undisplaced fractures; as the highest estimate came from much the smallest study, the true value is probably in the lower part of this range
- if stress radiography is felt necessary, the gravity stress test may be more acceptable to patients than manual testing
Pirates changed their ships as easily as they changed their captains. In the early days of the Age of Piracy, vessel changes were numerous and rapid. The buccaneers would begin with a canoe or rowing-boat and use it to capture a larger boat; using this, they captured a small ship, and with the small ship they captured a bigger ship. Each capture provided them with a larger vessel. With few exceptions, pirate ships were not built as such, but were converted merchant ships. Experienced sea-captains could spot them by their raised gunwales, which gave their crews better protection in battle. Often, many of the crew had to sleep on deck in all weathers, for the ships were always overcrowded.
New German ROKVISS robot experiment on the ISS
20 December 2004

Testing intelligent hinged robot units for future manned and unmanned missions

The aim of the German ROKVISS (RObotic Components Verification on the ISS) experiment is to test highly integrated, modular robot components under the conditions experienced in space. The experiment has been developed by the German Aerospace Center (DLR) at its Oberpfaffenhofen-based Institute of Robotics and Mechatronics. At the same time, the experiment serves to demonstrate various new control procedures in both automatic and so-called telepresence mode. The experimental flight unit will be installed on the International Space Station (ISS) in January 2005, following its launch on a Russian Progress spacecraft in December 2004. The experiment will run on the station for about one year.

The intelligent lightweight hinged robot units of the ROKVISS experiment

The components used will form the basis for new lightweight robot elements which, it is hoped, will be used on future manned and unmanned space missions. At its heart, the system consists of a robot arm with two hinges, a 'metal finger' at the end of the arm, a stereo video camera and a mono camera. These elements are fitted on a basic platform on the outside of the Russian Service Module (SM) of the ISS. The base plate also accommodates electronic boxes for power distribution and image processing, and a special experimental contour that is used for dynamic motion experiments on the robot and for tests on the hinge parameter settings. The robot's hinges and the cameras are controlled by a central experiment computer inside the ISS.

Control from Earth in telepresence mode

When in non-automatic mode, the so-called telepresence mode, the experiment is run by an operator on the ground. This is only possible when the ISS is passing through the range of transmission and vision of the ground station in Weilheim, south-west of Munich. When in this range, a high-rate S-band link belonging to ROKVISS ensures communication between the station and Earth and vice versa. The images taken by the stereo camera on board the ISS are transferred to the operator's screen, while the forces acting on the robot and its hinge positions are transferred at the same time. The forces can be sensed on the joystick used by the operator. The mono camera, secured to the head of the robot arm, can check the condition of the ISS at close range and take pictures of Earth.

Telepresence mode is only available during the phase of direct radio contact between Earth and the space station. The forces measured by the sensors in the hinges are transferred directly to the operator's joystick. This mode requires an exacting up- and down-link for the robot and video data: the maximum time delay for all data transfers must not exceed roughly 10 milliseconds. Automatic mode is used during the phases in which there is no radio link from the station to Earth. During this time, the experiments are controlled by the experiment computer on board the ISS and the experiment data are saved for subsequent analysis.

ROKVISS – German aerospace skills

The project is being financed by DLR with funding from the German Ministry of Education and Research (BMBF). The hardware and software have been developed and built by EADS Space Transportation in Bremen, acting as the main contractor, and by the DLR Institute of Robotics and Mechatronics in Oberpfaffenhofen, which is responsible for the robot components, running the experiments and scientific evaluation of the results. Kayser-Threde of Munich is responsible for the development and construction of the experiment's computer and power supply, as well as providing the DLR institute with technical support. Hoerner & Sulger are supplying the camera equipment and electronic accessories. Project management is being handled by DLR's space agency. Implementation of this mission is based on an agreement between DLR's space agency, the Russian partner Roskosmos, RKK Energia and Munich-based Kayser-Threde, which is also acting as the main contractor for the S-band communication infrastructure.

ROKVISS – the ambitious space experiment

For reasons associated with cost and safety, robot applications in space require the use of lightweight elements which must offer a high degree of mobility and interactivity. The ratio of the load the robot has to move to its own weight should, wherever possible, be 1:1. The new robot concept developed by DLR is designed as a modular system comprising the basic elements, the intelligent hinge drives and the electronics. The experiment aims to investigate and test highly integrated lightweight robot elements under the real conditions of space, along with new control procedures for automatic operation and online control by an operator. System development is based on robot components already used on Earth.

ROKVISS – the complex and complete system

The ROKVISS system consists of several main components:

External flight unit
The basic platform is fitted to the outside of the ISS, on the Russian Service Module (SM), at what is known as the universal workstation. It was installed by cosmonauts working outside the station. The robot arm, with its two highly integrated hinges, is fitted on the platform, and the camera systems are mounted at the end of the arm. The stereo camera transfers images of how the experiment is proceeding, while the mono camera is used for observing at a distance. Other electronic units, such as power distribution and image processing, are also located on the platform. These are joined by the S-band communication unit with antenna, and the telemetry and telecommand box as the data transmitting and receiving station for controlling the experiment. Kayser-Threde is the contractor responsible for this unit. The contour fitted on the platform is used to run the experiments, i.e. to trace predefined paths and perform contact operations in automatic and telepresence mode.

Internal flight unit
The ROKVISS onboard computer controls all the external flight unit's elements and therefore all the operations undertaken by the robot. It is also the link to the Russian control system and to the ROKVISS S-band communication unit.

Ground segment
The flight system is controlled and monitored by the transmitting and receiving station of DLR's Space Operations Centre in Weilheim, southern Germany. The ROKVISS ground control unit is also situated here. It consists of a joystick with force feedback, the computers for path generation and image processing, and a 3D virtual-reality-based image projection unit.

ROKVISS – high levels of flexibility provided by the different operating modes

To repair satellites and perform other servicing tasks in space, it is essential that the operator can be included in the control loop, and a fast data link is key to this. While there is a direct space-to-Earth data link, the robot experiment can be controlled directly from Earth. This mode requires what is known as a deterministic, very fast up- and down-link between the ISS and the Earth-based control centre in order to transfer control and video signals. The signal time (physical runtime plus the time required to process the data) on this link must be kept to a minimum. In other words, operations are in real time. In this mode, there is direct force feedback between the station and Earth. Combined with the stereo video images transferred from the station, this gives the operator a real impression of the processes taking place on the robot system on the ISS.

When in automatic mode, by contrast, the robot is controlled by the experiment computer. The data for automatic mode are saved on board and transferred to the ground station for analysis once the experiment is complete. This is done during the next phase of radio contact following the experiment. (A simplified sketch of this mode-selection logic appears at the end of this article.)

Prospects – what is the next step for space robotics after ROKVISS?

The main aim of the development project and experiment is to produce a blueprint for a new complex robot for use with service robots in space. An initial application for this is the TECSAS (Technology Satellite for the demonstration and verification of Space systems) mission. Following completion of the feasibility investigation, the definition phase of this project began in October 2004. The project is being run in conjunction with the Russian Space Agency, the Russian company Babakin Space Research Centre in Moscow, German companies and institutes, and the Canadian Space Agency (CSA).
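The following C sketch is purely illustrative - it is not DLR flight code, and every function in it is a made-up stub - but it captures the mode-selection logic described above: run in telepresence mode while a ground link with a sufficiently small delay is available, otherwise fall back to automatic mode and log data for the next contact window.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_DELAY_MS 10.0   /* round-trip budget cited for ROKVISS */

    /* Stubbed-out interfaces so the sketch compiles and runs. */
    static int t = 0;
    static bool   link_up(void)          { return (t / 5) % 2 == 0; }
    static double link_delay_ms(void)    { return 6.0 + (t % 3); }
    static double joint_force(void)      { return 0.1 * t; }
    static double joystick_command(void) { return 0.01 * t; }
    static double scripted_command(void) { return 0.02 * t; }
    static void   drive_hinge(double q)    { printf("cmd %.3f  ", q); }
    static void   force_feedback(double f) { printf("fb %.3f\n", f); }
    static void   log_sample(double q, double f) { printf("log %.3f %.3f\n", q, f); }

    static void control_step(void)
    {
        if (link_up() && link_delay_ms() <= MAX_DELAY_MS) {
            /* Telepresence: the ground operator closes the loop, and the
               torques measured in the hinges are reflected back to the
               force-feedback joystick. */
            drive_hinge(joystick_command());
            force_feedback(joint_force());
        } else {
            /* Automatic: the on-board experiment computer replays a
               stored trajectory and saves the data for downlink during
               the next contact window. */
            double q = scripted_command();
            drive_hinge(q);
            log_sample(q, joint_force());
        }
    }

    int main(void)
    {
        for (t = 0; t < 10; t++)   /* simulate ten control cycles */
            control_step();
        return 0;
    }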
At least 53 caves occur in Maryland. Most of these are located in western Maryland, in Washington, Allegany, Garrett and Frederick Counties. There are no known caves on the Eastern Shore or in southern Maryland. A variety of spiders, insects, amphibians, reptiles, fish and mammals make caves their seasonal or permanent homes. In the past, aboriginal human populations used caves for shelter and religious ceremonies. More recently, some of these caves have been mined as sources of metal ores and of nitrates for gunpowder. Maryland's caves have a rich natural and cultural history that requires careful preservation and study.
Ever since the re-establishment of the Commission on Poverty, drawing an official poverty line for Hong Kong has been a high priority on the government's to-do list. On Saturday, the administration is expected to announce details of the new poverty line, long rumoured to be set at half the median household income.

The rationale for benchmarking poverty against median household income is rooted in the idea of relative poverty. Unlike more traditional notions of poverty, which are based on absolute needs (for example, food, shelter and clothing), relative poverty involves pegging one's standard of living - for better or worse - to the fortunes of the middle class. The idea is that, as society advances, the poverty threshold should increase in real terms.

The problem with a relative poverty measure is that, when it is used for the purpose of policy evaluation, it is always necessary to be mindful that the policy itself could alter the benchmark. This issue does not arise with an absolute poverty line, because the basic needs of a family cannot be amended by government policy (though society's concept of what these "basic needs" are may change from time to time). The government can, however, manipulate the median income. For instance, if it were to tax the middle class and transfer all the proceeds to a few tycoon families, this would reduce relative poverty in Hong Kong - not because of any change in actual poverty, but simply because the benchmark would be lowered.

Unfortunately, while this example is quite fanciful, any policy that affects the median family income in Hong Kong could count as a poverty alleviation measure under the proposed metric. For instance, Hong Kong provides universal access to subsidised education and health care. If such access were to be means-tested - for instance, by requiring higher-income families to pay more in fees - relative poverty would be reduced. This is because, under the commission's guidelines, means-testing these benefits would cause them to be added to household income. Depending on who gets the benefits, the median family income could remain the same or rise, raising the poverty threshold. But, in either case, the income of lower-income households would increase by a margin large enough to reduce measured poverty.

This problem is inherent in the relative approach to poverty measurement. Anything that affects the median family income in Hong Kong will cause the number of people in "relative poverty" to change. As a consequence, every time the relative poverty measure goes up (or down), one has to ask whether it is because of changes among lower-income households or changes to the benchmark.

Consequently, we should not give up on measuring absolute poverty. Doing so would not only reduce the "noise" inherent in the relative approach, but also suggest concrete measures to directly address poverty. What are basic needs in Hong Kong's society? Nutrition is one, of course, but benchmarks are also needed for other items, such as clothing, transport, communications and adequate living space. In particular, public housing - which is provided to about 30 per cent of the population - is a substantial component of the well-being of lower-income households and needs to be taken into account when measuring the extent and intensity of local poverty. In setting out an official definition of poverty, we should strive to lay a solid foundation for poverty alleviation efforts.
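To make the half-median mechanics concrete, here is a small illustrative C program (the incomes are invented, and this is not the commission's actual methodology). It computes the median household income, sets the poverty line at half the median, and counts the households below it:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    static double median(double *v, size_t n)
    {
        qsort(v, n, sizeof *v, cmp);
        return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
    }

    int main(void)
    {
        /* Hypothetical monthly household incomes (HK$). */
        double income[] = { 8000, 12000, 15000, 20000, 25000,
                            30000, 40000, 60000, 90000, 250000 };
        size_t n = sizeof income / sizeof income[0], poor = 0;

        double line = 0.5 * median(income, n);  /* half the median */

        for (size_t i = 0; i < n; i++)
            if (income[i] < line)
                poor++;

        printf("median %.0f, poverty line %.0f, poor households %zu of %zu\n",
               median(income, n), line, poor, n);
        return 0;
    }

Re-running the program after lowering the middle incomes lowers the line itself, and the measured headcount can fall without any poor household being a dollar better off - precisely the benchmark problem described above.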
Since the benchmark of half the median income is taken from Hong Kong's income distribution, there will, based on our historical income distribution, always be around 16-20 per cent of the population below this benchmark. It has been suggested that we should aim to cut the number of households below the threshold to a single-digit percentage, which is unrealistic and, in the absence of an absolute poverty measure, potentially not even related to whether these households are actually poor.

The more interesting question is to understand the social mobility of people who are below the threshold, and to find out how they manage to move up and stay above it. This information will be useful in designing policies to mitigate the poverty problem and to respond to changes in Hong Kong's economic environment. We are carrying out a study to provide this information. Also, if we know where these households are located, we can devise more focused community-based intervention programmes.

The government seems sincere in its efforts to address Hong Kong's poverty problem, but a partnership approach - involving the business sector, individual workers, government commitment and suitable policies - is needed to make it work. We must take collective responsibility for responding to the challenge, meet the needs of the vulnerable, and make Hong Kong a place for all rather than a haven for the privileged few.

James P. Vere is an associate professor in the School of Economics and Finance, and Paul Yip is a professor in the Department of Social Work and Social Administration, at the University of Hong Kong.
Astronomy Enters a New Era

Join us for a live webcast about thrilling new tools that will come online in the next decade.

Posted by Mat Kaplan, 26-05-2013 20:45 CDT

Topics: explaining technology, events and announcements, podcasts and videos, interview, extrasolar planets, stars and galaxies, optical telescopes, radio telescopes, astronomy and astrophysics spacecraft, Hubble Space Telescope, James Webb Space Telescope, astronomy

Keck. Hubble. VLA. Kepler. They may not quite be household words, but those of us who follow astronomy know and love them. They and their sister instruments around the world (as well as above it) have opened awe-inspiring new vistas of our universe. They have also pioneered new technologies that are now ready for even greater accomplishments.

Yet another generation of telescopes is on the horizon. They will expand our reach by orders of magnitude, revealing still more secrets of the cosmos, possibly including some that we don't begin to suspect. I hope you'll join me when I talk with men and women who are helping to inaugurate this new age. Each is on the team for a great observatory that is now taking shape:

Richard Ellis, Ph.D.: Steele Professor of Astronomy at the California Institute of Technology; astronomer and member of the Thirty Meter Telescope Board.

Fiona Harrison, Ph.D.: Principal Investigator for NuSTAR, NASA's Nuclear Spectroscopic Telescope Array; Benjamin M. Rosen Professor of Physics and Astronomy at the California Institute of Technology.

Juna Kollmeier, Ph.D.: Astronomer at the Observatories of the Carnegie Institution of Washington in Pasadena, working on the Giant Magellan Telescope.

Kartik Sheth, Ph.D.: Associate Astronomer at the National Radio Astronomy Observatory, former ALMA Commissioning & Science Verification Liaison. ALMA is the Atacama Large Millimeter/submillimeter Array.

The conversation will be part of the NEXT Science|People|Tomorrow series I host for Southern California Public Radio/KPCC. Seats in the Crawford Family Forum at SCPR's Pasadena, California headquarters are free, but must be reserved at http://www.scpr.org/events/2013/05/30/next-eyes-on-the-universe/. You can still join us even if you won't be in Southern California: watch the live webcast that will be available on the same webpage. We begin at 7:00pm Pacific Daylight Time on Thursday, May 30th. Clear skies.
Definition of Scrikes

1. scrike [v] - See also: scrike

Literary usage of Scrikes

Below you will find example usage of this term as found in modern and/or classical literature:

1. Popular Lectures on Science and Art: Delivered in the Principal Cities and Towns of the United States by Dionysius Lardner (1849): "... while another part is reflected, and scrikes on other bodies, where it is subject to like effects. The body which radiates heat in this manner is, ..."

2. A Glossary of Words Used in the Wapentakes of Manley and Corringham by Edward Peacock (1889): "I fear lest this fellow should perceiue her to be in labour, if he should often hear her scrikes. ..."

3. Science for the School and Family by Worthington Hooker (1863): "But it does partake of the earth's motion, and goes eastward as fast as the height does, and so de- scrikes *ne curved line of a projectile. ..."
Test your child's spelling with our free online spelling tests. Learn to spell online! Take a trial test to see how it works.

Want to get started? Learn to spell from one of our existing spelling lists, or create your own using words from your child's own class. Currently there are 57 spelling lists with over 4,000 words - new words and lists are continually being added.

As your child learns to spell, we learn the words your child knows and the words they are still learning. We remember the words they struggle with so that we can continually challenge their development.

High-quality audio is essential for an online spelling test. We use the highest-quality text-to-speech software available. Click below to hear an example of the audio.
Musculoskeletal Manifestations of Sarcoidosis

Posted 8/30/04 at 8:49:37 AM.

Sarcoidosis is an inflammatory disorder characterized by the formation of noncaseating granulomas in tissues without other known cause for granulomatous disease. The disease has a diversity of clinical manifestations, most commonly affecting the lungs, skin, lymph nodes, and eyes, but it can involve any organ system, including the musculoskeletal system.

The first case of sarcoid was described by Jonathan Hutchinson over one hundred years ago at King's College Hospital in London. The first person to recognize bone involvement in sarcoidosis was Karl Kreibich in 1904. He found multiple radiolucencies, particularly in the distal end of the second phalanges, on the radiograph of a patient with sarcoid. After examining 60 histologic samples and not identifying any tubercle bacilli, he concluded that lupus pernio, the hallmark of chronic sarcoidosis and associated bone lesions, was a distinct granulomatous process unrelated to tuberculosis.

The prevalence of sarcoid is estimated to be 10 to 20 per 100,000 people. Incidence varies among geographic regions as well as with ethnicity; blacks have a three- to fourfold increased risk compared to other ethnicities. In the United States, African Americans have a tenfold increased risk for sarcoid compared to Caucasians, while Asians are rarely affected. Clinical manifestations depend on ethnicity, chronicity of illness, site and extent of tissue involvement, and activity of the granulomatous process. African Americans are more likely to present acutely and to have more severe disease than Caucasians, who tend to present with asymptomatic and chronic disease.

The disease presents between 20 and 40 years of age in 70 to 90 percent of patients. About one half of cases are diagnosed incidentally in asymptomatic patients by an abnormality on a routine chest radiograph. The most common organ system affected by sarcoid is the lung, with the most common presenting symptoms being cough, dyspnea, and chest pain. Patients may also suffer from fatigue, weight loss, weakness, malaise, fever, and ocular disease. Musculoskeletal involvement usually occurs in patients with generalized disease. Affected patients may present with an acute polyarthritis (especially of the ankle joints), usually occurring in association with erythema nodosum and occasionally with acute uveitis. Involvement of muscle and bone is less common and usually indicates a chronic and prolonged clinical course.

The diagnosis of sarcoid is based on compatible clinical and radiographic findings, supportive laboratory data (elevated ACE levels, anergy, elevated serum gamma globulins, positive Kveim test), and evidence of noncaseating granulomas in the absence of other causes for such lesions. The exact etiology and pathogenesis of sarcoid remain unknown. Several hypotheses exist regarding the involvement of bone in this disease, including:
- high levels of 1,25(OH)2D3 causing stimulation of osteoclastic activity and bone resorption
- granuloma induction of a local osteoclastic reaction
- granuloma production of an osteoclast-activating factor inducing bone resorption

Joint symptoms and signs occur in 10 to 35 percent of patients with sarcoidosis and occur more frequently in women than in men. Articular disease in sarcoid can be divided into two types: acute and chronic polyarthritis. The acute pattern is seen in the first six months of symptoms and has a self-limiting course, typically resolving in 4 to 6 weeks. The knees, ankles, elbows, PIP joints, and wrists are the most commonly affected joints. The arthralgia is thought to be due to the effect of inflammatory cytokines on the joints rather than to direct granulomatous changes. Monoarthritis and effusion are uncommon. Conventional radiographs of symptomatic joints are usually unremarkable or show only osteoporosis and soft-tissue swelling. Sonographic findings include joint effusions, tenosynovitis, and subcutaneous inflammation. Patients may have elevated ESR and C-reactive protein. When the acute polyarthritis manifests as periarticular ankle inflammation in combination with erythema nodosum and mediastinal lymphadenopathy, the term Lofgren syndrome is applied.

Six months or more after the diagnosis of sarcoid, up to 40% of patients may develop joint symptoms due to granulomatous arthritis. The granulomatous synovitis usually follows a chronic transient or relapsing course which may eventually lead to irreversible joint damage. Chronic polyarthritis is more common in women. Involved joints include the knees, ankles, PIP joints, and occasionally the wrists or shoulders. Dactylitis of the fingers may also be seen. Unlike the acute form of polyarthritis, which is often seen with erythema nodosum, this arthritis is commonly associated with cutaneous sarcoid. Radiographic findings related to joint disease are unusual unless there is extension of osseous disease to subchondral bone. Mild joint space narrowing and erosions can be seen but are nonspecific findings.

Acute or chronic sarcoid arthropathy is a clinical diagnosis, and MR imaging is usually not sought. However, MR imaging may be helpful for lesions that are not detected by conventional radiography. Tenosynovitis, tendonitis, bursitis, and synovitis can be seen on MR but are nonspecific findings and may require biopsy for diagnosis of granulomatous involvement.

Sarcoid myopathy may demonstrate either myopathic or nodular involvement. Discrete sarcoidal muscle lesions are reported in 1.4% of known sarcoidosis cases, although muscle biopsies demonstrate skeletal muscle granulomas in 50-80% of sarcoidosis patients, usually asymptomatic ones. Nodular sarcoid myopathy has a characteristic MR appearance that allows accurate diagnosis. The sarcoid nodules appear as focal intramuscular masses, usually at the musculotendinous junction; they are often multiple and bilateral in distribution and have a lower-extremity predominance. Their appearance is described as a "dark star," with central areas of fibrosis that have low signal intensity on all sequences and peripheral areas of bright signal intensity on T2WI and enhancement on post-contrast images (Moore et al.).

In myopathic sarcoid myopathy, there are nonspecific findings of symmetric proximal muscle atrophy with fatty replacement and increased signal intensity of involved muscle on T2WI. Corticosteroid therapy may be a confounding factor in evaluating potential sarcoid muscle atrophy, with the differentiation based on clinical findings. MRI may be helpful in delineating the extent of fatty replacement and indicating an optimal location for muscle biopsy.

Reports of bone involvement in sarcoid have ranged between 1 and 13 percent, with an average of 5 percent. An accurate percentage is difficult to obtain, as many skeletal lesions are asymptomatic, screening skeletal surveys are not routinely performed, and minor cystic bone changes can be seen in normal individuals. The general pattern of osseous lesions is summarized below:

Distribution: often bilateral
Site of origin: cortical, with preservation of periosteum
Location: most commonly the hands and feet, although the long bones, skull, vertebrae, pelvis, ribs, sternum, and calcaneus can rarely be affected
Position: usually at the ends of affected bones
Shape: cystic or lacelike with minimal involvement of adjacent soft tissues, or extensive bone erosion with pathologic fracture

Nuclear scintigraphic findings are usually positive before lesions can be seen on radiography. The radiographic manifestations of sarcoidosis vary with the region of the skeleton affected.

Small bone sarcoidosis

Lesions affecting the small bones of the hands and feet typically have a lytic or lacy reticular appearance on conventional radiographs. Lytic lesions are either minute cortical defects in the phalangeal heads or larger, rounded, punched-out lesions involving the cortex and medulla. The middle and distal phalanges are the most frequently involved. The lytic lesions likely represent an osteoporotic process producing local and destructive tunneling. The lacy reticular pattern is seen when the tunneling of the cortex is followed by remodeling of the cortical and trabecular architecture; the concave shape of the phalangeal shafts then becomes more tubular. There is often accompanying soft tissue swelling. More localized lytic lesions are also seen, forming cystic defects that may become surrounded by a thin rim of sclerosis as they heal.

[Image: The lacy reticular pattern of bone loss in the right hand of a patient with osseous sarcoid, particularly in the middle phalanx of the third digit and the proximal phalanx of the fifth digit. Note the preservation of joint spaces.]
[Image: The same lacy pattern in the left hand of the same patient. Note the subcortical tunneling in the middle phalanx of the fifth digit.]
[Image: A similar pattern of bone loss in a radiograph of the wrist.]

In an advanced sclerotic phase, a sequestrum may be seen. Fractures are rare but may occur with extensive lytic disease. Alignment deformities may result from pathologic fractures with bone collapse rather than from actual joint abnormalities.

[Image: Extensive lytic lesions and subcortical tunneling, with acro-osteolysis of the distal phalanges of the third and fourth digits.]

Lesions occult to plain radiography may be seen with MR imaging, which may demonstrate bone marrow lesions, extension of granulomas beyond the cortex, periosseous soft tissue involvement, or fine perpendicular lines extending from the ghost cortex. Although MR imaging is not necessary for the diagnosis of small bone sarcoidosis, it may be helpful in certain clinical situations, such as differentiating the cause of dactylitis in a patient with sarcoid and gout.

Large bone sarcoidosis

Detection of sarcoid lesions involving the long bones and axial skeleton is considered uncommon. Lesions may be painful or asymptomatic. Neither bone scintigraphy nor skeletal surveys have been found to be good screening studies for sarcoid lesions. Radiographic images of sarcoid involving the long bones can be found in Wilcox et al.: Bone sarcoidosis. Current Opinion in Rheumatology 2000, 12:321-330.

Large bone sarcoid lesions may be radiographically occult, or seen as focal lytic lesions or sclerosis. On MR, the lesions may be indistinct or well-marginated and of varying sizes. The signal intensity characteristics are variable, but the lesions typically have low intensity on T1WI, increased intensity on inversion-recovery, T2-weighted, and fat-saturated proton-density-weighted images, and may enhance after contrast administration. Signal intensity is likely related to the activity of the disease process. There have been cases of resolution on follow-up studies, with ghosts of the prior lesions having signal intensities consistent with fat or fibrosis.

Skull and face

Calvarial lesions may be expansile and are best evaluated with CT. MR may be used to assess for associated soft tissue involvement. Facial bone involvement usually reflects the presence of granulomatous disease in adjacent structures. A case report of sarcoid involving the petrous temporal bone can be found in Ng, Matthew and John K. Niparko: Osseous sarcoidosis presenting as a destructive petrous apex lesion. American Journal of Otolaryngology 2002, 23:241-245.

Spine

Vertebral sarcoidosis is rare and may have a lytic, mixed lytic and sclerotic, or, rarely, a predominantly sclerotic appearance. Lesions tend to affect the lower thoracic and upper lumbar spine with preservation of intervertebral disc spaces. The radiographic appearance may simulate osteomyelitis or tumor, and biopsy is often needed to establish a diagnosis. Images of vertebral sarcoidosis can be seen in the following articles:

Rua-Figueroa et al.: Vertebral sarcoidosis: clinical and imaging findings. Seminars in Arthritis and Rheumatism 2002, 31:346-352.

Jelinek et al.: Sclerotic lesions of the cervical spine in sarcoidosis. Skeletal Radiology 1998, 27:702-704.

Treatment

Osseous sarcoid responds poorly to corticosteroids and other drugs used in treating sarcoid. Corticosteroids decrease pain and soft tissue swelling but do not completely normalize bone abnormalities, and they increase the risk of osteoporosis, fractures, and avascular necrosis. Colchicine, indomethacin, and other NSAIDs may be used for symptomatic relief.

References
- Jelinek et al.: Sclerotic lesions of the cervical spine in sarcoidosis. Skeletal Radiology 1998, 27:702-704.
- Moore, Sandra L. and Teirstein, Alvin E.: Musculoskeletal sarcoidosis: spectrum of appearances at MR imaging. Radiographics 2003, 23:1389-1399.
- Ng, Matthew and John K. Niparko: Osseous sarcoidosis presenting as a destructive petrous apex lesion. American Journal of Otolaryngology 2002, 23:241-245.
- Resnick, Donald: Diagnosis of Bone and Joint Disorders. W.B. Saunders Company, Philadelphia, 1995: 4333-4352.
- Rua-Figueroa et al.: Vertebral sarcoidosis: clinical and imaging findings. Seminars in Arthritis and Rheumatism 2002, 31:346-352.
- Takashi et al.: Radiologic manifestation of sarcoidosis in various organs. Radiographics 2004, 24:87-104.
- Wilcox, Alison et al.: Bone sarcoidosis. Current Opinion in Rheumatology 2000, 12:321-330.
advocacy or arts advocacy - Advocacy is the act of pleading or arguing in favor of something, such as a cause, an idea, or a policy. Active support. This term is often used to refer to efforts to support specific art disciplines, or organizations, etc., as well as of support for the arts in general. The benefits associated with participation in the arts are both intrinsic (valuable in themselves) and instrumental (in promoting achievements in other disciplines, social and economic spheres), and that they are valuable in both private and public ways. See the Framework for Understanding the Benefits of the Arts from McCarthy's (2005) Gifts of the Muse, a RAND Corporation study funded by the Wallace Foundation. Arts education talking points: 1. The arts are central to life long learning. We are surrounded by the arts. cars and homes reflect complex and expressive design. We hear music throughout the day. TV and cinema are filled with dance and drama. More and more we go to museums, to the theater, music and dance The arts have been central to every culture past and present. Often the best way to understand other societies is through their arts. The arts are a reflection of our society. They inform and engage us, both subtly and deeply, and give meaning to our shared 2. A comprehensive, sequential arts education is essential for all students. Students can develop unique expressive skills through their creation of the arts, and the arts present ways for students with differing learning styles and abilities to "find their The arts present a powerful way for students to perceive the world around them. Thinking starts with the ability to perceive. Experience with the arts transfers to and strengthens basic thinking skills in a variety of areas, e.g., spatial-temporal thinking for higher level mathematical reasoning (research by Gardiner and Shaw), language and analytical thinking needed for verbal thinking and communication. Experiences in creating the arts are highly motivating ways for students to develop social / group skills, e.g., collaboration, loyalty, responsibility, reliability, respect for others and their work. Many state school boards have mandated all arts for all students through junior high, and proficiency in one art form for high school graduation. 3. The arts should be integrated into the curriculum and taught as independent disciplines. Dance, theater, and the visual arts are each a distinct discipline and students must learn to critique and understand the role of each in society. They should also be introduced to creating in each art form. The arts are basic to the study of social studies and language arts since they are found in all social contexts and are a means of communication. The arts are a highly motivating method for students to learn about many subjects including math, science and foreign languages. 4. Arts education prepares students for There are many well-paying, interesting job opportunities in the arts, or that use an arts background in the technology / communications and entertainment industries and in education. Business seeks students with arts degrees because they have developed valuable reasoning, creating and communication skills. 5. Arts education prepares students for The U.S. Department of Education recommends that college bound middle school, junior high and high school students study the arts. Many universities require one high school arts credit for admission. 
The skills and behaviors students need to learn for successful job performance are directly impacted by their training in the arts. A 1998 study by the Arts Education Research Center at NYU argues that achievement test scores in academic subjects improve when the arts are used to assist learning in mathematics, creative writing, and communication skills. ("Theory and Practice in Arts Education: A Report on the National Arts Education Research Center at New York University" by Jerrold Ross. ERIC #: ED356977) Arts Rich Schools had a 3% lower dropout rate before graduation than Arts Poor Schools. (Champions of The writing quality of elementary students was consistently and significantly improved by using drawing and drama techniques that allowed the students to experiment, evaluate, revise and integrate ideas before writing began, thus significantly improving results. (B.H. Moore and H. Caldwell) High-risk elementary students with one year in the arts-unfused "Different Ways of Knowing" program gained eight percentile points on standardized language arts tests; students with two years in the program gained 16 percentile points. Non-program students showed no percentile gain in language arts. (J.S. Catterall) "I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture, navigation, commerce, and agriculture in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and John Adams (1735-1826), America's second president, wrote in a letter to his wife Abigail from Paris, while on a diplomatic mission to the France during America's Revolutionary War, July, "Art in all its distinct forms defines, in many ways, those qualities that are at the heart of education reform — creativity, perseverance, a sense of standards, and above all, a striving for excellence." Richard W. Riley, U.S. Secretary of Education under President William Clinton. Americans for the Arts has been running rousing campaigns in the mass media, leading with such questions as "There's not enough art in our schools. No wonder people think Martha Graham is a snack cracker." And, ". . . No wonder people say 'Gesundheit' when you say Take a good look at their varied and ongoing art advocacy. During the past decade, arts advocates have relied on an instrumental approach to the benefits of the arts in arguing for support of the arts. This report evaluates these arguments and asserts that a new approach is needed. This new approach offers a more comprehensive view of how the arts create private and public value, underscores the importance of the arts’ intrinsic benefits, and links the creation of benefits to arts involvement. Gifts of the Muse responds to the prevailing view that in the public realm, the arts are an instrument for achieving broad social and economic goals (economic growth, improved student learning, community revitalization), while the intrinsic benefits have been viewed as only of private, personal value. Gifts of the Muse is a powerful tool for those involved in public policymaking, because of its findings that the intrinsic benefits of the arts provide the foundation for the creation of instrumental benefits. 
It argues that the purely "instrumental" approach ignores key benefits that are created uniquely by arts experiences, and it is a springboard for discussion about a new approach: one that recognizes the continuum between intrinsic and instrumental benefits, asserts that intrinsic benefits must be created in order for instrumental benefits to be realized, examines how both are connected to creating public value, and shows how benefits are linked to public participation.

The Association for Supervision and Curriculum Development (ASCD) is the principal organization for school administrators in the United States. "In the Front Row - The Arts Give Students a Ticket to Learning" is an article by Rick Allen in the spring 2004 issue of ASCD's journal Curriculum Update. Allen writes that, "Although the visual arts, music, and theater might seem locked in a losing battle with other subjects for money and time in schools, experts say that a strong case still can be made for increasing the arts in schools. Eric Jensen, author of the ASCD book Arts with the Brain in Mind, argues that the arts should be a major discipline in the schools -- 'one worth making everybody study and learn.'" Not only can the arts be a powerful solution for helping educators reach a wide range of learners, they also "enhance the process of learning" by developing a student's "integrated sensory, attentional, cognitive, emotional, and motor capacities," writes Jensen. Such brain systems are the driving forces behind all other learning, he adds. Allen notes that although reading and math may grab the headlines, arts education advocates retain a long-term optimism as they push for arts integration, professional development, and community partnerships to advance their cause.

Keep Arts In Schools is a project for the Ford Foundation, for which Douglas Gould & Company is developing messages and conducting opinion research to determine how best to frame arts education for advocates who seek to build a constituency for lasting change. KeepArtsInSchools.org features the work executed to date on this project and seeks to arm advocates with the tools and resources they need to be more effective in their work and in their communications to keep arts education in public schools.

The Pew Charitable Trusts, Philadelphia, PA, USA. Optimizing America's Cultural Resources is Pew's largest national cultural initiative ever. Begun in 2000, its goal is to strengthen political and financial support for nonprofit culture by building an infrastructure for the development of more effective private and public policies affecting American arts and culture. It will be a five-year, multi-million-dollar effort. "Art and culture are the second largest export in America after technology," said Marian A. Godfrey, director of the Culture program for The Pew Charitable Trusts. "And while culture plays a significant role in the American economy, contributing between three and six percent of the gross domestic product, we have no organizing framework for this remarkable cultural richness and no overall context in which to understand and nurture it." The main goal of this initiative is to usher in a new era of cultural policy development to ensure that the cultural heritage and artistic resources of the USA are appropriately sustained and supported. "We hope to make available, for the first time, a new level of comprehensive, fact-based information on America's cultural life.
This information can guide a more meaningful and, we hope, a broader dialogue on the role of arts and culture in our society," said Stephen K. Urice, the officer of Pew's Culture program with responsibility for the new initiative. "We are reinforcing the idea that the arts are a necessary and vital part of the health of our society." The research component of the initiative consists of gathering, developing, and evaluating data on American arts and culture. The RAND research study will address the absence of comprehensive data on the arts by compiling an information compendium that will include databases, research studies, and other literature on the performing, visual, and literary arts and the major disciplines within each of these branches. Envisioned as a key element of the Trusts' research strategy is the creation of a national cultural information exchange. The exchange would serve as a repository and resource for cultural statistics, sponsor rigorous research, and conduct polling. It would deliver its information to opinion leaders and policy-makers through the media, cultural service organizations, and professional publications. Promoting a more informed and broader dialogue on the importance of arts and culture to our society is another major objective of the strategy. The Trusts will build on their current support of the National Arts Journalism Program at Columbia University, established in 1993 to increase the quantity and quality of arts reporting, by encouraging the development of arts and culture news programming on public, cable, and commercial broadcasting. The Trusts will also seek to strengthen the advocacy capacity of the arts sector by partnering with other cultural organizations and grantmakers. Recognizing governmental and foundation demands for greater accountability, the Trusts will work closely with cultural institutions and their service organizations to strengthen the institutions' capacity to evaluate the results and impact of their programs and activities. They will also seek ways to help the cultural community develop the leadership that will be needed to maintain a strong and vibrant future.

ArtsEdge at The Kennedy Center for the Performing Arts, Washington, DC. The Arts Education Partnership. The Arts Education Partnership has published a book on arts education titled Third Space. Third Space tells the story of the profound changes in the lives of kids, teachers, and parents in 10 economically disadvantaged communities across the country that placed their bets on the arts as a way to create great schools.
- A group of vampires has variously been called a clutch, brood, coven, pack, or clan (a clan if they're Scottish!).
- The Muppet vampire, Count von Count from Sesame Street, is based on actual vampire myth. One way to supposedly deter a vampire is to throw seeds (usually mustard) outside a door or place a fishing net outside a window. Vampires are compelled to count the seeds or the holes in the net, delaying them until the sun comes up.
- A rare disease called porphyria causes vampire-like symptoms, such as an extreme sensitivity to sunlight and sometimes hairiness. In extreme cases, teeth might be stained reddish brown, and eventually the patient may go mad.
- One of the most famous "true vampires" was Countess Elizabeth Bathory (1560-1614), who was accused of biting the flesh of girls while torturing them and bathing in their blood to retain her youthful beauty. She was by all accounts a very attractive woman.
- Vampire legends may have been based on Vlad of Walachia, also known as Vlad the Impaler (1431-1476). He had a habit of nailing hats to people's heads, skinning them alive, and impaling them on upright stakes. He also liked to dip bread into the blood of his enemies and eat it. His patronymic, Dracula, means "son of the dragon" (his father was Vlad Dracul), and he has been identified as the historical Dracula. Though Vlad the Impaler was murdered in 1476, his tomb is reported empty.
- One of the earliest accounts of vampires is found in an ancient Sumerian and Babylonian myth dating to 4,000 B.C., which describes the ekimmu or edimmu ("one who is snatched away"). The ekimmu is a type of uruku or utukku (a spirit or demon) who was not buried properly and has returned as a vengeful spirit to suck the life out of the living.
- Prehistoric stone monuments called "dolmens" have been found over the graves of the dead in northwest Europe. Anthropologists speculate they were placed over graves to keep vampires from rising.
- Chinese vampires were called ch'iang shih ("corpse-hopper") and had red eyes and crooked claws. They were said to have a strong sexual drive that led them to attack women. As they grew stronger, the ch'iang shih gained the ability to fly, grew long white hair, and could also change into a wolf.
- In 2009, a sixteenth-century female skull with a rock wedged in its mouth was found near the remains of plague victims. It was not unusual during that century to shove a rock or brick into the mouth of a suspected vampire to prevent it from feeding on the bodies of other plague victims or attacking the living. Female vampires were also often blamed for spreading the bubonic plague throughout Europe.
- According to several legends, if someone was bitten by a suspected vampire, he or she should drink the ashes of a burned vampire. To prevent an attack, a person should make bread with the blood of a vampire and eat it.
- The legend that vampires must sleep in coffins probably arose from reports of gravediggers and morticians who described corpses suddenly sitting up in their graves or coffins. This eerie phenomenon could be caused by the decomposition process.
- According to some legends, a vampire may engage in sex with his former wife, which often led to pregnancy. In fact, this belief may have provided a convenient explanation as to why a widow, who was supposed to be celibate, became pregnant. The resulting child was called a gloglave in Bulgarian or vampirdzii in Turkish. Rather than being ostracized, the child was considered a hero who had powers to slay a vampire.
- In folklore, people can become vampires not only through a bite, but also if they were once a werewolf, practiced sorcery, were excommunicated, committed suicide, were an illegitimate child of parents who were themselves illegitimate, or were stillborn or died before baptism. In addition, anyone who had eaten the flesh of a sheep killed by a wolf, was a seventh son, was the child of a pregnant woman who was looked upon by a vampire, was a nun who stepped over an unburied body, had teeth at birth, or had a cat jump on their corpse before burial could also turn into a vampire.
- Mermaids can also be vampires -- but instead of sucking blood, they suck out the breath of their victims.
- In some vampire folktales, vampires can marry and move to another city, where they take up jobs suitable for vampires, such as butchers, barbers, and tailors. That they become butchers may be based on the analogy that the butcher is a descendant of the sacrificer.

Wednesday, September 19, 2012

This DIY comes from www.thevintagedresser.blogspot.com. When ordinary leaves just won't do.

Tuscany draws you with an irresistible air to Arezzo, transforming the land into a theater filled with a lifestyle of tradition, culture, and cuisine. Arezzo, which is about 80 kilometers from Florence, hosts the Giostra del Saracino, a joust held at the Piazza Grande. This medieval festival displays facets of the city's old-world charm, its famous history and traditions, and its tasteful cuisine. Held on the first Sunday in September, the procession of La Giostra del Saracino winds its way right down to the Piazza Grande. Originating from the ancient Crusades, this Saracen joust began in the Middle Ages, when Christian Crusaders battled the Islamic tribes, the Moors of the North African Arabs, in an attempt to drive them out of Europe. The joust flourished between the 15th and 16th centuries and gained popularity, but during the 18th century the royal air that surrounded it declined and it lost its renown. A brief spell of fame enveloped the game during the Romantic period. With its culture of tradition, the Giostra del Saracino was re-established as a historic event in 1931, with its original 14th-century ambience. In the spirit of competition and joy, the joust has also taken place when dignitaries and princes visit the city and during important functions, carnivals, and weddings. Held twice a year, La Giostra del Saracino is also enacted on the third Saturday of June in honor of San Donato, as well as on the first Sunday in September at Arezzo.

Exciting and exhilarating, this medieval joust starts with an air of anticipation as the procession of eight knights clad in chain armor canters past on horseback. The knights represent the four quarters of the old city: Porta Crucifera in red and green, Porta del Foro in yellow and crimson, Porta Sant'Andrea in green and white, and Porta Santo Spirito in yellow and blue. The parade follows with 311 people dressed in 14th-century apparel and 31 horses trotting along with their riders, with multi-colored flags held by the flag bearers. The joust begins with a traditional ritual in which the Bishop blesses the armies on the steps of the Cathedral. Then the Araldo reads the "Disfida di Buratto," a poetic recital dating back to the 17th century. A greeting is extended to the knights and the authorities in charge.
A musical chorus, the "Inno del Saracino," is sung by the Gruppo Musici, and the final go-ahead signal is given by the Magistrates to start La Giostra del Saracino. The aim of the joust is to hit the shield held by a wooden effigy of a Saracen. The Maestro del Campo, or Master of the Field, gives the signal for the knights to race on their mounts toward the wooden effigy. If a knight misses the target, the Saracen effigy, portraying the "Puppet King of the Indies," swings a spiked ball that strikes the knight if he is not careful. The crowds cheer as the knight from their quarter finds his mark, lapse into silence when he does not, and turn to distracting the knights from the other quarters when it is their turn. The knights who hit the shield of the effigy win the most points and go on to win the prize of the Golden Lance. Arezzo takes you back into the past with its memorable traditions and a culture that traverses the ancient ages.
A.Word.A.Day with Anu Garg

dissert

MEANING: verb intr.: To speak or write at length on a subject.

ETYMOLOGY: From Latin disserere (to arrange in order), from dis- (apart, away) + serere (to join). Ultimately from the Indo-European root ser- (to line up), which is also the source of words such as series, assert, desert (to abandon), desert (a dry sandy region), sort, consort, and sorcerer.

NOTES: Here are various words with similar looks and sounds, some related, some not:
dessert (di-ZUHRT), as in "fat-free dessert", from French desservir (to clear the table)
desert (DEZ-uhrt), as in "the Sahara", from Latin deserere (to abandon)
desert (di-ZUHRT), as in "to desert the army", from Latin deserere (to abandon)
desert (di-ZUHRT), as in "to receive just deserts", from Latin deservire (to serve zealously)

USAGE: "There is no small amount of allure in hearing Evan dissert brusquely on his rationale for keeping certain women in the game." Scott Feschuk; Reality Chicks; National Post (Canada); Jan 15, 2003.

A THOUGHT FOR TODAY: Like a lawyer, the human brain wants victory, not truth; and, like a lawyer, it is sometimes more admirable for skill than virtue. -Robert Wright, author and journalist (b. 1957)
It is, however, important to note that all classes of opioid receptors share key similarities. First, the receptors have a common general structure. Cloning demonstrates that the receptors are usually G protein-linked receptors embedded in the plasma membrane of neurons (Satoh and Minami, 1995). Once the receptors are bound, a portion of the G protein is activated, which allows it to diffuse within the plasma membrane. The G protein moves within the membrane until it reaches its target, which is either an enzyme or an ion channel. Most often, the targets alter protein phosphorylation and/or gene transcription, which alter the short-term and long-term activity of the neuron, respectively. Although opioids usually activate G proteins, it was recently demonstrated that opioids occasionally act independently of G proteins. A key study found that DAMGO, a selective mu receptor agonist, modulates calcium-dependent potassium channels independently of G proteins in bovine adrenal medullary chromaffin cells (Twitchell and Rane, 1994). The finding further highlights the complexity of the opioid system.

A second similarity is that activation of any type of opioid receptor inhibits adenylate cyclase (Childers, 1991), an enzyme responsible for catalyzing numerous chemical reactions in neurons. Each type of receptor appears to share a common property that allows it to alter adenylate cyclase activity, which may explain why different types of opioid receptors occasionally have the same effect on a neuron. Even though opioid receptors share the ability to inhibit adenylate cyclase, each receptor subtype has a unique series of effects that cannot be produced by any other type of opioid receptor.

A final similarity is that all types of opioid receptors are present both presynaptically and postsynaptically in neurons (review in Simon, 1991). When acting at presynaptic receptors, the peptides function as neuromodulators affecting the release of neurotransmitters. At postsynaptic receptors, the peptides act as neurotransmitters by directly altering membrane potentials. The overall effect of opioids on a particular tissue depends upon the concentration and location of particular opioid receptors in the area.

Clinical and recreational opioid use is limited by the development of tolerance and dependence. Tolerance can be defined as the decreased potency of a drug, such that progressively larger doses must be used to achieve the same effect. Dependence, which is closely associated with tolerance, involves a continued need for opioid administration in order to prevent withdrawal symptoms. These symptoms include nausea, gastrointestinal disturbances, chills, and a general flu-like state in humans (Jaffe, 1980), and ptosis (drooping eyelid), teeth chattering, jumping, irritability, wet dog shakes, and diarrhea in animals (Wei et al., 1973). Lesion studies indicate that no single brain structure is responsible for the withdrawal symptoms (Adler et al., 1978).

The opioid system is connected with most neurotransmitter networks in the body. The interaction between the opioids and the dopaminergic system appears to be involved in addiction, tolerance, and withdrawal symptoms. The relevant interaction appears to occur along the mesolimbic projection, particularly in the ventral tegmental area (VTA) and nucleus accumbens (NA). It has further been demonstrated that opiates applied to the VTA prompt animals to engage in behaviors that increase dopamine activity.
Specifically, VTA morphine causes rats to self-administer cocaine (Stewart, 1984), which is known to potentiate DA activity. The study suggests that dopamine further augments the rewarding properties of opioids in the VTA. In fact, morphine enhances the firing frequency of mesolimbic DA neurons projecting from the VTA (Matthews and German, 1984), which provides firm evidence that opioids have an excitatory effect on dopamine. Not only do opioids have an excitatory effect on dopamine; the effects of opioids also seem to be contingent upon dopamine activation. Dopamine antagonists, molecules that bind to the receptor and prevent it from being activated, block the effect of opioids by halting morphine-induced activities (Iwamoto, 1981). Although dopamine excitation likely increases the rewarding effect of opioids, it appears that reinforcement is not contingent upon dopamine activation. A key study found that heroin self-administration continues after disruption of DA innervation in the NA, which suggests that the rewarding effects of opiates are only partially contingent on DA release in the NA (Koob and Bloom, 1988). The finding is consistent with the discovery that animals will self-administer opioids in the NA (Olds, 1982), which suggests that opioid activity in the NA has a rewarding effect independent of neurons from the VTA. It is important to note that the animals will modify their behavior more to obtain opioids in the VTA (Olds, 1982), which suggests that VTA activation produces a more rewarding effect than NA activation. Another line of research suggests that dopaminergic input is not necessary for opioid reward: 6-OHDA lesions in the NA, which specifically destroy dopaminergic neurons, did not affect lever pressing linked to opioid reward (Robbins et al., 1989). Yet dopamine does appear to have an excitatory effect on opioid-induced reward. Dopamine antagonists slow response speed in reinforced tasks but do not eliminate the response altogether (Evenden and Robbins, 1983). A recent study also found potentiation of opioid activity by dopamine agonists, while dopamine antagonists inhibit opioid activity (Cook et al., 1999). On a cellular level, dopamine administration elicits changes in electrical activity when NA cell slices are placed in a test tube mimicking the brain's environment (Pennartz et al., 1992). In reviewing the body of research, dopamine seems to enhance the actions of opioids on reward in the NA, but does not appear to be required for reinforcement. It is interesting to note that opioid and dopamine agonists alike, both substances associated with addiction, depress overall excitation in the NA (Pennartz et al., 1992). Yet muscarinic agonists decrease excitation in the NA without altering addiction (Pennartz and Lopes da Silva, 1994). The NA appears to be involved not only in opioid withdrawal but also in opioid tolerance. A study examining NA dopamine concentrations found that concentrations are higher in tolerant rats than in controls (Johnson and Glick, 1993). Researchers have yet to demonstrate how this change in dopamine concentration, associated with a change in reinforcement, plays a role in opioid tolerance.

Childers, S.R., Opioid receptor-coupled second messenger systems, Life Science, 48: 1991-2003, 1991.
Cook, C.D., Rodefer, J.S., and Picker, M.J., Selective attenuation of the antinociceptive effects of mu opioids by the putative dopamine D3 agonist 7-OH-DPAT, Psychopharmacol., 144: 239-247, 1999.
Dhawan, B.N., Cesselin, F., Raghubir, R., Reisine, T., Bradley, P.B., Portoghese, P.S., and Hamon, M., International union of pharmacology classification of opioid receptors, Pharmac. Rev., 48: 567-591, 1996.
Di Chiara, G. and North, R.A., Neurobiology of opiate abuse, Trends in Pharmacol. Sci., 13: 185-193, 1992.
Evenden, J.L. and Robbins, T.W., Dissociable effects of d-amphetamine, chlordiazepoxide and alpha-flupentixol on choice and rate measures of reinforcement in the rat, Psychopharmacol., 79: 180-186, 1983.
Graybiel, A.M., Moratalla, R., and Robertson, H.A., Amphetamine and cocaine induce drug-specific activation of the c-fos gene in striosome-matrix compartments and limbic subdivisions of the striatum, Proc. Natn. Acad. Sci. USA, 87: 6912-6916, 1990.
Iwamoto, E.T., Locomotor activity and antinociception after putative mu, kappa, and sigma opioid agonists in the rat: Influence of dopaminergic agonists and antagonists, J. Pharmac. Exp. Ther., 217: 451-460, 1981.
Jaffe, J.H., Drug addiction and drug abuse, in Goodman and Gilman's The Pharmacological Basis of Therapeutics (eds Goodman, L.S., Gilman, A., Mayer, S.E., and Melmon, K.L.), 545-546, MacMillan, New York, 1980.
Johnson, D.W. and Glick, S.D., Dopamine release and metabolism in NA and striatum of morphine-tolerant and nontolerant rats, Pharmacol. Biochem. and Behav., 46: 341-347, 1993.
Koob, G.F. and Bloom, F.E., Cellular and molecular mechanisms of drug abuse, Science, 242: 715-723, 1988.
Koob, G.F., Drugs of abuse: anatomy, pharmacology, and function of reward pathways, Trends in Pharmacol. Sci., 13: 177-184, 1992.
Matthews, R.T. and German, D.C., Electrophysiological evidence for excitation of rat VTA dopamine neurons by morphine, Neurosci., 11: 617-625, 1984.
Mavridis, M. and Besson, M.J., Dopamine-opiate interaction in the regulation of neostriatal and pallidal neuronal activity as assessed by opioid precursor peptides and glutamate decarboxylase messenger RNA expression, Neurosci., 92: 945-966, 1999.
Olds, M.E., Reinforcing effects of morphine in the nucleus accumbens, Brain Res., 237: 429-440, 1982.
Phillips, A.G. and LePiane, F.G., Pharmacol. Biochem. Behav., 12: 965-968, 1980.
Pennartz, C.M.A., Dolleman-van der Weel, M.J., Kitai, S.T., and Lopes da Silva, F.H., Presynaptic dopamine D1 receptors attenuate excitatory and inhibitory limbic inputs to the shell region of the rat NA studied in vitro, J. Neurophysiol., 67: 1325-1334, 1992.
Pennartz, C.M.A. and Lopes da Silva, F.H., Muscarinic modulation of synaptic transmission in rat NA slices is frequency-dependent, Brain Res., 1994.
Robbins, T.W., Cador, M., Taylor, J.R., and Everitt, B.J., Limbic-striatal interactions in reward-related processes, Neurosci. Biobehav. Rev., 155-162, 1989.
Satoh, M. and Minami, M., Molecular pharmacology of the opioid receptors, Pharmacol. Ther., 68: 343-364, 1995.
Simon, E.J., Opioid receptors and endogenous opioid peptides, Medicinal Res. Rev., 11: 357-374, 1991.
Stewart, J., Reinstatement of heroin and cocaine self-administration behavior in the rat by intracerebral application of morphine in the ventral tegmental area, Pharmacol. Biochem. Behav., 20: 917-923, 1984.
Stinus, L., LeMoal, M., and Koob, G.F., NA and amygdala are possible substrates for the aversive stimulus effects of opiate withdrawal, Neurosci., 37: 767-773, 1990.
Twitchell, W.A. and Rane, S.G., Nucleotide-independent modulation of a calcium-dependent potassium channel current by a mu-type opioid receptor, Mol. Pharmacol., 49: 793-798, 1994.
Wei, E., Loh, H.H., and Way, E.L., Quantitative aspects of precipitated abstinence in morphine-dependent rats, J. Pharmac. Exp. Ther., 184: 398-403, 1973.
Welzl, H., Kuhn, G., and Huston, J.P., Self-administration of small amounts of morphine through glass micropipettes into the ventral tegmental area of the rat, Neuropharmac., 28: 1017-1023, 1989.
Westerink, B.H. and Korf, J., Regional rat brain levels of dihydroxyphenylacetic acid and homovanillic acid: Concurrent fluorimetric measurement and influence of drugs, Eur. J. Pharmacol., 38: 281-291, 1976.
Multicore designs are running out of gas, given the lack of parallelism in most software. Nevertheless, "there are several really interesting opportunities for new microprocessors."

Indeed, we're still waiting to see the real benefits of those cores! Multicore platforms are tricky to program, and performance is inherently limited by a single shared memory. I think a much more promising platform is many-core with distributed memory (like Adapteva or Kalray). It will still be difficult to write manycore programs, but at least the architecture is sound. I believe that there could be another way: if it were easier to design hardware (for example using better languages, like Cx), people could actually make their own accelerators. Then all you would need would be better FPGA architectures (using much less area and power) or a simpler, cheaper way to make ASICs. Lattice seems to be getting pretty good at low-power FPGAs, and eASIC's solution looks interesting for lower-cost ASICs. Maybe we'll get there soon? (The fact that Intel is making a hybrid Xeon-FPGA chip might be another indication.)

Old newspaper headline trick: if the headline ends in a question mark, the answer is "No."

Frankly, it's Intel's to lose. To the extent that they continue to provide the most value, Intel will remain. It is interesting to note that Intel is leading the disruptions, in that they took on the ARM threat in servers head-on and got their microserver chip to market two years ahead of ARM offerings. And not only have they gone down-market with Atom, which is more power-efficient and/or performant than ARM; they have recently introduced Quark for robotics and the Internet of Things. While it certainly behooves Google and Apple to invest in their own hardware, it doesn't seem like it will pencil out, because of the tremendous volumes and the billions in capital it takes each year to be competitive in the chip business.

Zvi Orbach mentioned a 1T SRAM cell being developed by one of his companies. Since most processors today consist mostly of large caches, and since this will most likely be available to companies outside Intel (I believe), it could greatly help Intel's competitors. Also, eASIC recently started to offer low-NRE, low-volume 28nm ASIC manufacturing. Like Matthieu says, it might be good enough to compete with Intel in some use cases.

Apple/Google/Amazon/etc. may be able to spend a bunch of $$$$$ to save a few $$ for themselves. Who else would use their custom CPU? Given the tremendous investment required to maintain the processor infrastructure (compilers, memory interfaces, I/O hubs, annual product refresh, etc.), I can't see this being anything more than an attempt to get Intel to lower their prices. Google and others must do what they do best - and it isn't making CPUs. They may choose a different CPU company that better meets their needs, but I can't imagine there ever being a benefit to becoming a CPU company.

The biggest threat to Intel's CPU business are the likes of Janet Reno...

I would not trust much the opinion of someone with Transmeta credentials regarding microprocessors. Intel's strength is not in microprocessor architecture but in semiconductor processing, in which it is 2.5-3 generations ahead of the next guy. Google would be really dumb trying to take on Intel in microprocessors; it is VERY far away from their core competence. Charlie Sporck was one of the greats of the semiconductor industry, and he lost his job at Nat Semi because he tried and failed to take on Intel in microprocessors.
The Nat Semi processor was MUCH better architecturally than the x86 architecture (and it was optimized to run Unix), but Intel easily won because Nat Semi could not compete in IC manufacturing.

I don't think Google or Apple can disrupt Intel. Google, being a predominantly web company, can try its hand at hardware, but being successful, or even staying in the market, would be a challenge. Hardware is a different business; it doesn't work like the web. They may scratch at it for some time and then leave. Apple also may not go very far in hardware design.
Change in wind chill factor means current marks can't be directly compared to old ones

63 below is no doubt bone-chilling, but it's not the same as before 2001

When parents in Minot, N.D., tell their youngsters that Thursday's wind chill of 63 degrees below zero wasn't as low as the wind chill when they were kids, they're not just blowing hot air. In fact, it would have been a 78-below wind chill prior to 2001, when the National Weather Service revised its wind chill index, said Rich Leblang, weather service meteorologist in Bismarck.

Wind chills got a lot of attention this week as even the lightest breeze became bone-chilling and record low temperatures were set in Bismarck and Grand Forks. But while temperature comparisons are consistent over the years, one can't compare today's wind chill figures with those used before 2001. The National Weather Service made the switch that year to a revised formula developed by a U.S.-Canadian coalition and designed as a more realistic guide to how the wind feels to exposed skin.

The formula is "ridiculously complicated," and questions linger about the best way to calculate wind chill, Leblang said. "When you get into thermodynamics, it gets ugly, (with) so many molecules moving at different speeds," he said.

Leblang said the coldest wind chill he recalls during his 35 years with the weather service in Bismarck was 86 below on the night of Dec. 23-24, 1983. That same night, Williston flirted with a wind chill of 100 below, which would be 71 below by today's index, he said.

The current index dips into negative wind chills faster than the old index, but the wind chill drops faster and further in extreme cold under the old index, Leblang said. For example, a temperature of 5 degrees with a 5-mph wind yields a wind chill of 5 below under the current chart and zero degrees under the old chart. At a temperature of 40 below with a 20-mph wind, the wind chill is 74 below under the current chart and 95 below under the old chart.

The current index was based on a human face model at an average height of 5 feet, but different people will feel different levels of cold from the wind, Leblang said. "Because it involves biology and perception from person to person, there is no perfect formula," he said.

North Dakota is expected to take a huge temperature swing out of the deep freeze today, with forecast highs of 10 degrees in Fargo and 27 to 28 degrees in Bismarck, which set a record low of 44 below Thursday morning. Fargo should warm up into the upper 20s on Tuesday, said Jim Kaiser, meteorologist at the weather service office in Grand Forks. Fargo's low temperature Thursday of 30 below was about 25 degrees below normal, and forecast highs of 25 to 30 early next week would be 10 to 15 degrees above normal, Kaiser said. "When you combine those two in that short amount of time, that's fairly rare," he said.
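For readers who want to check the numbers, the post-2001 index comes from a single published empirical formula. The minimal Python sketch below uses the standard 2001 NWS/JAG-TI coefficients (temperature in degrees Fahrenheit, wind in mph, valid for temperatures at or below 50 degrees and winds above 3 mph) and reproduces the article's example values:

```python
def wind_chill_f(temp_f, wind_mph):
    """Post-2001 NWS wind chill (deg F); valid for temp_f <= 50 and wind_mph > 3."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

print(round(wind_chill_f(5, 5)))     # -5, matching "5 below under the current chart"
print(round(wind_chill_f(-40, 20)))  # -74, matching "74 below under the current chart"
```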
2.4 - Nazi Re-writing of Paragraph 175

The Nazis took power in January 1933 on a platform of law and order, "traditional values," and an ideology of racial purity that included virulent antisemitism and the persecution of unwanted social groups. Among its first steps to create the "New Order," the regime shut down homosexual gathering places, organizations, and publications in a broad attack on "public indecency." The Nazi assault on homosexuality had begun.

The "New" Paragraph 175

As the regime consolidated power and centralized state authority, the instruments of persecution emerged. Propaganda in the wake of a major political crisis in mid-1934 linked homosexuality to subversion, even treason, thereby encouraging public intolerance. In 1935, Nazi authorities rewrote criminal law Paragraph 175, and subsequent court interpretation radically expanded the range of punishable "indecencies between men." Enforcement of Paragraph 175 fell to the Criminal Police and the Gestapo, unified by 1936 under the SS and its leader, Reichsführer-SS Heinrich Himmler. The nation's police forces gained extraordinary authority to employ surveillance on suspect individuals and to seize and detain "enemies of the state." During the 30 months from early 1937 to mid-1939, German police arrested almost 78,000 men under Paragraph 175, one-third of whom were convicted and sentenced to prison. Hundreds more were interned in concentration camps outside the legal process. All were subjected to brutal mistreatment at the hands of police, interrogators, and guards.

The state's initial steps to restore law and order focused on professional criminals and "habitual sex offenders." The second category included not only men with two convictions under Paragraph 175, but also men expected "with a high degree of probability" to violate that law. Regulations issued in February 1934 ordered police surveillance of these individuals and authorized restrictions on their activities. The increasing police interest in the lives of homosexual men drove a few to emigrate where they could. The vast majority, however, began to conceal their homosexuality; many married. Others committed suicide.

Paragraph 175 had been part of the German criminal code from the time of the German Empire under Kaiser Wilhelm I. As part of a massive rewriting of the criminal code, Nazi jurists revised Paragraph 175. Issued on June 28, 1935, and put into effect on September 1, 1935, the revision emphasized the criminality of both men involved in "indecency." The revised law opened the way to new judicial interpretations, because the criminalized conduct was no longer described as "unnatural" (though the term frequently appeared in police documents thereafter). Even before the new law went into effect, Nazi courts expanded the range of so-called indecent acts beyond the single offense prosecuted under the old law. By 1938, German courts ruled that any contact between men deemed to have sexual intent, even "simple looking" or "simple touching," could be grounds for arrest and conviction. New language added as Paragraph 175a specifically imposed up to ten years' hard labor for "indecency" committed under coercion, with adolescents under the age of 21, and for male prostitution. In practice, however, individuals victimized by acts punishable under these new provisions could be - and were - prosecuted as criminals according to Paragraph 175. (The revised law left homosexuality between women unmentioned.)

Reichsgesetzblatt Teil 1, Jahrgang 1935, p.
841: Article 6, "Unzucht [indecency] zwischen Männern," §175 and §175a (28 June 1935). United States Holocaust Memorial Museum #058. English translation by Warren Johannson and William Percy in "Homosexuals in Nazi Germany," Simon Wiesenthal Center Annual, Vol. 7 (1990).

Indecency between Men

1. §175 of the Penal Code contains the following wording:

§175 A male who commits lewd and lascivious acts with another male or permits himself to be so abused for lewd and lascivious acts shall be punished by imprisonment. In the case of a participant under 21 years of age at the time of the commission of the act, the court may, in especially slight cases, refrain from punishment.

2. The following rule shall be added after §175 of the Penal Code as §175a:

§175a Confinement in a penitentiary not to exceed ten years and, under extenuating circumstances, imprisonment for not less than three months shall be imposed:
- Upon a male who, with force or with threat of imminent danger to life and limb, compels another male to commit lewd and lascivious acts with him or compels the other party to submit to abuse for lewd and lascivious acts;
- Upon a male who, by abuse of a relationship of dependence upon him, in consequence of service, employment, or subordination, induces another male to commit lewd and lascivious acts with him or to submit to being abused for such acts;
- Upon a male who, being over 21 years of age, induces another male under 21 years of age to commit lewd and lascivious acts with him or to submit to being abused for such acts;
- Upon a male who professionally engages in lewd and lascivious acts with other men, or submits to such abuse by other men, or offers himself for lewd and lascivious acts with other men.

Lewd and lascivious acts contrary to nature between human beings and animals shall be punished by imprisonment; loss of civil rights may also be imposed.

Once Paragraph 175a was in effect, the annual number of convictions on charges of homosexuality leaped to about ten times the number in the pre-Nazi period. The law was so loosely formulated that it could be -- and was -- applied against heterosexuals whom the Nazis wanted to eliminate. The most notorious example of an individual convicted on trumped-up charges was General Werner von Fritsch, Army Chief of Staff, and the law was also used repeatedly against members of the Catholic clergy. But the law was undoubtedly used primarily against gay people, and the court system was aided in the witch-hunt by the entire German populace, which was encouraged to scrutinize the behaviour of neighbours and to denounce suspects to the Gestapo.

No one knows how many homosexual men were killed by the Nazis before and during the war. But let us look at some figures. First, on the home front, the numbers of homosexual men (non-military) convicted under Paragraph 175 and sent to prison were:

[Table: annual number of gay men convicted under Paragraph 175; the year-by-year figures did not survive in this copy.]

The Nazis passed other laws that targeted sex offenders. In 1933, they enacted the Law Against Dangerous Habitual Criminals and Measures for Protection and Recovery. This law gave German judges the power to order compulsory castration in cases involving rape, defilement, illicit sex acts with children (Paragraph 176), coercion to commit sex offenses (Paragraph 177), the committing of indecent acts in public, including homosexual acts (Paragraph 183), and murder or manslaughter of a victim (Paragraphs 223-226) if committed to arouse or gratify the sex drive, or homosexual acts with boys under 14.
The Amendment to the Law for the Prevention of Offspring with Hereditary Diseases, dated June 26, 1935, allowed castration on criminal grounds for men convicted under Paragraph 175 if the men consented. These new laws defined homosexuals as "asocials" who were a threat to the Reich and the moral purity of Germany. The punishment for "chronic homosexuals" was incarceration in a concentration camp. A May 20, 1939 memo from Himmler allowed concentration camp prisoners to be blackmailed into castration.

While in 1934, 766 males were convicted and imprisoned, in 1936 the figure exceeded 4,000, and in 1938, 8,000. Moreover, from 1937 onwards many of those involved were sent to concentration camps after they had served their "regular" prison sentence. To this total of nearly 50,000 should be added a significant proportion of the 56,000 people subjected to "sterilization," i.e., castration. These very large numbers of convicted homosexual civilians suggest a much higher figure for the front lines, where most of the men were, and for the concentration camps (for which there are few records).
Magnetic Dipole Moment

Purpose: To determine the strength of the Earth's magnetic field in Dallas. To determine the magnetic dipole moment of a magnet.

Equipment: Helmholtz coils, multimeter functioning as an ammeter, DC power supply, ruler, stopwatch, cylindrical magnet, compass, thread, triple beam balance, Vernier caliper.

Theory: The magnetic dipole moment of a substance (how well it acts as a magnet) can be determined by suspending a sample of the substance on a torsion fiber and measuring the period of the oscillations of the sample in an applied magnetic field. The larger the magnetic dipole moment, the faster the oscillation. In this experiment, a permanent magnet in the shape of a thin cylindrical rod is suspended from a rigid support by a cotton thread. The cylindrical axis of the magnet is in the horizontal plane, and thus the plane of oscillation is also the horizontal plane. The equilibrium direction of the magnet is determined by the horizontal component of the Earth's magnetic field; that is, the magnet acts as a compass and aligns itself north-south. When the magnet is displaced from its equilibrium direction, it oscillates in simple harmonic motion. The period of the oscillation depends on the magnetic dipole moment of the magnet and on the strength of the magnetic field; thus, if an additional magnetic field is applied, the period of oscillation can be changed. By measuring the period as a function of the applied field and plotting a graph of the results, one obtains a straight line from which the magnetic moment of the magnet and the horizontal component of the Earth's magnetic field can be calculated.

The equation of motion of the magnetic torsional oscillator is given by

    I dω/dt = N_F + N_B

where I is the moment of inertia of the rod, I = M L² / 12, ω is the angular velocity, N_F is the restoring torque due to the suspension fiber, and N_B is the restoring torque due to the magnetic field,

    N_B = μ × B

where μ is the magnetic dipole moment of the magnet and B is the total magnetic field, that is, the vector sum of the Earth's magnetic field B_E and the applied magnetic field B_A. The restoring torque due to the fiber is very small compared to the restoring torque due to the magnetic field,

    N_F << N_B

so we will neglect N_F in the following derivation. The magnitude of the magnetic torque vector is given by the familiar rule for the cross product,

    |N_B| = μ B sin(θ)

where θ is the angle between μ, which points along the cylindrical axis of the magnet, and the magnetic field direction B. The angular velocity is the time rate of change of the angle,

    ω = dθ/dt

so the equation of motion can now be written as

    I d²θ/dt² = -μ B sin(θ)

This differential equation is non-linear and very difficult to solve, so we will make another approximation (the first approximation was neglecting N_F). For small-angle oscillations (θ < 20°), we can replace sin(θ) by θ with less than a 2% error. The equation of motion is now

    I d²θ/dt² = -μ B θ

or

    d²θ/dt² = -(μ B / I) θ

You should recognize this as the equation for simple harmonic motion, with frequency of oscillation given by

    2πf = √(μ B / I)

If the applied magnetic field is aligned parallel to the Earth's magnetic field,

    B = B_A + B_E

In this experiment, the applied magnetic field is created by a pair of Helmholtz coils. The magnetic field at the center of the two coils is

    B_A = C i

where i is the current flowing through the wires of the coils, and C is

    C = 8 N μ₀ / (√125 R)

where N = 60 is the number of turns of wire on one coil, R is the average radius of the coils, and μ₀ = 4π × 10^-7 henrys/meter is the permeability of free space. Be careful!
This symbol μ₀ has an entirely different meaning than the magnetic dipole moment μ -- even the units are different. The equation for the square of the frequency of the magnet's oscillation is

    f² = (μ C / 4π² I) i + (μ B_E / 4π² I)

Notice that f² is a linear function of the current i. This equation looks like the equation for a straight line,

    y = m x + b

If f² is plotted vs. i, the slope of the resulting straight line allows a calculation of the magnetic dipole moment μ. Furthermore, once μ is determined, the y-intercept allows a calculation of the horizontal component of the Earth's magnetic field B_E.

Procedure:
- Use MKS units throughout this lab. That is, convert all length measurements to meters, all mass measurements to kilograms, etc. If MKS units are used for the inputs to calculations, the results will automatically come out in MKS units as well. The units of magnetic dipole moment are A·m² (amp meter²) and the units of magnetic field are T (tesla).
- Use a ruler to measure the average outer radius of the coils. Each coil should be measured several times in different directions and the results for both coils averaged together.
- Next, we need to calculate the inner radius of the coils. This is where the wood stops and the copper wire begins. The total number of turns of wire on both coils together is 120; there are N = 60 turns on each coil. Measure the diameter of the copper wire several times using a micrometer under the platform, where the wire is easily accessible. DO NOT PULL THE WIRE OUT OF THE COILS! Count the number of turns of wire visible in the top layer. You can now calculate how many layers deep the copper is wound, and knowing the diameter of the wire you can find the inner radius of the coil.
- Find the average radius (R) of the coils by averaging the inner and outer radii. Use this average radius to calculate C.
- Use a triple beam balance to measure the mass (M) and a Vernier caliper to measure the length (L) of the cylindrical magnet. Record these data for use in calculating the moment of inertia (I) of the magnet.
- Suspend the magnet by thread so that it hangs in the central region of the coils. Adjust the thread so that the magnet hangs in a horizontal plane.
- When the freely suspended magnet becomes stationary, it will point in a magnetic north-south direction (by definition). Verify this direction with the compass held far away from the metal tables and coils.
- Connect the coils, power supply, and ammeter in a series circuit. This allows the coil current (i) to be read as the supply voltage is varied.
- Supply about 0.30 amps to the coils. Pull the magnet out of the way while holding the compass between the coils. Note the direction of the applied magnetic field. Rotate the coils so that the applied magnetic field aligns with the Earth's magnetic field. Replace the cylindrical magnet and make sure that it is level.
- Supply 0.20 amps to the coils. Wait about thirty seconds for the coils to reach thermal equilibrium. Readjust the power supply if necessary. Set the magnet into oscillation about an axis along the thread with amplitude no more than about 20°. Time 20 oscillations. Calculate the frequency.
- Repeat for currents 0.30 A, 0.40 A, ..., 1.20 A.
- Plot f² vs. i and fit the best straight line through the data.
- Find the magnetic dipole moment of the magnet from the slope of the plot. No error estimate is required.
- Use the magnetic dipole moment of the magnet and the y-intercept of the plot to calculate the horizontal component of the Earth's magnetic field. No error estimate is required.
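To make the numbers concrete before the analysis, the coil constant C and the magnet's moment of inertia I can be computed directly from the formulas above. In this Python sketch, the radius, mass, and length are placeholder sample values, not measurements from this apparatus:

```python
import numpy as np

# Coil constant C, where B_A = C * i at the center of the Helmholtz pair.
N = 60                                # turns per coil (given above)
R = 0.15                              # average coil radius (m) -- placeholder value
mu0 = 4e-7 * np.pi                    # permeability of free space (H/m)
C = 8 * N * mu0 / (np.sqrt(125) * R)  # about 3.6e-4 T/A for R = 0.15 m

# Moment of inertia of the magnet, modeled as a thin rod about its center.
M, L = 0.010, 0.060                   # mass (kg) and length (m) -- placeholder values
I_rod = M * L**2 / 12
print(C, I_rod)
```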
Analysis and Questions:
- Find C numerically, with units and an error estimate.
- Find I numerically, with units and an error estimate.
- Why was it desirable to limit the coil current to 1.2 A? List several reasons.
- Identify two sources of random error and two sources of systematic error.
- How would your plot differ if the Earth's magnetic field B_E and the applied magnetic field B_A had been in opposite directions rather than in the same direction? (Hint: Would the slope change? Would the y-intercept change?)
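Since the analysis reduces to a straight-line fit of f² against i, the extraction of μ and B_E can be sketched in a few lines of Python. The current and frequency values below are made-up placeholder data, and the coil and magnet parameters are the same assumed sample values as in the previous sketch:

```python
import numpy as np

# f^2 = (mu*C/(4 pi^2 I)) * i + mu*B_E/(4 pi^2 I): linear in the coil current i.
i = np.array([0.20, 0.40, 0.60, 0.80, 1.00, 1.20])   # coil current (A) -- placeholder data
f2 = np.array([0.27, 0.41, 0.55, 0.69, 0.83, 0.97])  # measured f^2 (Hz^2) -- placeholder data

N, R, mu0 = 60, 0.15, 4e-7 * np.pi       # turns per coil (given); assumed radius (m)
C = 8 * N * mu0 / (np.sqrt(125) * R)     # field per ampere, B_A = C * i
I_rod = 0.010 * 0.060**2 / 12            # thin-rod moment of inertia, assumed M and L

slope, intercept = np.polyfit(i, f2, 1)  # least-squares line through f^2 vs. i
mu = 4 * np.pi**2 * I_rod * slope / C    # dipole moment from the slope (A m^2)
B_E = C * intercept / slope              # horizontal Earth field from the intercept (T)
print(f"mu = {mu:.3e} A m^2, B_E = {B_E:.3e} T")
```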
Mt. Vesuvius is the best-known volcano on earth; it dominates the Bay of Naples with its characteristic cone. It is a typical example of a volcano within a volcano, made up of an outer broken cone, Mt. Somma (1,133 meters), whose crater rim is mostly destroyed. Within it stands a smaller cone, Mt. Vesuvius proper (1,281 meters), separated by a depression named the Valle del Gigante (Giants' Valley), part of the ancient caldera in which, at a later period, perhaps during the eruption of 79 A.D., the Gran Cono (Great Cone) of Mt. Vesuvius arose. The Valle del Gigante is further divided into the Atrio del Cavallo on the west and the Valle dell'Inferno on the east. The Somma's ancient crater is well preserved along its entire northern part; in historic times this side was less exposed to the volcano's devastating violence, because it was well protected by the height of the internal face, which prevented lava from flowing down its slopes. The slopes, which vary in steepness, are furrowed by deep radial grooves produced by the erosion of meteoric waters, and the whole section is characterized by dikes and fringes of dark volcanic rock. The old crater rim is a chain of summits called cognoli. While the height and profile of Mt. Somma have remained the same for centuries, the height and profile of Mt. Vesuvius have undergone considerable variation because of successive eruptions, with raisings and lowerings. Mt. Vesuvius is a characteristic polygenic mixed volcano, meaning that it is constituted by lavas of different chemical composition (for example trachytes, tephrites, leucitites) and formed by both lava flows and pyroclastic deposits. The zones at the foot of the mountain are formed by transported earth and lava mud that comes down the steep slopes in the rainy seasons through deep, narrow grooves called channels or, more commonly, "lagni." The high embankments are formed by piles of lava scoriae, which fell in an incandescent state and spread toward the lower slopes, proving precious for the vegetation thanks to their fertile material, rich in silicon and potassium. Proceeding along the rim of the crater, one can observe the whole extent of the southern part of the volcano and, on days with good visibility, it is possible to see the entire Gulf of Naples, from the Sorrento peninsula to Cape Miseno, Procida, and Ischia. It is also possible to note the large number of buildings that have been built on the vulnerable flanks of the mountain.

The eruption of 79 A.D.

The eruption began on 24 August 79 A.D., towards noon. The first eruptive phase was characterized by strong phreatomagmatic explosions. After this phase, magmatic explosions followed until the morning of the following day, feeding a column consisting mostly of gas, pumice, and ash that rose as high as 30 kilometers. The upper part of the column expanded, assuming the shape of a pine tree, and was pushed by the winds towards the southeast. The particles contained in it fell steadily to the ground, forming a layer of pumice that at Pompeii and Oplontis reached 2-3 meters in thickness.
Partial collapses of the eruptive column generated pyroclastic flows that raced at high speed down the flanks of the volcano, reaching and destroying Herculaneum. The city of Pompeii, much farther away, was not reached, and the greater part of its inhabitants survived this phase. During the last hours of the night the intensity of the eruptive activity diminished. In the first hours of the morning of the 25th, a phreatomagmatic explosion generated turbulent pyroclastic flows - the terrible "base surges" - that, traveling at the speed of a hurricane, came down the slopes of the volcano, devastated the surrounding areas up to distances of 15 kilometers, and caused numerous victims even among the inhabitants of Pompeii who had survived the first phase of the eruption. In the course of the day the explosions diminished in intensity and, in the evening, stopped altogether, leaving a large pall of ash and pumice over a huge area. Abundant rains, provoked in part by the injection of enormous amounts of fine particles and vapor into the atmosphere, mobilized this material, forming dense mudflows that came down from the flanks of the volcano and from the Apennine slopes along the valleys, further burying the territory of the Vesuvian area.

The eruption of 1631

The eruption of 1631 was the most violent and destructive in the history of Vesuvius in the last millennium. After a long period of quiescence, roughly five centuries, and preceded by a series of premonitory phenomena, such as earthquakes and uplift of the ground, the volcano awoke, causing the death of approximately 6,000 persons and the devastation of an area of nearly 500 square kilometers. The eruption began at 7 in the morning of 16 December, with the formation of an eruptive column approximately 15 km high, from which pumice and ash began to fall on the area east of Vesuvius. At 10 in the morning of 17 December, the central crater generated pyroclastic flows, clouds of gas loaded with magma fragments that, sliding at high speed down the western and southern flanks of the volcano, destroyed everything they met in their path. In the night between the 16th and the 17th, and in the afternoon of the 17th, abundant rains mobilized the loose ash cover, causing the formation of mudflows. The flows came down from the flanks of the volcano and from the Apennine slopes to the north and northeast. The paroxysmal phase of the eruption lasted three days, provoking enormous panic among the population. There were public confessions of sins in the streets of Naples, accompanied by extraordinary manifestations of penance, and processions were organized with the statue and the blood of S. Gennaro, so that the patron saint might appease the divine temper of which the eruption of Vesuvius seemed the indubitable sign. The Count of Monterrey, viceroy of Naples from January of that year, sent ships to collect the survivors of Torre del Greco and Torre Annunziata. After some months, deeply shaken by the event, he had a tablet placed in Portici that exhorts descendants not to forget the nature of the mountain, and to recognize promptly the warning signs of a volcanic eruption.

The eruption of 1944

On 18 March 1944, during the Allied occupation, the last eruption of Vesuvius began, concluding a period of activity that had started in 1914, during which only modest eruptions from the central crater had taken place. Between 1914 and 1944, the lava and scoriae produced by the volcano had filled the crater, 720 m wide and 600 m deep, that had formed during the previous eruption of 1906. A small cone of scoriae emerged from the crater.
Between 1914 and 1944, the lava and scoriae produced by the volcano had filled the crater, 720 m wide and 600 m deep, that had formed during the previous eruption of 1906, and a small scoria cone had emerged from it. This small cone began to collapse while the seismic activity grew more intense; a new scoria cone formed and collapsed in turn. The eruption began in the afternoon with the ejection of scoriae. At 16:30 a lava flow spilled over the northern rim of the crater, reaching the Valle dell'Inferno at 22:30. At nearly the same time another flow spilled over the southern rim. At 23:00 there was also an overflow of lava from the western rim: this flow followed the track of the funicular and cut the railway. At 11:00 the lava was flowing along the Fosso della Vetrana. Between the afternoon and the night, new flows spilled over the northern rim of the crater. All this effusive activity was accompanied by seismic tremor of increasing amplitude until midday. The southern flow halted at an altitude of approximately 300 m above sea level. During the night, the northern flow reached S. Sebastiano and Massa di Somma and divided into two branches that advanced in the direction of Cercola, from which by evening they were approximately 1.5 km distant. S. Sebastiano and Massa di Somma were evacuated and their 10,000 inhabitants transferred to Portici.

Around 17:00, spectacular lava fountains began to form, the last of which lasted approximately 5 hours and reached a height of nearly 1,000 m. Fragments of lava and ash, carried by high-altitude winds, were deposited on the south-eastern flanks of the volcano, between Angri and Pagani; the smaller fragments travelled more than 200 km towards the south-east. Scoriae weighing up to a kilogram reached the town of Poggiomarino, approximately 11 km from the crater, and great quantities of still-warm scoriae accumulated on the flanks of the Great Cone. The seismic tremor continued, with maxima of amplitude coinciding with the emission of the lava fountains.

Towards 1 p.m. the eruption was at its maximum activity. A column of gas and ash rose to a height of 6 km, and ash and scoriae fell on the south-eastern slope of the volcano. Strong seismic tremor accompanied this whole phase, during which the crater widened. A series of explosions, caused by the entry of water into the volcanic conduit, was accompanied by a swarm of earthquakes, and then the eruption ended. It had caused the death of several people, the collapse of roofs, and serious damage in S. Sebastiano and Massa di Somma.
New study finds uranium and radium migrating offsite into St. Louis communities around nation's first atomic bomb dump site

The study reports evidence of excessive radon emissions from buried uranium- and radium-containing wastes, found in offsite soil and residential dust samples from the populated St. Louis communities nearest the Westlake Landfill, where Manhattan Project-era wastes from uranium processing for the first atomic bomb are buried.

The paper, "Tracking legacy radionuclides in St. Louis, Missouri, via unsupported 210Pb," can be viewed for free for 30 days at http://www.sciencedirect.com/science/article/pii/S0265931X15301685

From the Abstract: "Analysis of 287 soil, sediment and house dust samples collected in a 200 km2-zone in northern St. Louis County, Missouri, establish that offsite migration of radiological contaminants from Manhattan Project-era uranium processing wastes has occurred in this populated area."
An hour of screen time immediately before bed is fine for most teenagers, according to research by an Australian university. But two hours is too much and likely to disrupt their sleep, said Associate Professor Michael Gradisar, a clinical psychologist at Flinders University in Adelaide. He said a review of four years of studies at the university and around the world showed moderate technology use is not as harmful for adolescent sleep as many health professionals and parents believe. "Experiments with video games show that if you give an experienced gamer a violent game before bed, their heart rate is not elevated and they sleep quite fine," Gradisar said. It appears as if they have adjusted to using technology, he said. "This evidence challenges the idea that sleep problems in young people are because of technology." A more common reason is anxiety or "body clock mistiming", which is similar to what happens when a person has jet lag. "For the majority of the population, an hour of technology use before bed seems fine. But anything longer than two hours appears to be detrimental to sleep." However, the Australasian Sleep Association said it was still best for teenagers to avoid technology for at least an hour before bed. "Sleep is a complex biological process and further research is needed to translate the findings of this research into a recommendation," said a spokesman, Sadasivam Suresh, a paediatric sleep specialist. "This research is valuable in adding information on actual sleep patterns in modern society. It is good science that helps us to understand sleep and health better."
Importance of School Attendance

Consistent attendance in school helps to promote a student's success. Research has shown that students' attendance may be the biggest factor influencing academic achievement. A study by Robert Balfanz and Vaughan Byrnes at Johns Hopkins University found "… that students who missed at least 20 days of school per year — the definition of chronic absenteeism — had lower grades and were more likely to drop out than students with better attendance."

Attending school every day and being on time are important habits for a student to develop. Even in kindergarten, too many absences can cause a child to fall behind in school. Good attendance will help the student function well in school, in college, and on the job. Learning is a progressive activity; regular attendance enables each day's lessons to build upon the child's previous learning. Completing work independently does not compensate for the loss of insight gained during class discussions, demonstrations, and experiments. Frequent absences leave a child unable to keep up with schoolwork and, therefore, foster lower grades. Regular attendance patterns encourage the development of other responsible patterns of behavior, while low attendance puts children at risk for anti-social behavior and dropping out of school.

Another outcome of absences is lower funding for the entire district, which affects the adequacy of school resources. In Mississippi, unfortunately, schools are funded based on how many students attend classes each day instead of by enrollment. Funding by enrollment would be the logical basis for determining funding, since teachers have to be in place for all children enrolled. However, Mississippi schools receive State funding according to their average daily attendance (ADA), the actual number of students present each day. In addition, a new rule has been put in place in Mississippi requiring students to be present for 63% of the instructional day for schools to receive the daily rate of funding for that student.

Each child's educational progress is affected by his or her attendance in school. In practical terms, children's attendance also affects school resources. By ensuring that our students are present each day for class, we are able to ensure that adequate funds from the State reach our District.

Parents can assist in limiting absences by helping their children arrive at school on time, checking their homework, avoiding scheduling medical appointments during the school day, and planning family events or trips with the school schedule in mind. Setting a regular bedtime and morning routine, as well as preparing clothes and school backpacks the night before, are also helpful behaviors in fostering school attendance.

Students can fall behind in class easily by missing just a few days of school. As a parent or guardian, you can limit your child's absences by making school a priority. Building good attendance helps your child to succeed in school and to develop good habits that will serve them throughout life.
Dual Language Learners: Five Tips for Parents

Parents with limited English proficiency have heard different messages about the language-learning needs of their children. Some believe that speaking to their children in their native language may hold them back from learning English or confuse them as they enter preschool and kindergarten. While mastery of English is important for success in school, research is showing that being fluent in more than one language can actually contribute to academic success. Check out our five tips every parent should know about dual language learning.

Yet, new research from brain scientists and linguistic experts tells us that a child who learns many words in her native language will have a stronger foundation for learning a second language, like English.2 Studies also show that exposing a child to two languages during their preschool years may help them learn more efficiently as they grow.3 In fact, exposing children to lots of words early on, regardless of the language, is the best way to prepare them for the future.

In the earliest years, children's brains are able to distinguish between, sort, and understand sounds associated with different languages.4 This begins with the processing of sounds and information in the womb,5 and continues as language networks form and grow in the brain. Repeated use of these networks creates the essential building blocks for lifelong language learning.6 Parents or caregivers who do not speak English, but who are eager for their children to thrive in the American educational system, can benefit from this new research. Tips from the research include:

1. Talk, read, sing, and play with your child often – in both your native language as well as other languages you know. Talking directly with a child is the surest way to help them build their early vocabulary. In fact, researchers at Stanford University found that the amount of talk directed at a child predicted the size of their vocabulary as early as 24 months.7

2. Know that if you speak a language other than English at home, it's normal for your child to start out slowly learning English. With time and attention, they'll match their peers. Early language learning is complex – under any circumstance – and your child will be working to store two languages at once. It will take time for them to begin sorting out and using the new words they learn from friends and teachers in preschool alongside the words they learn at home.8 By helping your child build their vocabulary in the language of your home, their young minds will be ready to learn new languages. Research has even found that dual language learning children have similarly sized vocabularies, just spread over two languages, and that many early differences in speech fade with time.9

3. Be proud. Children raised in households that speak a language other than English are lucky. Research has shown that children who learn two languages display greater concentration, have a better grasp on the basic structure of language, and may have an easier time understanding math and science symbols later on in school.10 In fact, strong evidence suggests that when it comes time for your child to learn English, they'll be better at it with a strong foundation in their native language.11

4. Visit your public library as often as you can. Local library branches often have children's books in Spanish, as well as English-related materials for the whole family.
If you or a caregiver you know does not read in English, find books to read aloud in your home language. If books are not available, talk to your librarian.

5. Follow up classroom or caregiver learning by reading and conversing with your child in your preferred language. Point out words that match some of the new English words your child may be hearing and that share similar roots – such as August and Agosto, or plant and planta. This process will reinforce their new language skills while showing them how much they may already naturally understand, boosting confidence and learning at the same time.12

Parents who are not proficient in English may feel stress and anxiety about their children's language skills. But it is becoming increasingly clear that there are many advantages to growing up bilingual. Parents who talk, read, sing, and play with their children – often and in the languages they know best – will prepare them for success in preschool, elementary school, and beyond.

1 Linda Espinosa, "PreK-3rd: Challenging Common Myths About Dual Language Learners" (New York: Foundation for Child Development, 2013) and Annick De Houwer, An Introduction to Bilingual Development (New York: Multilingual Matters, 2009).

3 Catherine Sandhofer and Yuuko Uchikoshi, "Cognitive Consequences of Dual Language Learning: Cognitive Function, Language and Literacy, Science and Mathematics, and Social-Emotional Development." In Faye Ong and John McLean, eds., California's Best Practices for Young Dual Language Learners (California Department of Education, State Advisory Council on Early Learning and Care, 2013).

5 Barbara Conboy, "Neuroscience Research: How Experience with One or More Languages Affects the Developing Brain." In Faye Ong and John McLean, eds., California's Best Practices for Young Dual Language Learners (California Department of Education, State Advisory Council on Early Learning and Care, 2013).

10 Ellen Bialystok, "Levels of Bilingualism and Levels of Linguistic Awareness," Developmental Psychology, 24 (4) (1988): 560-567 and Raluca Barac and Ellen Bialystok, Cognitive Development of Young Bilingual Children: A Review of the Literature (Under Review, 2013).

By Rey Fuentes, with research assistance from Hong Van Pham and Christine Karamagi
Resources updated, April 2013
A legacy resource from NICHCY

It's a wonderful thing, to care for children, help them grow and change and learn, and keep them safe on their way. For those of you who help families and children every day by providing child care to the young ones or working in preschools, the rest of us say a profound "thank you." What a job you do! And with our finest treasures, too—our children.

- About developmental delays and disabilities
- Legal issues and questions
- Approaching families
- Working with diverse families
- Helping children transition to next settings
- Resources on early childhood care

Wherever you work, children come to you with gifts, curiosity, challenges, and needs. They are small wonders, to be sure, and as full of diversity as society itself. And because disability is a natural part of life, it's also likely that some of your little ones may have a disability or a developmental delay that can impact their learning and growth. As a child care provider or preschool teacher, you may even be among the first to notice a child's difficulties or special needs. That's why, quite often, child care providers and preschool teachers play a key role in recognizing that a child may need special help and in connecting families with the systems of help that address children's developmental and disability-related needs.

This page is dedicated to helping child care providers and preschool staff do just that. Here, at the CPIR, you can learn more about disabilities, how to address the needs of wee ones with challenges, and how to create an inclusive and empowering environment where all children can flourish.

About Developmental Delays and Disabilities

Recognizing that a child may have a developmental delay or disability is not necessarily an easy matter. Often, it's downright hard to say, because children develop at their own pace and the range of "normal" development is broad. Cultural and linguistic diversity can also add an extra dimension to the question. Does a child have trouble understanding or speaking because of a disability, for example, or because his or her native language is not English?

Two resources that shed light on the nature of disability and delay are:

Developmental milestones | Explore the typical developmental stages and milestones that pediatricians and others use to monitor children's growth and progress over time. Learn about the sequence and timing of a typical child's earliest development and access resources to learn yet more.

Developmental delay | Find out how "developmental delay" is defined and the role that evaluation plays in identifying children with developmental delays.

Of course, sometimes the disability or delay is known, and as a child care provider you'd like to learn more about the nature of the disability and how you can support the child in your care. The CPIR can be very helpful in this regard, because we have a lot of information on specific disabilities. If you're looking for information about a disability, we encourage you to investigate the Categories of Disability under IDEA. There, you can connect with multiple fact sheets on specific disabilities.

Legal Issues and Questions

A frequent area of concern for child care providers and preschool programs is what they must do legally when it comes to including children with disabilities in their programs. Here are several salient resources on the subject.

Commonly Asked Questions About Child Care Centers and the Americans with Disabilities Act.
This 13-page publication explains how the requirements of the ADA apply to child care centers. The document also describes some of the Department of Justice's ongoing enforcement efforts in the child care area and provides a resource list on sources of information on the ADA.

Child Care Law Center. The Child Care Law Center uses legal tools toward making high-quality, affordable child care available to every child, family, and community, while focusing particular attention on low-income families, families and children with disabilities and other special needs, and other families who face barriers in securing and maintaining quality care. Want a quick reference to the ADA for child care providers? Want to know when a child care program is required under the ADA to admit a child with a disability? Visit the Child Care Law Center and find handy information.

Approaching Families

When you're a child care provider or preschool teacher and you suspect that a child in your care may have a disability or delay, you might hesitate over how to bring your concerns to the attention of the child's parents. It's naturally difficult and scary for parents to hear that there may be cause for action or concern with respect to their little one. We have a few suggestions that may help you approach the matter.

For child care providers and private preschools

Know that there are systems of help. Each state must have a system by which it identifies and helps children who may have a disability or developmental delay, even the youngest baby, toddler, or preschooler. This system is called "Child Find" and it is responsible for doing precisely that. So parents have a place to turn to, to have their child screened and/or evaluated free of charge to see if there is, indeed, a disability or delay. If you talk to parents about your concerns, you'll want to share this information with them (see the next paragraph), so they know where to go, and especially that screening and evaluation of children are provided free of charge to families.

Watch Me! Celebrating Milestones and Sharing Concerns. Concerned about the development of a child in your care? The Centers for Disease Control and Prevention offers a FREE, 1-hour online continuing education course, Watch Me! Celebrating Milestones and Sharing Concerns, which gives early care and education providers tools and best practices for working with families to monitor every young child's development and help children with developmental delays get the early support they need.

Find the contact info for the local Child Find office. Wondering how to find where Child Find is? Call your local hospital (the neo-natal unit or maternity), and ask for the contact info for the Child Find office in the community in which the child lives. That's the information you share with parents.

There are also disability-specific resources. Early awareness and intervention for young children are two essentials in addressing the child's individual learning and developmental needs. Are you concerned about… autism? hearing loss? an intellectual disability? a visual impairment? something else? There are specific systems and resources to access for a range of common disabilities in children. We highly recommend accessing these resources, for they are founts of info, support, and guidance. While taking the time to check these out might be beyond your duties as a child care provider, there may be an appropriate time to tell parents more about the scope of help that's out there. These resources fall in that category!
One webpage in particular will take you (or the parents) into the heart of things, and that's Early Identification of Specific Disabilities and Children At-Risk, at the ECTA Center (the Early Childhood Technical Assistance Center).

For preschool teachers in public schools

If you're teaching preschool in a public school and you suspect that a child in your classroom has a disability or delay, you'll want to take a somewhat different course of action. Talk to the person at your school (or school district) who is in charge of special education services for children with disabilities. Find out your school's policies regarding referring children for evaluation under the Individuals with Disabilities Education Act (IDEA). Being well-acquainted with local policies will be extremely helpful in determining the next steps you should take with respect to the child, his or her parents, and the school system itself.

Working with Diverse Families

Without a doubt, we are a diverse people! Your work as a child care provider most likely brings you into close contact with a spectrum of cultures, ethnicities, and languages. It's important to realize that cultures do not necessarily view disabilities in children in the same manner we might, and it's very helpful to have some handy materials on disabilities in other languages. To that end, might these resources help?

For Spanish-speaking families.
- http://www.parentcenterhub.org/repository/parabebes/ (For children under the age of 3)
- http://www.parentcenterhub.org/repository/paraninos/ (For school-aged children, including preschoolers)

Birth to six prescreen chart for vision, hearing and development… in Amharic (Ethiopian), Cambodian, Chinese, Farsi, Hmong, Hungarian, Korean, Laotian, Polish, Russian, Spanish, and Vietnamese. Available from CLAS, Culturally and Linguistically Appropriate Services. Listed under the category "Child Find Materials."

About disability help… in Hmong.

About disability help… in Somali.

Early intervention is critical. Available in these languages: Arabic, Chinese, Hmong, Italian, Japanese, Khmer, Korean, Laotian, Russian, Spanish, Vietnamese, Yiddish. This brochure from the state of New York is dedicated to raising public awareness of early intervention. It describes available services and how parents can ask for help. Individual links to the brochure in the different languages are available at CLAS, under "Child Find Materials."

Helping Children Transition to Next Settings

As children leave your care and head off to new settings, you may wish to work with families to make the transition a smooth one. A lot is known about how to do just that, especially if the transition is from early intervention care to preschool or school-based services. Here are two resources that will help you prepare children with disabilities to move on.

Transition from early intervention to preschool. A full page here on the subject.

Video on early childhood transition. This 8-minute video provides an overview of the desirable outcomes of transition, research identifying effective transition practices, as well as the legal requirements of early childhood transition.

Resources of Information on Early Childhood Care

There's a lot of expertise out there with respect to early childhood settings and care, some specializing in disability issues and others not. Here is a mini-list of centers and groups to consult to learn more about all manner of things child-care.

Child Care resources.
The USA.gov website can connect you with the Child Care Finder, child care licensure regulations (by state), the locator for Head Start programs, tips on childproofing your home, and tips on choosing child care for your baby or young child.

Child Care Aware.

Healthy Kids, Healthy Care. This website was developed by the National Resource Center for Health in Child Care and Early Education.

National Resource Center for Health and Safety in Child Care and Early Education. This organization addresses the issues of safety and health in child care and early education settings. They also provide licensure regulations from the 50 states and DC. Lots of info in Spanish, too!

National Network for Child Care.

Office of Child Care. The Office of Child Care, a program of the Administration for Children and Families, supports low-income working families through child care financial assistance and promotes children's learning by improving the quality of early care and education and afterschool programs.
4.1 Resizing Algorithms
4.2 Image Sharpening
4.3 Image Cropping
4.4 Image Flipping and Rotation
4.5 Adjusting Image Compression
4.6 Grayscale Conversion
4.7 Sepia Filter
4.8 Code Sample

4.1 Resizing Algorithms

Prior to version 2.2, AspJpeg supported three popular image resizing algorithms: nearest-neighbor, bilinear and bicubic. As of version 2.2, it supports 13 more. Resizing algorithms vary greatly in terms of thumbnail output quality, sharpness and performance. The algorithm for your thumbnails is specified via the Interpolation property. The default algorithm is bilinear (1). The table below summarizes all available algorithms, their sample output, and their performance relative to the default algorithm (#1). Note that thumbnail quality is usually gained at the expense of performance.

4.2 Image Sharpening

Starting with Version 1.1, AspJpeg is capable of applying a sharpening filter to an image being resized via the method Sharpen. A regular thumbnail and two thumbnails with various degrees of sharpening applied to them are shown below.

No sharpening | Sharpen(1, 120) | Sharpen(1, 250)

The Sharpen method takes two Double arguments: Radius and Amount. Radius controls the size (in pixels) of an area around every pixel that the sharpening algorithm examines. This argument should normally be set to 1 or 2. Amount (expressed in %) specifies the degree of sharpness. This argument must be greater than 100. For the sharpening to take effect, Sharpen must be called before Save, SendBinary or Binary.
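By way of illustration, producing a sharpened half-size thumbnail takes only a few lines. This is a sketch rather than an excerpt from the manual, and the file names are placeholders:

Set Jpeg = Server.CreateObject("Persits.Jpeg")
Jpeg.Open Server.MapPath("photo.jpg")        ' placeholder file name
Jpeg.Width = Jpeg.OriginalWidth \ 2          ' half-size thumbnail
Jpeg.Height = Jpeg.OriginalHeight \ 2
Jpeg.Sharpen 1, 150                          ' radius 1, moderate amount (must be > 100)
Jpeg.Save Server.MapPath("photo_thumb.jpg")

Note that Sharpen is called after the new dimensions are set but before Save, as required above.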
4.3 Image Cropping

AspJpeg 1.1+ is also capable of cutting off edges from, or cropping, the resultant thumbnails via the method Crop(x0, y0, x1, y1). The size of the cropped image is specified by the coordinates of the upper-left and lower-right corners within the resultant thumbnail, not the original large image.

If one or more coordinates passed to the Crop method are outside the coordinate space of the image, this actually expands the "canvas" around the image. This is useful, for example, if you need to create margins around the image. The following code creates a 10-pixel margin around an image:

jpeg.Crop -10, -10, jpeg.Width + 10, jpeg.Height + 10

IMPORTANT: Before version 1.7, the color of the margins created by "negative" cropping was always white. Starting with version 1.7, it is determined by the color specified via Canvas.Brush.Color, and is black by default. To create a white 10-pixel margin with version 1.7+, the following code should be used:

jpeg.Canvas.Brush.Color = &HFFFFFF
jpeg.Crop -10, -10, jpeg.Width + 10, jpeg.Height + 10

4.4 Image Flipping and Rotation

With AspJpeg 1.2+, you can invert an image horizontally and/or vertically by calling the methods FlipH and FlipV, respectively. You can also rotate an image 90 degrees clockwise and counter-clockwise by calling the methods RotateR and RotateL, respectively.

As of Version 2.3, AspJpeg is capable of rotating an image by an arbitrary angle via the method Rotate. The method expects two arguments: the angle (in degrees) of the counterclockwise rotation, and the fill color of the four triangular corner areas formed by the rotation. The width and height of the image are automatically increased to accommodate the slanted image. The following code rotates the image by 24 degrees and fills the corners with red:

jpeg.Rotate 24, &HFF0000

AspJpeg is also capable of removing the corner areas entirely by making them fully transparent with the help of the PNG image format. This way, the rotated image can be drawn on top of another image. This functionality is covered in Section 10.4 - Using PNG Format for Image Rotation.

4.5 Adjusting Image Compression

The JPEG format uses "lossy" compression methods. This means that some minor details of an image saved as a JPEG are lost during compression. The degree of loss can be adjusted via the Jpeg.Quality property. This property accepts an integer in the range 0 to 100, with 0 being the highest degree of loss (and hence the lowest quality) and 100 being the lowest degree of loss and the highest quality. The lower the loss, the larger the resultant file size. The Jpeg.Quality property is set to 80 by default, which provides a close-to-optimal combination of quality and file size.

4.6 Grayscale Conversion

Starting with version 1.4, AspJpeg is capable of converting a color image to grayscale via the Grayscale method. This method expects a Method argument which specifies the formula used to perform the color-to-B&W conversion. The valid values are 0, 1, and 2. The value of 1 is the recommended method for most applications. The Grayscale method sets the three color components (R, G, B) of each pixel to the same value L using the following formulas:

Method 0: L = 0.3333 R + 0.3333 G + 0.3333 B
Method 1: L = 0.2990 R + 0.5870 G + 0.1140 B
Method 2: L = 0.2125 R + 0.7154 G + 0.0721 B

The effect of Method 0 is what Photoshop calls desaturation (Image/Adjust/Desaturate), while Method 1 is similar to Photoshop's conversion from RGB to grayscale (Image/Mode/Grayscale). Note that Grayscale is a method, not a property, so the '=' sign should not be used.

The Grayscale method changes the colors of the image to B&W but leaves the image in the original RGB colorspace (3 bytes per pixel). As of Version 2.1, AspJpeg can also convert RGB or CMYK images to the grayscale colorspace (1 byte per pixel) via the method Jpeg.ToGrayscale, which accepts the same argument as Grayscale. The ToGrayscale method makes the image file smaller, and is also useful for the PNG alpha channel management described in Chapter 10.

4.7 Sepia Filter

AspJpeg 1.6+ offers the Sepia method, which makes an image look like an old photograph. This method's two parameters, Hue and Contrast, enable you to adjust the output to your taste. The Hue parameter controls the brownish hue of the output image, and should usually be in the range of 25 to 60 for good results. The Contrast parameter controls the contrast of the image. The value of 1 means no contrast adjustment. Values between 1.2 and 1.5 usually produce good results.

Original Image | Hue=50, Contrast=1.4

The sample Sepia conversion shown here was achieved as follows:

Jpeg.Sepia 50, 1.4
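Pulling together the settings from Sections 4.5 and 4.6, a minimal sketch (again with placeholder file names, not taken from the manual) might look like this:

Set Jpeg = Server.CreateObject("Persits.Jpeg")
Jpeg.Open Server.MapPath("photo.jpg")        ' placeholder file name
Jpeg.ToGrayscale 1                           ' 1-byte-per-pixel grayscale, formula 1
Jpeg.Quality = 60                            ' more compression than the default of 80
Jpeg.Save Server.MapPath("photo_bw.jpg")

Because ToGrayscale reduces the image to one byte per pixel and Quality 60 compresses harder than the default, the output file should be considerably smaller than the original.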
4.8 Code Sample

The following code sample demonstrates most of the features described above by interactively applying various transformations to an image. The file 04_params.asp/aspx contains a form with checkboxes and radio buttons controlling the visual appearance of the image. This file invokes the script 04_process.asp/aspx, which contains the actual image modification routine shown below.

ASP version:

Set Jpeg = Server.CreateObject("Persits.Jpeg")
Jpeg.Open Server.MapPath("clock.jpg")                  ' reconstructed: mirrors the .NET version below
Jpeg.Width = Jpeg.OriginalWidth * .8
Jpeg.Height = Jpeg.OriginalHeight * .8
If Request("Grayscale") = "1" Then Jpeg.Grayscale 1    ' reconstructed branch body
If Request("Sharpen") = "1" Then Jpeg.Sharpen 1, 250
If Request("Horflip") = "1" Then Jpeg.FlipH            ' reconstructed branch body
If Request("Verflip") = "1" Then Jpeg.FlipV            ' reconstructed branch body
Jpeg.Quality = Request("Quality")
Jpeg.Interpolation = Request("Interpolation")
If Request("Crop") = 1 Then Jpeg.Crop 30, 30, 470, 320
Jpeg.SendBinary                                        ' reconstructed: send the result to the browser

.NET version:

<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="System.Reflection" %>
<%@ Import Namespace="ASPJPEGLib" %>
<%@ Page aspCompat="True" Language="C#" Debug="true" %>

<script runat="server" LANGUAGE="C#">
ASPJPEGLib.ASPJpeg objJpeg;   // reconstructed declaration

void Page_Load(Object Source, EventArgs E)
{
    objJpeg = new ASPJPEGLib.ASPJpeg();
    objJpeg.Open( Server.MapPath("clock.jpg") );
    objJpeg.Width = (int)(objJpeg.OriginalWidth * 0.8);
    objJpeg.Height = (int)(objJpeg.OriginalHeight * 0.8);
    if( Request["Grayscale"] == "1" ) objJpeg.Grayscale( 1 );
    if( Request["Sharpen"] == "1" ) objJpeg.Sharpen( 1, 250 );
    if( Request["Horflip"] == "1" ) objJpeg.FlipH();   // reconstructed branch body
    if( Request["Verflip"] == "1" ) objJpeg.FlipV();   // reconstructed branch body
    objJpeg.Quality = int.Parse(Request["Quality"]);
    objJpeg.Interpolation = int.Parse(Request["Interpolation"]);
    if( Request["Crop"] == "1" ) objJpeg.Crop( 30, 30, 470, 320 );
    objJpeg.SendBinary();                              // reconstructed: stream the image to the browser
}
</script>

Click the links below to run this code sample: for ASP and .NET.

All Rights Reserved. AspJpeg is a trademark of Persits Software, Inc.
williamghunter.net > Statistics for experimenters

Statistics for Experimenters - Second Edition

Order your copy of Statistics for Experimenters: Design, Innovation, and Discovery, 2nd Edition by George Box, Stuart Hunter and William G. Hunter, 2005. Rewritten and updated by George Box and Stu Hunter (Bill Hunter died in 1986), this new edition of Statistics for Experimenters adopts the same approaches as the landmark First Edition by teaching with examples, readily understood graphics, and the appropriate use of computers.

From the publisher: Catalyzing innovation, problem solving, and discovery, the Second Edition provides experimenters with the scientific and statistical tools needed to maximize the knowledge gained from research data, illustrating how these tools may best be utilized during all stages of the investigative process. The authors' practical approach starts with a problem that needs to be solved and then examines the appropriate statistical methods of design and analysis. Complete with applications covering the physical, engineering, biological, and social sciences, Statistics for Experimenters is designed for individuals who must use statistical approaches to conduct an experiment, but do not necessarily have formal training in statistics. Experimenters need only a basic understanding of mathematics to master all the statistical methods presented. This text is an essential reference for all researchers and is a highly recommended course book for undergraduate and graduate students.

Statistics for Experimenters, 1st Edition, 1978, by George Box, Stuart Hunter and William G. Hunter. A classic text for experimenters in scientific and business circles.

"Another genius named R.A. Fisher used this cube to create what is now known as the designed experiment. Fisher's colleagues and friends in the United States, Walter Shewhart and W. Edwards Deming, were inspired by the economy of his ideas. Fisher's model led to productivity breakthroughs in agriculture, medicine, bio-statistics, chemistry, and all other industries. Fisher's student, George Box, helped make Fisher's work accessible to undergraduate engineering students. The Box, Hunter, and Hunter textbook Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building is the classic work that we distilled into a simplified model that can be used every day on the job." - Daniel Sloan, www.danielsloan.com

- Professor T.N. Goh, National University of Singapore, August 7, 1999
- Vince Adams, author of Building Better Products With Finite Element Analysis - WyzeTek, Inc.
- "The book that is considered the 'Bible' of DOE" - New England Biometrics

Six Sigma Forum Magazine interviewed (May 2002) five leading teachers of Six Sigma and asked them what literature on the topic of Six Sigma they found most useful. Three of the five recommended Statistics for Experimenters, including Tim Clapp: "We recommend this book for students who want to learn more about experimental design. Many of the examples we teach in class are based on this book or on notes from a recent series of lectures Hunter gave here last year."

Citations of Statistics for Experimenters, from CiteSeer.
Citations of Statistics for Experimenters in other books, from Amazon.com.

"This is a book that every scientist and engineer in industry should read and own." - Math Options web site
Although widespread and common (1) (3), and believed to be relatively resilient to habitat loss and reef degradation (1), Platygyra daedalea faces a number of threats that are impacting coral reefs around the world, and so is assumed to be undergoing a population decline (1). An estimated 20 percent of the world’s coral reefs have already been destroyed (13), and a large percentage of those remaining are at risk of collapse as a result of human activities. These include overfishing, destructive fishing practices, coral mining, pollution, irresponsible tourism, and poor land management practices, which increase the amount of sediment, nutrients and pollutants entering the ocean (1) (10) (13). In general, however, climate change may pose the greatest risk to corals, raising the risk of temperature extremes which can stress coral and cause it to lose its zooxanthellae. This process, known as ‘bleaching’, usually results in the death of the coral. Climate change may also lead to more severe, frequent storms, which can damage reefs, and rising carbon dioxide levels may lead to ocean acidification, which can reduce coral’s ability to create its hard skeleton (1) (10) (13). Such stresses may also make corals more susceptible to disease (1). In addition to these general threats, Platygyra daedalea is also the target of collection for the aquarium trade (1).
The hiring of a Senior Composites Engineer at Apple has fuelled more speculation that the company could move away from aluminum for building future devices, choosing to use carbon fiber instead. Kevin Kenny began work at the Cupertino campus this month after spending 14 years building carbon fiber bicycles at Kestrel Bicycles, where he was the President and CEO.

This isn't the first time Kenny has worked with Apple; a patent called "Reinforced Device Housing," filed by the company in 2009, had Kenny's name on it and depicted an outer casing for electronic devices made from ultra-strong carbon fiber. The patent reveals Kenny was clearly working with Apple for a long time before he became a full-time employee.

By using carbon fiber for future devices, Apple could create products that weigh significantly less than the aluminum and stainless steel devices it produces today, while maintaining the strength and durability we've all become accustomed to.

In 2008, it was rumored that Apple would use carbon fiber to replace the aluminum housing on the MacBook Air to make the notebook even lighter. These rumors were obviously a little premature, but with today's news we won't rule out a carbon fiber MacBook Air altogether. It certainly sounds like the perfect material to enhance the lightweight device.

[via 9to5 Mac]
Have you had a few too many awkward incidents in which you pronounce someone else's name completely wrong? Are you unsure how to remedy the situation? Have no fear: so long as you follow the steps outlined in this article, you'll soon be on your way to becoming an expert in the field of name pronunciation!

1. Examine the name. If you've seen it but not heard it, often just sounding it out in your head first can help a lot with your pronunciation. Work with each syllable in turn. Unless it's Welsh.
- Think about other words you already know that look similar to the name. For example, the letters q-u-i in French sound like the word key in English. So just as the word "quiche" is pronounced keysh, the name "Quitterie" would be pronounced key-tree.
- Sometimes city names can get your mind going. Think of ones like San Jose, Guadalajara, Lille, Versailles, and Guangzhou.

2. Consider the origin. Does it look French? Spanish? Chinese? Know that every language has a unique alphabet and set of sounds associated with it, so any prior knowledge of languages will assist you in your pronunciation.
- Spanish has a very consistent alphabet, unlike English. The vowels are always pronounced "ah," "eh," "ee," "oh," and "oo."
- French has a fairly consistent alphabet as well, but it's a bit more tricky. If the name ends in a consonant, don't pronounce it. "Robert" becomes row-bear. And a name like Michelle? It's mee-shell, not meh-shell.
- Mandarin Chinese is trickier still. The "Q" is pronounced ch, "X" is pronounced sh, and "Zh" is pronounced like dr. "Xiaojin Zhu" is shiao-jin drew.
- If you're a bit confused about "ei" and "ie" in German, opt for the second letter's name. "Steinbeck" has a vowel like "I"--the second letter. "Auf Wiedersehen" has a vowel like "E"--the second letter.

3. Take into account accent marks and other diacritics. They can significantly change the way a name is pronounced.
- In Spanish, you want to put the most emphasis on the syllable that has the accent; e.g., María should be pronounced ma-REE-uh.
- Unfortunately, French doesn't follow the same rules. The sounds "è" and "é" are two different sounds: though they are very similar, "è" sounds like eh (the sound in red) and "é" like ay. Examples of this include Renée (ruh-nay), André (on-dray), Honoré (ah-nor-ay), and Helène (heh-lehne).
- The most frequent character used with a cedilla is the "ç"; the cedilla makes it soft (ss, not kuh).

4. Look for diacritics indicating tone. Though this requires a familiarity with the language, some tones are quite logical.
- A mark going down (`) generally indicates a falling tone; a mark going up, a rising one.
- A mark going up and down (or down and up) is just that--your tone should follow.

1. Ask around. This can be as sneaky as you're capable of. "Hey, who's that guy we're working with on the etymology project again?" Maybe your friends don't know either!
- Don't be afraid to ask the person yourself. Odds are, if you don't know, people butcher their name all the time. Say to him or her, "What's the native way of pronouncing your name?" to get them to pronounce it how they would back home. They'll love that you're making an effort.

2. Say it over and over. Once you have it, don't let it go. As Dale Carnegie said, "Remember that a person's name is to that person the sweetest and most important sound in any language."
- Repeat it in your head seven times. You'll be less likely to forget the correct way to say it when you have it logged in your memory.
- If the pronunciation surprises you, think of a rhyme to ease recall.

3. Go online. Because the world has become such a global village, there are quite a few websites out there dedicated to just this.
- You can always do more research on the pronunciations of less prevalent accent marks, using books or websites like this for Spanish words and this for French.

- If you've just met someone and already forgotten how to pronounce their name, you can cover up your lapse in memory by introducing them to someone else you know. Say something like, "Hey, I want you to meet my friend Judy," and hopefully the person whose name you've forgotten will repeat it for Judy's sake. This approach works best at parties and other large social gatherings, so be cautious about using it in groups of a dozen people or fewer.
- Don't worry too much about mispronouncing a name that you thought you knew. Apologize, then shrug it off and make up for it by pronouncing the name correctly every time thereafter.
One of the earliest and initially successful alternatives, called MOND and proposed by Professor Milgrom, was a modified law of gravity. It had solid predictive value at galactic scales but has crashed and burned in the galactic cluster context. The leading successor to this approach is now J.W. Moffat, of Waterloo, Ontario, who claims that his theory, which has changed names over the years and is currently known as MOG, for modified gravity, overcomes the problems of earlier theories in the same vein, such as MOND (a good overview of both theories is found here and also at this PowerPoint presentation). Notably, the theory identifies inertia as the cumulative gravitational effect of distant objects, an idea similar to what is known as "Mach's principle" but with a different theoretical basis.

The flavor of the theory can be seen in the early part of one of his recent papers (the Λ, φμ and ω symbols below are restored where the original extraction garbled them):

The preferred model of cosmology today, the ΛCDM model, provides an excellent fit to cosmological observations, but at a substantial cost: according to this model, about 96% of the universe is either invisible or undetectable, or possibly both. This fact provides a strong incentive to seek alternative explanations that can account for cosmological observations without resorting to dark matter or Einstein's cosmological constant.

For gravitational theories designed to challenge the ΛCDM model, the bar is set increasingly higher by recent discoveries. Not only do such theories have to explain successfully the velocity dispersions, rotational curves, and gravitational lensing of galaxies and galaxy clusters, the theories must also be in accord with cosmological observations, notably the acoustic power spectrum of the cosmic microwave background (CMB), the matter power spectrum of galaxies, and the recent observation of the luminosity-distance relationship of high-z supernovae, which is seen as evidence for "dark energy".

Modified Gravity (MOG) (Moffat 2006) has been used successfully to account for galaxy cluster masses (Brownstein & Moffat 2006a), the rotation curves of galaxies (Brownstein & Moffat 2006b), velocity dispersions of satellite galaxies (Moffat & Toth 2007c), and globular clusters (Moffat & Toth 2007b). It was also used to offer an explanation for the Bullet Cluster (Brownstein & Moffat 2007) without resorting to cold dark matter. Remarkably, MOG also meets the challenge posed by cosmological observations. In this paper, it is demonstrated that MOG produces an acoustic power spectrum, a matter power spectrum, and a luminosity-distance relationship that are in good agreement with observations, and require no dark matter nor Einstein's cosmological constant. . . .

2 MODIFIED GRAVITY THEORY

Modified Gravity (MOG) is a fully relativistic theory of gravitation that is derived from a relativistic action principle (Moffat 2006) involving scalar, tensor, and vector fields. . . .

2.1 Scalar-Tensor-Vector Gravity

Our modified gravity theory is based on postulating the existence of a massive vector field, φμ. The choice of a massive vector field is motivated by our desire to introduce a repulsive modification of the law of gravitation at short range. The vector field is coupled universally to matter. The theory, therefore, has three constants: in addition to the gravitational constant G, we must also consider the coupling constant ω that determines the coupling strength between the φμ field and matter, and a further constant μ that arises as a result of considering a vector field of non-zero mass, and controls the coupling range.
As one of his earlier papers (possibly describing an earlier version of the theory; I have trouble discerning whether it is precisely the same or not) explains:

An important feature of the . . . theories is that the modified acceleration law for weak gravitational fields has a repulsive Yukawa force added to the Newtonian acceleration law. This corresponds to the exchange of a massive spin 1 boson, whose effective mass and coupling to matter can vary with distance scale. A scalar component added to the Newtonian force law would correspond to an attractive Yukawa force and the exchange of a spin 0 particle. The latter acceleration law cannot lead to a satisfactory fit to galaxy rotation curves and galaxy cluster data. . . .

A modified gravity theory based on a D = 4 pseudo-Riemannian metric, a spin 1 vector field and a corresponding second-rank skew field Bμν and dynamical scalar fields G, ω and μ, yields a static spherically symmetric gravitational field with an added Yukawa potential and with an effective coupling strength and distance range. This modified acceleration law leads to remarkably good fits to a large number of galaxies and galaxy clusters . . . .

In contrast to standard dark matter models, we should not search for new stable particles such as weakly interacting massive particles (WIMPs) or neutralinos, because the fifth force charge . . . that is the source of the neutral vector field (skew field) is carried by the known stable baryons (and electrons and neutrinos). This new charge is the source of a fifth force skew field that modifies the gravitational field in the universe.

Translated into English, one way of articulating what this theory does is to do away with dark matter, dark energy, extra dimensions, the Higgs particle and other undiscovered fundamental particles, in favor of what you could either call a modification of the law of General Relativity or a fifth force (in addition to the electromagnetic force, the strong nuclear force, the weak nuclear force and traditional gravity). His theory also identifies the Big Bang as a non-singularity from which the Second Law of Thermodynamics proceeds in opposite time directions (forward in time in ours, backward in time on the other side of the Big Bang). Thus, overall Moffat provides a less weird explanation of the world than most other prevailing theories trying to explain current observations, while claiming to fit the data.

This doesn't mean that Moffat is right, or even that I believe he is right. Minority theories in science usually fail, and usually fail for good reason. But it would certainly be comforting if he were right, and it appears that it may be feasible to figure out if he is right using means available over the next decade or two. If the Large Hadron Collider defeats everyone's expectations and fails to detect a Higgs boson, then Moffat's scientific stock will rise immensely.
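For readers who want the "repulsive Yukawa force added to Newton" statement in symbols: in Moffat's published weak-field treatments the effective gravitational coupling is typically written in a form along the following lines (my own gloss, not a quotation from the papers; here α plays the role of the effective coupling strength and μ the inverse distance range mentioned in the quote above):

$$G(r) = G_N\left[1 + \alpha\left(1 - e^{-\mu r}\,(1 + \mu r)\right)\right], \qquad a(r) = -\,\frac{G(r)\,M}{r^2}$$

At small r the exponential term cancels the α enhancement and the law reduces to ordinary Newtonian gravity, while at distances large compared to 1/μ the effective coupling saturates near G_N(1 + α), which is what lets the theory mimic the extra pull usually attributed to dark matter.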
Footnote: a non-MOG paper of Moffat's, about the notion that gravitons may be bound pairs of neutrinos and could explain dark energy, is similarly interesting:

The graviton is pictured as a bound state of a fermion and anti-fermion with the spacetime metric assumed to be a composite object of spinor fields . . . . If we assume that the fermion is a light neutrino with mass m ∼ 10⁻³ eV, then we obtain the effective vacuum density ρ̄ ∼ (10⁻³ eV)⁴, which agrees with the estimates for the cosmological constant from WMAP and SNIa data. . . .

[W]e have predicted a vacuum density ρ̄ ∼ (10⁻³ eV)⁴ in agreement with λCDM model estimates from WMAP and SNIa data, when we identify the bound state fermion associated with the graviton condensate with a light neutrino with mass m ∼ 10⁻³ eV. By identifying ψ with a light neutrino field, we have predicted the correct magnitude of ρ̄ that fits the λCDM model interpretation of dark energy. This suggests that we describe the dark energy as graviton condensates formed from fluctuating light neutrinos. The source of dark energy would be light neutrino and anti-neutrino condensates.

The research of Jack Burns from the University of Colorado at Boulder, featured on Colorado Matters today, also provides some interesting insight into how galactic clusters, where MOND theory fails, differ from other cosmic phenomena. He notes that galactic clusters tend to appear at the intersections of hard-to-see gaseous macrofilaments of matter that seem to form the skeletal outline of the universe.
A Packet Sniffer is a legitimate network administration tool, which can be extremely powerful in the wrong hands. The beauty, from the cracker's point of view, is that the sniffer is passive - that means that no systems need to be broken into to run it, and it is not possible to detect it running. (Haha, as if... see later)

It allows you to view not only what people are looking at on the internet, but also internal protocols such as people's connections to a mail server, secure shared directories and so on.

How it works

A network typically consists of computer equipment, hubs and routers. When a message leaves a computer, destined for another, the first place it hits is usually a hub. The hub will broadcast the message to all of its other ports, and so on through the network. A router will take messages and only forward them to the appropriate ports - thereby lowering traffic on individual parts of a local network. There may be a subnet per floor in a large building, with all machines on each floor connected by hubs. Each machine on each subnet will receive all messages destined for any machine on that subnet. They then check the destination address, and if it is the correct destination they will pass the message on to the software layer for appropriate use.

It is easily possible (given administrator/root privileges on a machine) to ask the network card to report all traffic, not just that destined for the current machine. You will then be capable of watching and decoding any traffic on your router spar - possibly your whole company, maybe a floor of your building, maybe just the room you're in.

Packet Sniffers come in various levels of complexity. Some will simply log or save all data with little decoding. Others will decode several protocols and log various items separately.

To the newbie cracker, this is a magical device. Set it running on a network and you'll end up with a few log files. One will contain everyone's passwords as they check their POP3 mail. Another will contain all outgoing mail. And so on.

To the white hat and system administrator, the tool is useful for determining which services and protocols are unencrypted, giving important information for prioritizing work. To the black hat, it is a source of passwords, private data and so on. For example, imagine that someone uses their cash card PIN as their logon password. Free money!

I once performed a password log for research purposes, and discovered, in a sample of 18 passwords:
- 6 unchanged from the day they were handed out - these were generated very simply, including the surname of the user.
- 3 dictionary words - a brute-force cracker can obtain these quickly.
- 1 possible cash-card PIN - this is very bad protection of privacy.
- 2 over-shoulder passwords - these would be easily remembered if you stood behind the user and watched them type.
- 6 mostly-sound passwords - mixtures of characters, numerals, symbols etc.

The novice sniffer may think they are undetectable because they are not compromising any systems in order to sniff. This is not the case. Two classic detection methods are now presented:

1. A blabbing NIC. Before sniffing is possible, the network interface card must be put into promiscuous mode. Several NICs will warn the network when this occurs. A vigilant sysadmin will spot it. To avoid getting caught, a potential cracker should check the MAC address of the machine they want to run the sniffer on, then get hold of the same card (using its vendor ID to identify it) and check if it's the blabbing sort.
The novice sniffer may think they are undetectable because they are not compromising any systems in order to sniff. This is not the case. Two classic detection methods follow:

1. A blabbing NIC. Before sniffing is possible, the network interface card must be put into promiscuous mode. Several NICs warn the network when this occurs, and a vigilant sysadmin will spot it. To avoid getting caught, a careful cracker would check the MAC address of the machine they want to run the sniffer on, obtain the same model of card (using its vendor ID to identify it), and test whether it is the blabbing sort.

2. A ping detection. Since the kernel of the sniffing machine is seeing all packets, not just the ones destined for it, a ping can be sent to a suspect machine with the correct IP address but a wrong MAC address. Normally such a frame would be filtered out by the NIC. In promiscuous mode, the packet gets through, and the sniffing machine acknowledges the ping. Oops!
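A sketch of that second test, again assuming scapy and root privileges; the interface IP and the bogus MAC below are placeholders. Bear in mind some operating systems re-check the destination MAC in software, so a silent host is not proof of innocence:

```python
# Promiscuous-mode ping test: correct IP, deliberately wrong destination MAC.
# A normal NIC drops the frame before the kernel ever sees it; a NIC in
# promiscuous mode passes it up, and many stacks answer the echo anyway.
from scapy.all import Ether, IP, ICMP, srp1

BOGUS_MAC = "00:11:22:33:44:55"   # placeholder - NOT the suspect's real MAC
SUSPECT_IP = "192.168.1.42"       # placeholder suspect address

reply = srp1(Ether(dst=BOGUS_MAC) / IP(dst=SUSPECT_IP) / ICMP(),
             timeout=2, verbose=False)

if reply is not None:
    print(f"{SUSPECT_IP} answered a ping it should never have seen - "
          "its interface is probably in promiscuous mode.")
else:
    print("No reply: NIC filtering looks normal (or the host is down).")
```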
Present Perfect Simple

This post will look at how the present perfect simple is formed and how to use it, along with some exercises to help your understanding. After reading this post and doing the exercises you will have a greater understanding of the present perfect. We are going to keep adding to this page and making it even better, so if there is anything that you would like us to add, get in contact.

Let's start with a song that uses the present perfect a lot: U2 - "I Still Haven't Found What I'm Looking For." Try to spot the different examples of the present perfect (including the title).

| Subject | Auxiliary Verb (to have) | Past Participle |
| --- | --- | --- |
| I / we / you / they | have | worked |
| He / She / It | has | worked |

Here are the contractions of the affirmative present perfect:

- I've = I have
- We've = We have
- You've = You have
- They've = They have
- She's = She has
- He's = He has
- It's = It has

| Subject | Auxiliary Verb (to have) in the negative | Past Participle |
| --- | --- | --- |
| I / we / you / they | have not | worked |
| He / She / It | has not | worked |

'Haven't' is the contracted form of 'have not'. 'Hasn't' is the contracted form of 'has not'.

| Auxiliary Verb (to have) | Subject | Past Participle |
| --- | --- | --- |
| Have | I / we / you / they | worked? |
| Has | He / She / It | worked? |

The first thing to note is that the present perfect simple is used to describe actions that happened at an unspecified time in the past. It CAN'T be used with specific time expressions: yesterday, two days ago, last night, etc. (except when we use 'since'). Time expressions used with the present perfect are: this week (month, year), today, for, since, ever, never, already, yet and still.

"Have you ever...?"

This is a common question in English. When you meet people they want to know what you have done in your life. When asking this question it is not important 'when' but 'if' you have done something. Look out for the adverbs "never," "ever" and "before."

- I have climbed the highest mountains (from the song).
- I have been to France.
- Have you ever played chess?
- We haven't met him before.
- I have never seen that film.
- Have you ever been to Japan?
- She has never eaten sushi!

David Talks about His Experiences

Hi, my name is David and I like to travel. I have travelled to Japan, China, Thailand and South Korea, but I have never been to South America and I would love to go. I also like to watch movies. I have seen many different films but I have never seen Star Wars. Last year I started learning Spanish but I have never spoken to a native Spanish speaker.

"I have been a teacher for..."

We use the present perfect to talk about an action that started in the past and continues in the present (and probably will continue in the future). Look out for 'for' and 'since'.

- I have been a teacher for 2 years.
- She hasn't seen him since Saturday.
- How long have you worked here?

'For' is used when we talk about duration (two minutes, one hour, two weeks, three months, four years, etc.) and 'since' is used when referencing a specific point in time (Saturday, last year, two years ago, 1999, January).

"Ouch, I've cut myself!"

We use the present perfect to talk about new information or a change.

- I have bought a new car. (I didn't have one but now I do.)
- She's left. (She was here and now she isn't.)
- They have gone to Spain. (They were here yesterday but now they are in Spain.)

The present perfect is also used to talk about an action we are still waiting to be completed. Look out for "still."

- But I still haven't found what I'm looking for (from the song).
- She still hasn’t arrived yet. - They still haven’t done it. - The game still hasn’t finished. 1. Write about where you have been to. E.g., I have been to France, Italy etc. 2. Write about other experiences. What you have and haven’t done. E.g., I have tried sushi but I haven’t tried Indian food. I’ve watched a lot of films etc., 3. Write about what has changed over the past year. E.g., I’ve passed my driving test etc, I’ve bought a new car and I’ve moved to London Etc., 4. Read the following present perfect dialogues to become familiar with more examples.
Their slightly sweet taste and versatility are part of what make carrots so popular, but beyond this, you should strive to eat more carrots because of what they can offer your health.

- Heart Disease. Eating more deep-orange-colored fruits and vegetables is associated with a lower risk of coronary heart disease (CHD). In particular, carrots are associated with a 32 percent lower risk of CHD, leading researchers to conclude [3]: "... [A] higher intake of deep orange fruit and vegetables and especially carrots may protect against CHD." The consumption of carrots has also been associated with a lower risk of heart attacks in women [4].

- Cancer Prevention. Antioxidants in carrots, including beta-carotene, may play a role in cancer prevention. Research has shown that smokers who eat carrots more than once a week have a lower risk of lung cancer [5], while a beta-carotene-rich diet may also protect against prostate cancer [6]. Research published in the European Journal of Nutrition also found a significantly decreased risk of prostate cancer associated with the intake of carrots [7]. The consumption of beta-carotene is also associated with a lower risk of colon cancer [8], while carrot juice extract may kill leukemia cells and inhibit their progression [9]. Further, a meta-analysis found that eating carrots may reduce your risk of gastric cancer by up to 26 percent [10]. Carrots also contain falcarinol, a natural toxin that protects carrots against fungal disease. It's thought that this compound may stimulate cancer-fighting mechanisms in your body, as it has been shown to cut the risk of tumor development in rats [11].

- Eye Health. A deficiency in vitamin A can cause your eyes' photoreceptors to deteriorate, which leads to vision problems. Eating foods rich in beta-carotene may restore vision [12], lending truth to the old adage that carrots are good for your eyes. In addition, research shows women may reduce their risk of glaucoma by 64 percent by consuming more than two servings per week of carrots [13]. Carrots are also a rich source of lutein, and research suggests "increased lutein consumption has a close correlation with reduction in the incidence of cataract" [14].

- Brain Health. Carrot extract has been found to be useful for the management of cognitive dysfunctions and may offer memory improvement and cholesterol-lowering benefits [15]. A high intake of root vegetables, including carrots, is also associated with better cognitive function and a smaller decline in cognitive function during middle age [16]. And a study published in the British Journal of Nutrition found a diet rich in plant foods is associated with better performance in several cognitive abilities in a dose-dependent manner among the elderly [17]. Notably, carrots had one of the strongest positive cognitive associations of the plant foods tested.

- Liver Protection. Carrot extract may help to protect your liver from the toxic effects of environmental chemicals [18].

- Anti-Inflammatory Properties. Carrot extract also has anti-inflammatory properties, providing benefits that were significant even when compared to anti-inflammatory drugs like aspirin, ibuprofen, naproxen and Celebrex [19].

- Anti-Aging Benefits. Carrots are a valuable source of antioxidants, including carotenoids (beta-carotene, lutein and alpha-carotene), hydroxycinnamic acids (caffeic acid and ferulic acid), and anthocyanins. Antioxidants may help to ward off cellular damage from free radicals, slowing down cellular aging.
As noted by the George Mateljan Foundation [20]:

"Different varieties of carrots contain differing amounts of these antioxidant phytonutrients. Red and purple carrots, for example, are best known for their rich anthocyanin content. Orange carrots are particularly outstanding in terms of beta-carotene, which accounts for 65% of their total carotenoid content. In yellow carrots, 50% of the total carotenoids come from lutein. You're going to receive outstanding antioxidant benefits from each of these carrot varieties!"

- Skin Health. Orange-red vegetables are full of beta-carotene. Your body converts beta-carotene into vitamin A, which prevents cell damage and premature aging. Beta-carotene may also protect your skin from sun damage. Researchers even found that carotenoids, which are found in high concentrations in carrots, impart a warm glow "sufficient to convey perceptible improvements in the apparent healthiness and attractiveness of facial skin" [21].

- Oral Health. Carrots help to clean your teeth by increasing saliva production. Eating them at the end of a meal may even help to reduce your risk of cavities [22].
Shortly after the United States entered World War II, all four of the Compton children chose to serve their country. Dan commanded a Navy PT boat in the Pacific. Jim, a Marine lieutenant, led his platoon in the liberation of Iwo Jima. Ann served in a Navy hospital as a social worker with the American Red Cross. John Parker, the youngest, left Princeton University after his freshman year to train with the 10th Mountain Division on skis, in the Rocky Mountains of Colorado. He fought with the 10th in the Italian Alps. Near the small village of Iola in the mountains northwest of Florence, a sniper's bullet ended his promising young life.

The untimely death of John Parker, the tragic loss of so many young people, and the impact of this devastating war on the whole world moved Dorothy and Randolph to establish a charitable trust in 1946. The primary mission they set for the trust was to build the foundations for peace and to help prevent another world war.

Shortly after the end of World War II, Dorothy and Randolph visited the village of Iola and made friends with the village priest and local residents. Later they helped make possible the rebuilding of the village's bombed-out church. The parish installed a plaque in John Parker's memory at the entrance to the church, commemorating his death and the bond formed between his parents and the people of Iola.

Another tragic loss occurred when the Comptons' eldest son, Dan, died of polio in 1955, only months before the Salk vaccine secured FDA approval.

Dorothy and Randolph believed that world peace would only be possible if the conditions that brought about war could be eliminated. As a result they focused their funding on the problems of the rapid growth of the human population, the depletion of natural resources due to population growth and increasing consumption levels, the accompanying degradation of the environment, and the chaotic status of human rights in much of the world. The Trust's emphases included training for exceptional young scholars within its primary fields of interest, educational opportunities for minority students and students from developing countries, and assuring public access to information.

Randolph believed strongly in the importance of combining research and activism to address world problems. He felt that scholarship was needed to define problems and to provide effective solutions. Equally important was using the knowledge and information obtained to get the facts before the public, encourage debate, and press for political change to correct the conditions which threaten human survival. A primary interest of Dorothy's was taking leadership in providing equal educational opportunities for minorities.

Dorothy and Randolph both felt that the quality of individual leadership determined the potential for success in any venture. In order to ensure the vitality and integrity of the Trust, they sought the advice and assistance of other individuals they respected with ideas worthy of pursuit. The family has always looked to non-family Board members to enrich the Foundation and the quality of its grants with their talents and perspectives. Over time the Trust's mission expanded to include support for welfare, social justice, and the arts in the communities where family Board members live. The Trust was converted to a Foundation in 1973.
In 1987, the Foundation expanded its international focus as a result of a gift of stock from the Danforth Foundation given for this purpose. In 1989 the Foundation relocated its offices from New York City, where it was founded, to Northern California, where Jim Compton and Ann Compton Stephens lived. Since then several of Dorothy and Randolph's grandchildren have served or currently serve on the Board, and the third generation and their spouses have taken an increased interest in the Foundation and have assumed additional responsibilities.

Dorothy and Randolph's vision is still alive today and still very much a part of the Foundation's legacy. Times have changed and the Foundation recognizes new approaches and new problems, but it continues to honor the Founders' ideas and values and the world challenges they met with such passion.
Polyhydramnios is a condition in which there is too much fluid in the amniotic sac, the sac that holds the developing baby (fetus). This liquid is called amniotic fluid, and it surrounds the fetus throughout pregnancy.

Polyhydramnios can be caused by:

Sometimes the cause of polyhydramnios may not be found.

Polyhydramnios increases the risk of:

Severe polyhydramnios may be treated with medicine, such as indomethacin. Excess fluid is sometimes removed through a needle that is inserted through the mother's abdomen into the amniotic sac.

By Healthwise Staff. Primary medical reviewer: Sarah Marshall, MD - Family Medicine. Specialist medical reviewer: William Gilbert, MD - Maternal and Fetal Medicine. Current as of May 22, 2015.
ATB instruments or ATB recorders. Composed by Alexander Agricola. Miscellaneous Music, Recorder Trios. Early Music Library. Sacred. 3 scores. Published by London Pro Musica (MM.EML0311).

Item Number: MM.EML0311

These three-part motets by Alexander Agricola (c. 1445-1506), based on plainsong cantus firmi, are written in a distinctive, rhythmically elaborate style often found in three-part music of Agricola's time, whether sacred or profane. Si dedero was one of the most widely known pieces of its kind, surviving in over twenty different polyphonic sources (mostly without text underlay), together with several lute and keyboard arrangements. The antiphon Da pacem was used by many composers as a basis for more or less elaborate settings: the smooth contours of the opening allowed the easy use of many of the standard rhythmic and melodic figures of the day; canonic settings are also quite common (for instance, two double canons in Andrea Antico's 1520 collection of such pieces).

CONTENTS: Si dedero; O quam glorifica luce; Da pacem.
Seabrook 1977

A film by Robbie Leppzer. Black and white, 80 min, 1978.

In April 1977, the small coastal town of Seabrook, New Hampshire became an international symbol in the battle over atomic energy. Concerned about the dangers of potential radioactive accidents, over 2,000 members of the Clamshell Alliance, a coalition of environmental groups, attempted to block construction of a nuclear power plant in Seabrook. 1,414 people were arrested in that civil disobedience protest and jailed en masse in National Guard armories for two weeks.

Filmed in a video-verité style, Seabrook 1977 chronicles the dramatic events which made world headlines and sparked the creation of a grassroots antinuclear power movement across the United States. Scenes of the nonviolent demonstration and subsequent internment are interwoven with interviews with participants on all sides of the event, including local Seabrook residents, antinuclear activists, New Hampshire's pro-nuclear Governor Meldrim Thomson, and police and utility officials.

The video vividly documents the unfolding events as people march with banners and backpacks across the tidal marshes onto the construction site, erect a colorful tent city, and conduct on-site negotiations with the governor and police. After the mass arrests at the nuclear site, the scene changes to inside the armories, where the video follows the extraordinary experiences of the largest group of U.S. citizens incarcerated since the Vietnam war protests.

Seabrook 1977 tells the story of this seminal event of 1970s environmental activism and shows people making history from the grassroots. As the nuclear energy lobby tries to sell nuclear power as a "carbon-free alternative" to fossil fuels in the current debate over climate change, the experiences of 1970s anti-nuclear activists are more relevant than ever.

"Seabrook 1977 is an invaluable historical document. It portrays one of those events in American history omitted from our textbooks but which is an important part of the ongoing struggle of the people in this country for a healthy society. The film manages to capture not only the sights of an extraordinary action, but the voices of ordinary people expressing their most personal feelings about one of the critical issues of our time." - Howard Zinn, author of A People's History of the United States

"A potent catalyst for social action. A fascinating educational and emotional experience that left me feeling that at last I understood exactly what happened at Seabrook. Seabrook 1977 is full of surprises. Not only does it document the events - it also makes them meaningful. Above all, it never loses sight of the essential humanity of all the individuals involved." - Monica Faulkner, Amherst Morning Record

"Superb directing. Seabrook 1977 shows why Seabrook became legend." - William Scaife, Springfield Republican

This DVD is part of Turning Tide: The Robbie Leppzer Collection. Nationally broadcast on Free Speech TV and WGBY-TV (PBS, Springfield, MA). Turning Tide Productions.
Types of Ballots

by David Johnson

Italian Origins of "Ballot": The word "ballot" comes from the Italian "ballotta," or "little colored ball." In the 13th century some Italian communes used ballottas for votes. But the use of some type of voting mechanism is even older.

Questions over the Florida ballot in the 2000 presidential election generated new interest in the various kinds of ballots Americans have used over the years.

As the country developed, ballots known as "papers" came into use. The word "ballot" was adopted around 1676. The British colonies in America were the first to use a secret ballot, a practice which later became widespread. But the ballot Americans would recognize today, which contains the names of all candidates, had still not made its appearance. Until the 1880s, there was no single ballot. Political parties issued long "tickets" listing all the candidates running for office from that party. Voters were urged to "vote a straight party ticket."

First used in the Australian state of Victoria in 1857, the paper ballot listing all the candidates was first known as "the Australian ballot." In 1889, New York became the first American state to use these ballots. Gradually, they came to replace voting by ticket. Although they were once common, today only 1.7% of registered voters use paper ballots. They are primarily used in small towns, rural areas, or for absentee voting.

Until recently, more than half of all American voters used machines with levers beside the name of each candidate. The voter entered a booth, drew a curtain, and then pulled the levers corresponding to each voting choice. The machines recorded the votes and the numbers of people voting. Also known as the "Myers Automatic Booth," mechanical lever machines made their first appearance in the U.S. at Lockport, N.Y., in 1892. Rochester, New York, used them four years later, and soon they were used across New York State. By 1930, residents of most major American cities voted on mechanical machines. In the 1996 presidential elections, however, roughly 20% of all voters used the machines, which are no longer made. Marksense and direct recording electronic systems are now replacing them.

The famous "butterfly" ballot used in Florida is a type of punch card ballot. There are two main types of punch card ballot. To use one type, voters are issued a list of candidates and ballot questions, with each voting choice assigned a number. They also receive a punch card covered with holes, with a number beside each hole. They must punch the hole that corresponds to the number of the choice they wish to make. For example, ballot question 8 might have two choices - number 10 for "yes" and number 11 for "no." Voters would have to punch the hole at the correct number to register their preference. In the other type, voters make a hole beside the name of the candidate of their choice. Punch cards were first used in two Georgia counties for the 1964 presidential primary election. In 1996, 37% of all voters used punch cards, including the 3.8 million registered voters in Los Angeles County, the nation's largest electoral jurisdiction.

The marksense system, also known as optical scan, is becoming more popular. In 1996, 25% of all American voters used the system. Optical scanning calls for voters to use a black marker to fill in a circle or box beside their voting choice. A scanning machine then picks up the dark marks on the paper, tabulating the results.
The direct recording electronic method, DRE, uses a voting machine with the candidates printed on a computer screen. The voters push a button or touch the appropriate spot on the screen to record their choices. Those wishing to write in a candidate are able to use a keyboard to type the name. In 2004, nearly 29% of voters used a DRE system.

The town meeting form of government, which is mainly confined to the six New England states, decides questions of government, including the annual operating budget, town by-laws, or other laws, with an actual show of hands. If the vote is close, there are provisions in most towns for a secret paper ballot. In some cases the town meeting moderator may ask voters to stand. In larger towns, voters elect representatives to legislate at town meeting. In smaller communities, however, "open town meetings" are the norm. In communities with open as opposed to representative town meeting, any registered voter may attend to speak and vote on the articles under consideration.

While various voting methods are used in the United States today, the Federal Election Commission heads a consortium that has designed standards to ensure that electronic systems are accurate and fair. The consortium includes state election commissioners and various technical experts. The National Association of State Election Directors (NASED) maintains an election center in Houston, Texas, which keeps records of which voting systems have been tested.

Frequently, American ballots are quite long, especially if voters are being asked to decide a number of questions. English and Canadian ballots, by contrast, are usually quite short.

For more information on voting, go to the Federal Election Commission website.
Inflammatory Bowel Disease and Vaccines

1. What is Inflammatory Bowel Disease?

Inflammatory Bowel Disease, also known as IBD, is a general medical term used to refer to chronic inflammatory diseases of the intestine. Two common inflammatory bowel diseases are ulcerative colitis and Crohn's disease. These chronic illnesses can inflame the gastrointestinal tract, causing bloody diarrhea, abdominal pain, and weight loss. Ulcerative colitis can affect the entire large intestine or the rectum. Crohn's disease mainly affects short segments of both the small and large intestine. Although IBD can begin at any age, its usual onset is from age 15 to 30 years. IBD is a rare disease, with 3-20 new cases recognized per 100,000 persons per year.

2. What causes inflammatory bowel disease?

The cause(s) of inflammatory bowel disease is not known. There are several unproven theories as to the cause(s) of IBD:

A) IBD is known to occur in the same family, suggesting a possible inherited cause.
B) A possible environmental cause is suggested because Crohn's disease most often occurs in people who smoke, in residents of Northern European countries, and in urban areas.
C) Another theory is that significant emotional events in a person's life may trigger the disease.
D) Other researchers speculate that the disease may be caused by an infection or virus.
E) Still others believe that the body's immune system is reacting to unidentified or unknown antigens (an antigen is a protein marker on the surface of cells that identifies the cell). This would cause the immune system to respond inappropriately, resulting in chronic inflammation.

It is not known if measles, mumps or rubella virus infection can cause IBD. The virus that causes measles disease infects the respiratory system and then spreads to lymphatic tissue (an important part of our immune system). During the acute infection, lymph cells in the gastrointestinal (GI) tract are infected, but whether this causes chronic inflammation is highly questionable. One theory speculates that measles virus may persist in the intestine in certain individuals and later trigger a chronic inflammatory infection; however, this has not been proven. Because MMR vaccine contains a very weak live measles virus, it has been suggested that measles vaccine could cause an inflammatory process in the intestine. This theory has not been proven and is speculative. Two types of data - epidemiological and pathological - have been used to link measles infection and IBD. However, because conflicting results have been obtained for both types of data by different investigators, this link cannot be established.

3. Does the measles virus vaccine cause IBD?

There is no scientific proof that measles vaccine virus can cause IBD. In fact, because almost everyone is vaccinated when they are young, most people with IBD will have received a vaccine. In order to prove that measles vaccine caused IBD, it would be necessary to prove that the measles virus is definitely present in GI lesions, that it is active, and that it can cause an inflammatory response. Additionally, it would have to be shown that this reaction was caused by the measles virus or by the attenuated (weakened) measles vaccine virus.

4. What about studies that have suggested an association between measles virus vaccine and IBD?

Isolated studies that have suggested an association are weak and have several flaws. The possibility of an association between measles virus and chronic inflammatory bowel disease was recently discussed in a British medical journal, The Lancet.
The researchers believed they had discovered a new childhood illness that caused bowel disease and psychiatric problems, including behavioral disorders and autism. MMR vaccine was suggested as a possible cause. The theory is that MMR vaccine could lead to intestinal inflammation, resulting in decreased absorption of essential vitamins and nutrients through the intestinal tract, which in turn could lead to developmental disorders. An editorial expressing concerns about the study was published in the same issue. That all patients had bowel disease is not surprising, since all were referred to a department of gastroenterology. Among the other concerns: in this small study (12 patients) there is no report of detection of vaccine viruses in GI or brain tissue for any of the patients, and multiple laboratories, using more sensitive and specific laboratory tests, have failed to detect any findings to suggest this. In addition, the GI pathology should have existed prior to the behavioral symptoms to support their theory; the researchers reported that the onset of GI symptoms was unknown in 5 patients and noted after the onset of behavioral symptoms in another 5 patients.

A few Swedish studies have also suggested a high risk of Crohn's disease in those exposed to measles in utero. However, the Swedish studies involved very small numbers of cases: 2 cases in the first study and 4 in the second study (2 of which were cases in the first study). Another study suggested, in a retrospective cohort design, that MMR vaccine might be a risk factor for Crohn's disease. However, the selection and recall biases and the differences in data collection in this study were so substantial as to cast doubt on the validity of the findings. Another study reported finding measles virus proteins and RNA in the intestinal tissue of cases of Crohn's disease, using in situ hybridization and immunologic staining.

5. Is there scientific evidence (both epidemiologic and laboratory) to show there is no association between measles vaccine and inflammatory bowel disease?

There is strong scientific evidence (both epidemiologic and laboratory) to show there is no association between measles vaccine and inflammatory bowel disease. In June 2000, a population-based epidemiologic study conducted by the CDC concluded that there was no evidence that vaccination with MMR or other measles-containing vaccines, or the age of vaccination early in life, was associated with an increased risk for the development of IBD. Using a case-control design, the study compared patients diagnosed with IBD and those without IBD, and looked at the history and timing of MMR vaccination. No association was found to link the MMR vaccine and IBD. This study was the result of a collaborative project between the CDC and four large HMOs, part of the Vaccine Safety Datalink Project.

Four other epidemiologic studies have failed to confirm the possible association between measles virus and inflammatory bowel disease. Nielsen et al. examined all possible cases of measles in pregnant women admitted to a Copenhagen hospital from 1915-1966. None of the offspring of the 25 identified women had developed Crohn's disease. In 1995, Hermon-Taylor compared the incidence of Crohn's disease in England and Wales with measles infection, including data from after the introduction of measles vaccine. No association was found.
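As an aside, the "no association" conclusions of the case-control studies above and below come down to a simple odds-ratio comparison. A minimal illustrative sketch in Python, with entirely made-up counts (not data from any of the studies cited):

```python
# Illustrative odds-ratio arithmetic for a case-control design: compare the
# odds of prior MMR vaccination among IBD cases with the odds among controls.
# All counts are hypothetical, chosen only to demonstrate the calculation.
import math

cases_vaccinated, cases_unvaccinated = 140, 60         # hypothetical IBD cases
controls_vaccinated, controls_unvaccinated = 700, 300  # hypothetical controls

# Odds ratio: (a/b) / (c/d) = (a*d) / (b*c)
odds_ratio = (cases_vaccinated * controls_unvaccinated) / (
    cases_unvaccinated * controls_vaccinated)

# Approximate 95% confidence interval on the log scale (Woolf's method).
se_log_or = math.sqrt(1 / cases_vaccinated + 1 / cases_unvaccinated
                      + 1 / controls_vaccinated + 1 / controls_unvaccinated)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({low:.2f}, {high:.2f})")
# An OR near 1.0 whose confidence interval spans 1.0 - as with these
# numbers - is what "no evidence of association" means in such studies.
```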
Returning to the published studies: in their case-control studies, Jones et al. and Feeney et al. found no association between IBD and measles infection or measles vaccine, respectively. In another study, researchers used the same laboratory methodology as Wakefield et al. and could not identify any measles virus in patients with IBD, although they did find the presence of other viral and bacterial agents (Liu). Several other research groups, using more sensitive and specific tests (polymerase chain reaction, PCR), have not found any evidence of measles virus RNA in the gastrointestinal tissues of patients with Crohn's disease or ulcerative colitis.

6. What does the CDC recommend for measles, mumps, rubella (MMR) vaccine?

The CDC continues to recommend two doses of MMR vaccine for all persons; for children, the first dose is recommended at 12-15 months of age and the second dose at 4-6 years of age. Although the risk of Inflammatory Bowel Disease (IBD) is higher for those who have relatives with IBD, there are no data to suggest that measles vaccine will increase or decrease this risk. Measles vaccine is recommended for children with a family history of IBD unless there is another specific reason not to vaccinate (for example, in persons who are very ill and not able to fight infections).

Source: Centers for Disease Control and Prevention, U.S. Department of Health and Human Services, November 2000
From ladybug crafts to butterfly crafts, we have a bunch of creepy crawly crafts for kids! Most of our kids bug crafts use simple materials that you probably already have on hand. We've included templates on all crafts where necessary to make them even easier to put together. As always, we've made all of these bug crafts in our own home (many with our own daughter) to ensure that they are fun and child-friendly. We hope you enjoy making our kids bug crafts with your children too!

This friendly bumblebee craft is fun to make at the beginning of the year when learning the "bee" rules (be kind, be gentle, be a good listener, etc.), the letter B, or insects. When finished, this bee craft looks great hanging from a classroom ceiling or as a refrigerator magnet.

Kids will enjoy making this handprint butterfly mosaic style craft. This colorful butterfly craft is a perfect project for spring or when learning about butterflies.

This educational craft uses pasta to teach the stages of a butterfly! You can use this craft when you have real caterpillars in the classroom and are waiting for them to become butterflies. This craft is perfect for a preschool class or to do at home. Kids will love creating this cute spring craft and waiting to see if their caterpillar has changed into a butterfly!

This is a classic children's craft that is a great spring craft and a great bug craft too! This cute butterfly craft is simple to make, with lots of opportunities for a child to personalize their creation. When the craft is done, the butterfly can also be used as a cute puppet or prop as your child flies it around the house.

This inchworm craft is a perfect animal craft to make when learning about the letter I. Our inchworm is easy to make using our provided template, construction paper and glue. Perfect for even young crafters, this alphabet craft is sure to hold their interest.

Check out this fun glow-in-the-dark firefly craft! Kids are sure to love the end result of this one and they will love the process of making it too. When you are done, take it into a dark room and watch it glow just like a real firefly.

This little bug makes the perfect toddler or preschool craft. Transform three paper plates into a ladybug in a few simple steps. Our daughter loved making this ladybug craft (especially making the spots!) and we hope your child will too.

Recycle a few water bottle lids into a happy caterpillar with our easy bottle cap caterpillar craft. Simply color the bottle caps with a crayon and then glue them to a piece of paper. Add glitter, googly eyes, and pipe cleaners to finish your caterpillar craft. This is a great bug craft for kids to make.

Turn a toilet paper roll into a cute butterfly that is able to stand up by itself! All you need is our printable template, some crayons and a little imagination. Glue the pieces together and you have a great insect craft for kids.

Turn a styrofoam ball into your very own itsy bitsy spider with our fun styrofoam spider craft! This bug craft uses fun materials and lots of paint, so it's sure to catch your child's interest. In addition to making a fantastic kids bug craft, it would also make a great Halloween craft.

This fuzzy caterpillar craft is a perfect toddler or preschool craft. Made from pom-poms and popsicle sticks, it's easy to make a whole family of these caterpillar crafts. They are also a great teaching tool to work on the concepts of big and small, color, and patterns. Our daughter loved this great bug craft.
If you are looking for a great preschool bug craft, our paper inchworm craft is the answer! All you need are a few paper strips, some scotch tape, googly eyes and pipe cleaners to make this very fun, simple kids craft. This is one of our favorite simple kids bug crafts.

This handprint butterfly craft is a great way to record your child's handprints so you can remember how small they once were long after they grow up. We love handprint crafts and do as many as we can come up with in our house. This bug craft would be great as a decoration or as a kids spring craft.

This is a simple craft that can be adapted to many age ranges and abilities. Kids will enjoy making this spider from simple materials including two paper plates. This spider craft is great for toddlers and is even fun to play with when you are done.

Who says all bug crafts have to look like real bugs! Our styrofoam lovebug craft is a great craft to let your children's imagination take off. Help them create their own insects using our bug craft as a guide. If you stick with our design, this lovebug is a great Valentine's Day craft as well.

Bubble wrap, paint and our provided template make it easy to put together this cute summer kids craft. The bubble wrap is a wonderful stamp that gives the beehive a fun look. The addition of fingerprint bees makes this summer craft even better.

Make this really cute heart bee craft with the help of our printable template and some basic craft materials. This craft is simple to do with kids and also makes a cute "Bee Mine" Valentine's Day craft.

Our paper plate spider web craft is a twist on the typical bug craft for kids. Help your children spin their own spider web made from yarn and a paper plate, and then draw on some spiders as well. This bug craft makes a great preschool craft, as well as a fun way to incorporate a nature lesson into your crafting.

To make our folding paper butterfly craft, print and cut out our butterfly template, fold it in half and have your child paint a design - the goopier the better! Then just fold the unpainted side over the painted side and press together to create a perfectly symmetrical butterfly and a great kids bug craft! Our daughter loved this one.

Our heart butterfly is a great preschool craft. Turn hearts of various sizes into a smiling butterfly. We've found that it's almost impossible to make just one of these heart butterfly crafts - our daughter is a huge fan of paper crafts and we ended up making a whole little family of butterflies. This bug craft also makes a great Valentine's Day craft.

This tissue paper butterfly craft is an incredibly simple bug craft for kids to make, so we suggest making many of them in a lot of different colors to decorate your home.

Check out our collection of free printable bug worksheets for kids. We've created a whole set of kids bug worksheets to go along with this collection of kids bug crafts. This set of printables includes worksheets designed to work on early spelling skills, math skills and more.