An EV’s speed controller is the equivalent of the carburetor or fuel-injection system in an ICE vehicle. To control the vehicle’s speed, the controller takes the energy from the battery pack and feeds it to the motor in a regulated manner. Modern controllers do this by pulse-width modulation, taking the full voltage from the battery pack and feeding it to the motor in thousands of tiny on–off pulses per second. The longer the duration, or “width” of the “on” pulses, the more electricity the motor receives and the faster the vehicle moves. Because the pulses are so tiny, the process feels completely smooth to the driver. EVs can have AC motors or DC motors, and each needs its own kind of controller. In EVs with AC motors, an AC controller must first convert the DC from the batteries into AC before feeding it to the motor. How does the controller know how much energy to give the motor? The potbox tells it. This linear potentiometer is a sensor that produces a resistance output proportional to its displacement or position. It responds to the driver’s foot pressure on the throttle pedal and sends a corresponding signal to the controller. The throttle pedal in an EV works just as it does in an ICE vehicle—the more you depress it, the faster you go. The motor is the brawn of the EV, converting electrical energy from the batteries into mechanical energy to propel the vehicle. Instead of invisible electrons flowing through wires, we now have rotating components. It’s the relationship between electricity and magnetism that enables the motor to do work. Passing electricity through a wire creates a magnetic field around the wire. By winding wire in a motor and running electricity through it, magnetic poles that repel each other are created, causing the shaft of the motor to spin. If the EV has regenerative braking, the motor can also act as a generator. When the vehicle is coasting or braking, the momentum of the car drives the motor—rather than the motor driving the car. 
The magnetic fields induce current in the wires, the flip side of the process described above. The electricity flows backward through the controller (which rectifies it from AC back into DC) and into the battery pack. This process also creates drag on the motor—the “braking” part of regenerative braking, which is very similar to what happens in an ICE car when you lift your foot off the throttle in a low gear. Though it’s an intrinsic part of AC drive systems, regenerative braking is more rare in DC systems, where a special controller and extra wiring are required to allow the motor to serve as a generator. The energy output from the spinning shaft of the motor now needs to reach the drive wheels. On a very small EV, the motor might drive the wheels directly, but with full-size vehicles, at least one level of gear reduction is necessary to reduce the revolutions per minute (rpm) of the motor to a usable speed at the wheels. A “direct-drive” vehicle will have a single gear reduction, which might be a gearbox or a belt-and-pulley arrangement. No shifting is necessary. This is common with AC motors that have upper limits of 8,000 to 13,000 rpm. DC motors with limits of about 5,000 to 6,000 rpm usually use the same kind of multiple-gear manual transmissions found in ICE cars. In EVs with manual transmissions, the clutch is usually retained and works the same as in an ICE vehicle.
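The pulse-width-modulation and gear-reduction arithmetic described above can be sketched in a few lines. This is a minimal illustration; the pack voltage, duty cycle and reduction ratio are invented example values, not figures from any particular EV:

```python
# Illustrative sketch (all values hypothetical): the average voltage a PWM
# controller delivers, and the single fixed gear reduction of a
# "direct-drive" EV.

def pwm_average_voltage(pack_voltage: float, duty_cycle: float) -> float:
    """Average voltage seen by the motor when the controller switches the
    full pack voltage on and off; duty_cycle is the fraction of each
    switching period that the pulse is 'on' (0.0 to 1.0)."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return pack_voltage * duty_cycle

def wheel_rpm(motor_rpm: float, reduction_ratio: float) -> float:
    """Single fixed gear reduction: wheel rpm = motor rpm / ratio."""
    return motor_rpm / reduction_ratio

# Half throttle on a 144 V pack: the motor sees 72 V on average.
print(pwm_average_voltage(144.0, 0.5))   # 72.0
# An AC motor at 9,000 rpm through a 9:1 reduction turns the wheels at 1,000 rpm.
print(wheel_rpm(9000, 9.0))              # 1000.0
```

Pressing the pedal further widens the "on" pulses, raising the duty cycle and hence the average voltage the motor receives.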
Deep in a desiccated Utah desert, surrounded by mountains and fringed with scorched sage and saltbush, stand the surreal remains of German Village. Out of bounds, out of place, out of time and 90 miles from Salt Lake City, it is surely the most bizarre feature of Dugway Proving Ground, a test site created by the Allied military during the second world war to develop weapons of mass destruction for use against civilian targets in Germany and Japan. All that survives of German Village is a single block of high-gabled, prewar Berlin working-class housing. It is accurate in every respect. And it should be: commissioned by the chemical warfare corps of the US army, it was designed by Erich Mendelsohn (1887-1953), the German architect who settled in the US in 1941 after a spell in England. I was alerted to the story of German Village by Mike Davis, who features it in his provocative book Dead Cities: A Natural History, a study of the vulnerability of modern cities from New York to Tokyo to destruction by man and nature. Mendelsohn's involvement in this deathly project seemed, at first, bizarre. A Jew from Allenstein in East Prussia (today, Olsztyn in Poland), Mendelsohn settled in Berlin, where he trained as an architect after studying economics in Munich. From the trenches of the great war he sent home visionary sketches of extraordinary streamlined, or expressionist, buildings. In 1919, during the great flu epidemic, he began work on the Einstein Tower, an astrophysics laboratory for the German mathematician and scientist, at Potsdam on the edge of Berlin. Over the next 10 or 12 years, Mendelsohn developed beautiful, sweeping, clean-lined, light-filled architecture - much of it in Berlin - that appeared to catch the spirit of the old German Enlightenment and represent it afresh in the uncertain days of the Weimar Republic. 
These included the Metal Workers Union building, the Universum Cinema on Kurfürstendamm, the Columbushaus (Galeries Lafayette) and several villas, including his own. Outside Berlin, the streamlined department stores he built for Schocken at Nuremberg, Stuttgart and Chemnitz were hugely influential worldwide. Mendelsohn left for England when Hitler was voted into power in 1933. Here, with the Russian-born dandy Serge Chermayeff, he built, among a number of fine houses, the De la Warr Pavilion at Bexhill-on-Sea. How can this Erich Mendelsohn be the architect of the dark and deathly German Village in the Utah desert? Mendelsohn left no correspondence or notebooks relating to the Dugway Proving Ground project, where napalm and poison gases were developed and tested. He had been under gas attack in the trenches, yet it is hard not to think that his primary motivation in the desert of Utah was revenge on the Nazis. If this seems fair enough, what remains disturbing is the fact that this work was expressly designed to destroy working-class districts of Berlin, including Wedding and Pankow. These had been communist strongholds, virulently anti-Hitler, before the Gestapo and SS all but destroyed opposition to the Nazi regime. A concerted Allied attack by the British and US air forces on working-class districts of German and Japanese cities had, however, become more or less official policy by 1943. Churchill wanted to gas them. Killed and mutilated in sufficient numbers, the German working class would, he argued, rise up against Hitler and bring a quick end to the war. "It is absurd to consider morality on this topic," he told RAF planners when the first German V1 rockets fell on London. "I want the matter studied in cold blood by sensible people, and not by psalm-singing uniformed defeatists." 
Sensible people included the prime minister's favourite scientific adviser, Professor Frederick Lindemann (Lord Cherwell), who insisted that "the bombing must be directed essentially against working-class houses. Middle-class houses have too much space around them, and so are bound to waste bombs." Psalm-singing uniformed defeatists included the US air force's celebrated commander Jimmy Doolittle, who took against Churchill's proposed Operation Thunderclap, which aimed to kill 275,000 Berliners in a single 2,000-plane raid scheduled for August 1944. It did not take place. Washington's war secretary Henry Stimson said he did not want "the United States to get the reputation of outdoing Hitler in atrocities". His less diplomatic deputy, Robert Lovett, pleading the case for adopting anti-personnel bombs loaded with napalm and white phosphorous, said: "If we are going to have a total war, we might as well make it as horrible as possible." Churchill trumped Lovett by calling on US president Franklin D Roosevelt to speed up production of a promised 500,000 top-secret "N-bombs" - filled with anthrax, developed at Dugway - to be dropped on Berlin and five other German cities. As the debate raged in political and military circles, Mendelsohn, with scientists from Standard Oil and German-emigre set designers from Hollywood's RKO studio, set to work on German Village. RKO expertise contributed the design of proletarian Berlin interiors down to the last detail. Using forced labour (inmates from Utah state prison), German Village and its six "mietskasernen" (rent barracks) apartment blocks were completed in 44 days, in time for experiments scheduled from May 1943. Mendelsohn and his team had done a good job. Their designs were far superior to the German housing built in England for test destruction by the RAF at Harmondsworth, near Heathrow airport. Assaulted by napalm, gas, anthrax and incendiary bombs, German Village was rebuilt several times during 1943. 
Nearby, the Japanese Village (long since vanished), designed by the Czech-educated architect Antonin Raymond (1888-1976), paved the way for incendiary attacks on working-class districts of Tokyo. On March 9 1945, 334 US air force B-29 superfortress bombers dropped 2,000 tons of napalm and magnesium incendiaries on the timber and paper houses of Asakusa. Officially, 83,793 Japanese were killed, 40,918 injured and 265,171 buildings destroyed. The same month, German Village aided the fire raids on Dresden. By the time Germany surrendered in May 1945, US and British raids had destroyed 45% of German housing. And, as Davis wryly observes: "Allied bombers pounded into rubble more 1920s socialist and modernist utopias than Nazi villas." Mendelsohn was the architect of some of the very best of these white, concrete dreams. Dugway, Davis argues, "led the way to the deaths of, say, two million Axis civilians", and German Village remains "a monument to the self-righteousness of punishing 'bad places' by bombing them". There is no doubt that Nazi Germany and Imperial Japan had to be defeated; but did the Allies really need German Village, Japanese Village and the refined architectural efforts of Mendelsohn and Raymond? At the fiery dawn of the 20th century, beneath the civilised, enlightened facades of Britain and the US, as well as Germany and Japan, was a desire for expansion, destruction and terrible revenge. Sitting on the sun-deck of Mendelsohn's pavilion at Bexhill-on-Sea, this axis of modern evil seems so very far removed, as far away, in fact, as the sole surviving "rent barrack" of German Village, Utah. · Dead Cities: A Natural History by Mike Davis, The New Press, £16.95.
June 1st 1584: Perth's Hammermen

The Hammermen's Incorporation of Perth embraced a whole collection of crafts including silversmiths, goldsmiths, clocksmiths and watchmakers, gunsmiths, locksmiths, blacksmiths and others. At the head of the organisation was the Deacon, who was responsible not only for the testing and quality of the articles made by the various crafts, but also for such matters as the employment of apprentices and the times and places of public selling. He had the authority to punish, either by fines or, in extreme cases, by expulsion. Another important member was the boxmaster or treasurer, who had responsibility for ‘Saint Eloyis Box’, a wooden chest which contained money, valuables and securities of all kinds. As a rich and powerful organisation, the Hammermen were able to safeguard the interests of their group and also to provide welfare and education for the widows and orphans of their former members. This was considered to be an important aspect of their work. From 1584, all business transacted by the Hammermen was recorded in the Hammermen's Book. The book itself was provided by “William Lauder, burgess of Perth, as maister joynit with the Craft of the Hammermen as buik binder and pearchment maker.” From 1589, the Hammermen had their own seats in the gallery of St John's Kirk. There are few records of the early work of silversmiths or goldsmiths, and the unsettled political conditions of the time were not conducive to the production of fine silverware. It was not until the late 18th century that there was something of a boom in the trade. Even so, it continued to be strictly functional: plain communion cups, plain spoons and teapots with the minimum of decoration. It was early in the 19th century before more ornate work appeared. This came in particular from Robert Keay the elder and his nephew Robert Keay the younger, who between them carried on business for sixty-five years, until 1865. 
Other well-known names were John Pringle, John Hogg, Charles Murray, John Scott, Charles Sheddon and David Greig the elder. But already there were two factors which between them were to kill the craft of silversmithing in Perth. In 1836 an Act was passed requiring all silver to be assayed in Glasgow or Edinburgh. This had the effect of concentrating production in these cities. Then by 1850 the new, cheaper process of electro-plating took over much of the lower end of the market. Perhaps the last successful Perth silversmith was David MacGregor, who was active from 1860 until his death in 1908.
The common name for sedums is Stonecrop. There is a Stonecrop Nursery in eastern New York, which was the first garden created by Frank Cabot. Frank created the Garden Conservancy, an organization which strives to preserve some of our exceptional gardens for posterity. Each year it also runs its Open Days Program, which opens gardens to the public throughout the country. Frank Cabot went on to create Les Quatre Vents, an outstanding garden at his family home in Quebec. There are two sedums which most gardeners grow. One is Sedum acre, a tiny low-growing groundcover plant with bright yellow flowers; it is being used effectively in the Peace Garden in the plaza between the library and city hall. The other is Autumn Joy, which is in bloom now and will continue to provide color for months to come. Some references say it requires full sun. Not so! I have it in three locations in my garden. I have several plants growing out of a south-facing wall, but there are tall oaks and maples to the south, so the only time it gets direct sun is in spring before the oaks leaf out; the rest of the year it is dappled light. Another plant is in the east-facing bed on top of my long stone wall, where it gets only morning sun. The third plant is in my shrub-perennial border, where it gets a bit of sun midday. Mine is the ordinary run-of-the-mill Autumn Joy, but there are several cultivars offered in nurseries. Among these are Crimson; Iceberg, which has white flowers; Autumn Fire; and Chocolate Drop, growing only eight inches tall with brown leaves and pink flowers. There are two native sedums: Roseroot, Sedum rosea, and Wild Stonecrop, Sedum ternatum. A third, Wild Live-forever, Sedum telephioides, grows on cliffs and rocks in Pennsylvania and southward.
- science (n.) - c.1300, "knowledge (of something) acquired by study," also "a particular branch of knowledge," from Old French science, from Latin scientia "knowledge," from sciens (genitive scientis), present participle of scire "to know," probably originally "to separate one thing from another, to distinguish," related to scindere "to cut, divide," from PIE root *skei- (cf. Greek skhizein "to split, rend, cleave," Gothic skaidan, Old English sceadan "to divide, separate;" see shed (v.)). Science, since people must do it, is a socially embedded activity. It progresses by hunch, vision, and intuition. Much of its change through time does not record a closer approach to absolute truth, but the alteration of cultural contexts that influence it so strongly. Facts are not pure and unsullied bits of information; culture also influences what we see and how we see it. Theories, moreover, are not inexorable inductions from facts. The most creative theories are often imaginative visions imposed upon facts; the source of imagination is also strongly cultural. [Stephen Jay Gould, introduction to "The Mismeasure of Man," 1981] Modern sense of "non-arts studies" is attested from 1670s. The distinction is commonly understood as between theoretical truth (Greek episteme) and methods for effecting practical results (tekhne), but science sometimes is used for practical applications and art for applications of skill. Main modern (restricted) sense of "body of regular or methodical observations or propositions ... concerning any subject or speculation" is attested from 1725; in 17c.-18c. this concept commonly was called philosophy. To blind (someone) with science "confuse by the use of big words or complex explanations" is attested from 1937, originally noted as a phrase from Australia and New Zealand.
Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers' actions are coordinated. While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology -- traffic lights and stop signs. 
Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach.
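The reservation idea above can be caricatured in a few lines. This is our own toy sketch, not the authors' actual protocol or simulator; the tile/timestep granularity and all names are invented. The intersection is divided into space-time tiles, and a vehicle's request is granted only if every tile along its planned trajectory is still free:

```python
# Toy sketch of reservation-based intersection control (illustrative only):
# the intersection manager grants or rejects requests for space-time tiles.

class IntersectionManager:
    def __init__(self):
        self.reserved = set()  # (tile, timestep) pairs already granted

    def request(self, vehicle_id, path):
        """path: iterable of (tile, timestep) pairs the vehicle would occupy.
        Grant the reservation atomically, or reject it, in which case the
        vehicle must slow down and ask again with a later trajectory."""
        cells = set(path)
        if cells & self.reserved:
            return False          # conflict: effectively a red light
        self.reserved |= cells    # no conflict: effectively a green light
        return True

im = IntersectionManager()
# A northbound car crossing tiles 0, 1, 2 at timesteps 5, 6, 7 is granted:
print(im.request("car-A", [(0, 5), (1, 6), (2, 7)]))  # True
# A westbound car needing tile 1 at the same timestep 6 is rejected:
print(im.request("car-B", [(3, 6), (1, 6)]))          # False
```

Granting everything (an always-empty reservation table) degenerates to open road; granting one approach at a time mimics a traffic light, which is the sense in which a reservation scheme can subsume existing control methods.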
Grade-2 English Games Sight Words is a simple vocabulary and memory exercise for kids. It is designed to introduce kids to some common Dolch sight words. Early readers recognize sight words from having memorized them. In this exercise, kids will learn to recognize and recall Dolch sight words by flipping the cards and making pairs of matching words. Kids will enjoy themselves as they sharpen their memory skills and build their vocabulary. This will also help kids improve their reading skills.
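The flip-and-match mechanic described above amounts to very little logic. This is an illustrative sketch, not the app's actual code; the word list is an arbitrary sample of Dolch words:

```python
# Minimal sketch of a flip-and-match memory exercise (illustrative only).
import random

def make_board(words):
    """Each sight word appears twice, face down, in random order."""
    cards = words * 2
    random.shuffle(cards)
    return cards

def flip_pair(board, i, j):
    """One turn: flip two cards; they stay matched only if the words agree."""
    return board[i] == board[j]

board = make_board(["the", "and", "said", "have"])
# A turn succeeds exactly when the two flipped cards show the same word.
```

The child's task is to remember where each word was seen, which is why repeated play builds both recall and word recognition.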
3.1 Systematic Errors

Systematic errors are uncertainties in the bias of the data. A simple example is the zeroing of an instrument such as a voltmeter. If the voltmeter is not correctly zeroed before use, then all values measured by the voltmeter will be biased, i.e., offset by some constant amount or factor. However, even if the utmost care is taken in setting the instrument to zero, one can only say that it has been zeroed to within some value. This value may be small, but it sets a limit on the degree of certainty in the measurements and thus on the conclusions that can be drawn. An important point to be clear about is that a systematic error implies that all measurements in a set of data taken with the same instrument are shifted in the same direction by the same amount - in unison. This is in sharp contrast to random errors, where each individual measurement fluctuates independently of the others. Systematic errors, therefore, are usually most important when groups of data points taken under the same conditions are being considered. Unfortunately, there is no consistent method by which systematic errors may be treated or analyzed. Each experiment must generally be considered individually, and it is often very difficult just to identify the possible sources, let alone estimate the magnitude of the error. Our discussion in the remainder of this chapter, therefore, will not be concerned with this topic.
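The contrast drawn above between a bias and a fluctuation is easy to see numerically. The following sketch uses invented values: a voltmeter mis-zeroed by a constant offset, plus independent random noise on each reading:

```python
# Numerical illustration (values invented): a systematic error shifts every
# reading in unison, while random errors fluctuate independently.
import random

true_value = 5.00          # the quantity being measured, in volts, say
zero_offset = 0.12         # mis-zeroed voltmeter: the systematic error
random.seed(42)

readings = [true_value + zero_offset + random.gauss(0, 0.05)
            for _ in range(1000)]

mean = sum(readings) / len(readings)
# Averaging beats down the random scatter, but the mean stays biased by
# roughly the full zero offset; no amount of repetition removes it.
print(round(mean - true_value, 2))   # close to 0.12
```

This is why repeating a measurement many times helps with random errors but does nothing for a systematic one: only recalibrating the instrument (or estimating the offset by an independent method) can remove the bias.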
Did You Know... The Facts About HPV?
The Abramson Cancer Center of the University of Pennsylvania
Last Modified: January 13, 2008
- HPV is one of the most common sexually transmitted diseases in the world.
- By age 50, about 80% of women have been infected by some type of HPV.
- Most HPV infections do not cause any symptoms, therefore people are unaware that they are infected.
- 70-80% of HPV infections resolve spontaneously because our immune system fights them off.
- HPV is found in 99% of cervical cancers.
- There are 100 strains of HPV; 12 or more are classified as “high risk” and are linked to cancer.
- Most women with HPV do NOT develop cervical cancer.
- About 20% of women infected with HPV will develop chronic infection, and 2% of these will develop cervical cancer.
- Two strains of HPV are responsible for causing genital warts.
- Sexual intercourse is not necessary for transmission of HPV; skin-to-skin genital/genital, genital/anal and possibly genital/oral contact is sufficient for transmission.
- Use of condoms can reduce the risk of transmission, but it cannot prevent all transmission because some genital tissue remains uncovered.
- HPV is the cause of many high- and moderate-grade abnormal pap smears (CIN 2/3) and some low-grade lesions (CIN 1).
- HPV is estimated to cause:
  - 70% of anal cancers
  - 50% of vaginal and vulvar cancers
  - 50% of penile cancers
  - 20% of head and neck cancers
- Researchers are looking at vaccinating men as well as women.
TV is one of the most popular forms of media. As much as there are many people who might want to dispute this, a majority of people still find themselves drawn to the entertainment box. Radio, which was more popular in the previous generation, is slowly losing its glory. This is mostly because there are numerous gadgets that can be used to listen to music. Podcasts, for example, can be used in place of the radio, and they are increasingly gaining popularity on social media. Some of the basics of podcasts include the following. The word “podcast” is a combination of “broadcast” and “iPod”; podcasts are a kind of episodic digital media. Subscribers can download this media via web syndication. Podcasts usually take the form of audio, although there are video podcasts where slideshows are used. In the past they were only used by the techie demographic, but the general public soon caught up. Podcast content ranges from culture, sports, music and magic, among many others.

Social Media and Podcasts

Podcasts were introduced almost at the same time as social media. Most people understand social media as fancy web 2.0 terminology related to Facebook and Twitter, a means of opening up communication and interaction between organizations and their followers, fans and audiences, brands and persons. Podcasts allow for wider interaction with people, mostly because of their syndication feeds. Feeds permit users to download podcasts as soon as they have been released and listen to them using a variety of mediums. This is different from the radio, where you have to tune in at a specific time or date. Often, the people who listen to podcasts also become producers, and vice versa, so they end up holding conversations with each other. Podcasting usually calls for active participation and listening.

Content and Freedom

Podcasts empower both the consumer and the producer. 
Listeners can choose the time they want to listen to the content. Anyone with a computer and a microphone can participate, as podcasting does not have too many restrictions. Podcasts have become one of the most distinctive ways of sharing content: they reach a broad audience, carry few restrictions, are easy to use without difficult applications, and are one of the major forms of media that give people a voice to be heard.
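Mechanically, the "feed" mentioned above is just an XML document that a subscriber's app re-reads; each episode is an item carrying the URL of its audio file. Here is a minimal sketch using a made-up feed; real podcast apps read the same RSS structure, but fetch it over HTTP:

```python
# Sketch of how a podcast feed works (the feed XML is a made-up example):
# each <item> has an <enclosure> pointing at the downloadable audio file.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Show</title>
  <item><title>Episode 1</title>
    <enclosure url="http://example.com/ep1.mp3" type="audio/mpeg"/></item>
  <item><title>Episode 2</title>
    <enclosure url="http://example.com/ep2.mp3" type="audio/mpeg"/></item>
</channel></rss>"""

def episode_urls(feed_xml):
    """Collect the audio URL of every episode listed in the feed."""
    root = ET.fromstring(feed_xml)
    return [item.find("enclosure").attrib["url"]
            for item in root.iter("item")]

print(episode_urls(FEED))
# ['http://example.com/ep1.mp3', 'http://example.com/ep2.mp3']
```

A subscriber app simply polls the feed and downloads any enclosure it has not seen before, which is why episodes arrive automatically instead of requiring you to tune in at a set time.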
On April 30, 1803, the United States government made one of the greatest land transactions of all time when it purchased from the French Republic, for $15 million, a piece of real estate extending from the Gulf of Mexico north to Canada, and from the Mississippi River Basin west to the Rocky Mountains. Called the Louisiana Purchase, this transaction added 830,000 square miles of uncharted wilderness to the territory of the United States. The Louisiana Purchase opened the west for settlement by Europeans and Americans and had grave implications for American Indians, who would soon find their ancestral homelands taken from them. It allowed for the extension of slavery, brought an end to French and Spanish domination in Arkansas and allowed a diversity of settlers to develop and perpetuate their own cultures in the six distinct geographic regions of the state. Download the lesson plans and activity sheets for use in your classroom to learn more about the Louisiana Purchase. All documents are in PDF format. ©2013 Department of Arkansas Heritage. All Rights Reserved.
All rights reserved. Printed in the United States of America. No part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording, or otherwise), without the prior permission of the publisher. Requests for permission should be directed to: email@example.com, or mailed to: Cambria Press, 20 Northpointe Parkway, Suite 188, Amherst, NY 14228

Library of Congress Cataloging-in-Publication Data
Lively, Donald E., 1947-
The Constitution, race, and renewed relevance of original intent : reclaiming the lost opportunity of federalism / Donald E. Lively.
Includes bibliographical references and index.
ISBN 978-1-60497-562-8 (alk. paper)
1. Slavery—Law and legislation—United States—History. 2. Race discrimination—Law and legislation—United States—History. 3. Segregation—Law and legislation—United States—History. 4. African Americans—Civil rights—History. 5. Constitutional amendments—United States—History. 6. Constitutional history—United States. 7. Equality before the law—United States. 8. Federal government—United States. I. Title.
HAD Edmund Cartwright thought for one moment of the hardship and misery he was about to unleash on a vulnerable section of society, he would never have sat down to invent the power loom. Cartwright (1743-1823) was a Church of England minister, with a clergyman's love for his fellow man, so he would have been mortified to watch the long, agonising death throes of the hand-loom weaving industry that his invention caused. The 18th century was full of men who excelled in several different fields at the same time, but if anyone deserved the description of polymath, it was Cartwright. Born into a landed family in Marnham, Nottinghamshire, he was one of three brothers who all became nationally famous. John, born in 1740, would be one of Britain's best-known radical politicians, while George, the eldest (born 1739), followed his army career by becoming a fur trapper and explorer in Canada. He earned the sobriquet of “Labrador Cartwright,” and was the first man to bring Eskimos to Britain – a family of five accompanied him back after one voyage and became favourites at Court. Edmund was educated at Wakefield Grammar School and University College, Oxford. He became a fellow of Magdalen College and, while there, made a name for himself as a literary critic and a poet. His mythical poem Armine and Elvira ran to several editions and was described by Sir Walter Scott as “a very beautiful piece.” Cartwright was marked down for the church and in 1772 he was appointed curate of Brampton, near Wakefield, moving, seven years later, to become vicar of Goadby Marwood in Leicestershire. It was here that his ingenuity first surfaced. Attending the sickbed of a young boy who was dying from putrid fever, or typhus, he spotted a tub of yeast in the room. Recalling an old tradition that rotting meat suspended over yeast would become pure and sweet again, he dosed the youngster with it – and cured him. 
He treated several other parishioners in the same way and with the same result, and the treatment was widely adopted by 18th-century medics. Still, Cartwright might have settled for rural obscurity in his rectory had it not been for a chance holiday encounter. (Picture caption: this was Cartwright's first attempt at loom building - before he had seen how others did it.) So, how did a man of the cloth become a man who made cloth? Cartwright tells the story of his invention in his own words, in an interview later in life with a representative of the Encyclopædia Britannica: “HAPPENING to be at Matlock in the summer of 1784, I fell in company with some gentlemen of Manchester, when the conversation turned on Arkwright's spinning machinery. One of the company observed that as soon as Arkwright's patent expired, so many mills would be erected and so much cotton spun that hands would never be found to weave it. To this I replied that Arkwright must then set his wits to work to invent a weaving mill.
Life in our towns in the Middle Ages is the main subject of our work. We want to look at it from all points of view: history, music, clothing, food, law, traditions, markets, churches, building houses, medicine...
- Subjects: Art, Cross Curricular, Design and Technology, Foreign Languages, Geography, History, Media Education, Music, Special Needs Education
- Languages: DE - EN
- Pupil's age: 7 - 16
- Tools to be used: Audio conference, Chat, e-mail, Forum, Other software (Powerpoint, video, pictures and drawings), Video conference, Web publishing
- Aims: The aim of this project is to get to know the history of the Middle Ages in the hometown, area,... read more
- Work process: Life, traditions, music, food, markets, clothes, building houses, churches, law, medicine in the Middle Ages in my town, area, country. Each... read more
- Expected results: Historical competence about the Middle Ages beginning in the home town, area, country and learning about the same historical time... read more
The Immigration History Research Center is home to thousands of feet of archival records that illuminate the immigrant experience, past and present. While these records are available to students and teachers for research in the University of Minnesota’s Andersen Library, exploring the archive from afar is as simple as connecting to the internet. The IHRC’s collection of digitized archival material provides a plethora of resources suitable for a variety of purposes, including the creation of curriculum for K-12 educators interested in migration history. What were characteristics of the immigrant experience? How did immigrants and refugees adjust to their new lives in the United States? Conversely, how did American citizens born in the United States react to the increased diversification of their communities, and learn to live with individuals from different ethnic and cultural backgrounds? Minnesota K-12 Academic Standards in Social Studies expect students of all grade levels to consider how immigrants and citizens alike participate in the civic lives of their communities, and to understand the steps that immigrants take to become United States citizens. Incorporating digitized archival material into lesson plans will provide opportunities for students to engage these questions, and prompt young people to begin considering the many types of common experiences that bring together people from all corners of the globe. In the pages that follow, IHRC staff members have identified digitized images from the collections of the Ukrainian Folk Ballet of the Twin Cities; Immigration and Refugee Services of America; and the International Institute. Founded in 1919, the International Institute was established to provide various services for migrants who recently arrived to the United States. With branches in Minneapolis/Saint Paul, St. Louis, San Francisco, and other major U.S. 
cities, the International Institute continues to address the needs of the immigrants and refugees who settle in the United States. The images have been organized thematically in order to initiate student discussion related to Minnesota K-12 Academic Standards and Benchmarks that address the study of migration. By incorporating these archival records into lesson plans, students will be able to think critically about the immigrant experience. Furthermore, working directly with primary sources will enable students to practice and develop research skills that will become increasingly important as they progress in their studies. (Image: English language class.) To discover more digital records for use in K-12 lesson plans, visit the IHRC's portal for digital resources via the University of Minnesota's UMedia Archive: http://ihrc.umn.edu/research/digitalsources.php
Self-Reliance in a Power Outage
People do not usually think of a power outage in the same light as an earthquake. However, when the power is out for a long period of time, citizen requests for fire, police, medical, and other public services will begin to mount. At some point, the increased demand for services could result in delayed response times. For this reason, every citizen should learn to be self-reliant in an emergency. And even though power outages may only last a few hours, individuals and organizations should be prepared to be without assistance for 72 hours or longer. To help individuals prepare for an emergency, the City of San Mateo, State of California, American Red Cross, Federal Emergency Management Agency, and Pacific Gas and Electric Company have provided information on what to do during a power outage or other emergency.
- Check Circuit Breakers. If your power goes out, check your home's circuit breakers or fuses first. Your power could be out because a circuit has tripped or a fuse has blown.
- Report Electrical Outages. See if the lights in your neighborhood are off. Contact the local electric utility to report an outage.
- Power Lines. If you can see any power lines on the ground, stay at least 10 feet away from them, as electricity might still be flowing through the lines.
- Sensitive Appliances. Protect appliances from possible power surges when electricity is restored. Unplug appliances and computers, if possible, and turn off non-essential lights.
- Keep Food Cold. Keep refrigerator and freezer doors closed as much as possible to help prevent food spoilage. Refrigerated foods should remain safe to eat for four hours. Food in a closed freezer can stay frozen for up to two days. If in doubt, throw it out.
- Dry Ice. Add dry or block ice to the freezer to help keep food frozen. Never handle dry ice with your bare hands or place it directly on top of food.
- Water. Discontinue non-essential water usage. Do not drink cloudy or dirty water. 
Don't be alarmed if the chlorine level is higher than normal. Notify water officials of low or no water pressure.
- Stay Cool. During hot days, stay cool indoors and drink plenty of fluids.
- Check on Neighbors. Check on elderly or medically dependent neighbors.
- Life Support Equipment. If someone in your household uses life support equipment, make arrangements for a back-up power supply.
- Generators. Establish independent, short-term power supplies such as generators or battery-operated devices. If you own a generator, never plug it into any electric outlet in your home. Instead, plug appliances directly into the generator.
- Monitor Radio and Television. Monitor a battery-operated radio or television for current information on the outage.
- Telephones. Keep a telephone that does not depend on electricity; cordless phones will not function during an outage.
- Garage Doors. Know how to manually release and open any electric doors, such as garage doors.
- Security Gates. Find out the steps needed to open and close security gates.
- House Numbers. Ensure house numbers are readily visible from the street for emergency response.
- Anticipate Traffic Delays. Intersections should be treated as four-way stops when traffic lights are out. Anticipate long traffic delays in areas where the power is out.
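The food-safety thresholds above (about four hours for a refrigerator, about two days for a closed freezer) can be sketched as a small decision helper. This is a hypothetical illustration of the guideline, not an official rule:

```python
# Sketch of the food-safety guidance above (hypothetical helper):
# refrigerated food is good for ~4 hours without power; food in a
# closed freezer stays frozen for up to ~2 days (48 hours).
def food_still_safe(hours_without_power, storage):
    """Return True if food is likely still safe per the guideline."""
    limits = {"refrigerator": 4, "freezer": 48}  # hours
    if storage not in limits:
        raise ValueError("storage must be 'refrigerator' or 'freezer'")
    return hours_without_power <= limits[storage]

print(food_still_safe(3, "refrigerator"))  # True
print(food_still_safe(60, "freezer"))      # False: "if in doubt, throw it out"
```

The function errs on the simple side; in practice, "if in doubt, throw it out" overrides any calculation.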
Small Planet Around Kepler-37
NASA's Kepler mission scientists have discovered a new planetary system that is home to the smallest planet yet found around a star similar to our sun. The planets are located in a system called Kepler-37, about 210 light-years from Earth in the constellation Lyra (the general direction of the star Vega). The smallest planet, Kepler-37b, is slightly larger than our moon, measuring about one-third the size of Earth. It is smaller than Mercury, which made its detection a significant challenge. The moon-size planet and its two companion planets were found by scientists with NASA's Kepler mission, whose goal is to find Earth-size planets in or near the habitable zone, the region in a planetary system where liquid water might exist on the surface of an orbiting planet. However, while the star Kepler-37 may be similar to our sun, the system appears quite unlike the solar system in which we live. Kepler-37 is a yellow dwarf, G-type star just like ours. Astronomers think Kepler-37b does not have an atmosphere and cannot support life as we know it. The tiny planet is almost certainly rocky in composition. Kepler-37c, the closer neighboring planet, is slightly smaller than Venus, measuring almost three-quarters the size of Earth. Kepler-37d, the farther planet, is twice the size of Earth. The artist's concept image depicts the new planet, dubbed Kepler-37b. The planet is slightly larger than our moon, measuring about one-third the size of Earth. Kepler-37b orbits its host star every 13 days at less than one-third of Mercury's distance from the sun. The estimated surface temperature of this smoldering planet, at more than 800 degrees Fahrenheit (about 700 kelvin), would melt the zinc in a penny. The first exoplanets found orbiting other stars were giants. As technologies have advanced, smaller and smaller planets have been found, and Kepler has shown that even Earth-size exoplanets are common. 
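The 13-day orbit quoted above pins down the planet's distance from its star through Kepler's third law, a³ = GM·T²/(4π²). A minimal sketch, assuming a stellar mass of about 0.8 solar masses (an illustrative figure; the article says only that the star is slightly smaller and cooler than the sun):

```python
import math

# Kepler's third law: a^3 = G*M * T^2 / (4*pi^2).
G_M_SUN = 1.327e20   # gravitational parameter of the sun, m^3/s^2
AU = 1.496e11        # astronomical unit, m

def semi_major_axis(period_days, stellar_mass_suns):
    """Orbital distance (m) from period and stellar mass (assumed value)."""
    T = period_days * 86400.0  # days -> seconds
    return (G_M_SUN * stellar_mass_suns * T**2 / (4 * math.pi**2)) ** (1 / 3)

a_au = semi_major_axis(13, 0.8) / AU
print(f"Kepler-37b: ~{a_au:.2f} AU (Mercury orbits at ~0.39 AU)")
```

With these assumed numbers the result comes out near 0.10 AU, consistent with the article's "less than one-third of Mercury's distance."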
"Even Kepler can only detect such a tiny world around the brightest stars it observes," said Jack Lissauer, a planetary scientist at NASA's Ames Research Center in Moffett Field, Calif. "The fact we've discovered tiny Kepler-37b suggests such little planets are common, and more planetary wonders await as we continue to gather and analyze additional data." There are a vast number of stars, and even more planets. Kepler-37's host star belongs to the same class as our sun, although it is slightly cooler and smaller. All three planets orbit the star at less than Mercury's distance from the sun, suggesting they are very hot, inhospitable worlds. Kepler-37c and Kepler-37d orbit every 21 days and 40 days, respectively, making the inner region near the host star fairly crowded. "We uncovered a planet smaller than any in our solar system orbiting one of the few stars that is both bright and quiet, where signal detection was possible," said Thomas Barclay, Kepler scientist at the Bay Area Environmental Research Institute in Sonoma, Calif., and lead author of the new study published in the journal Nature. "This discovery shows close-in planets can be smaller, as well as much larger, than planets orbiting our sun." The research team used data from NASA's Kepler space telescope, which simultaneously and continuously measures the brightness of more than 150,000 stars every 30 minutes. When a planet candidate transits, or passes in front of, the star from the spacecraft's vantage point, a fraction of the star's light is blocked. This causes a dip in the brightness of the starlight that reveals the transiting planet's size relative to its star. The size of the star must be known in order to measure the planet's size accurately. To learn more about the properties of the star Kepler-37, scientists examined sound waves generated by the boiling motion beneath the surface of the star. 
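The transit measurement described above follows a simple relation: the fractional dip in brightness ("depth") equals (Rp/Rs)², so the planet's radius is the stellar radius times the square root of the depth. A sketch with illustrative numbers (the ~19 ppm depth below is an assumed figure, not taken from the article):

```python
import math

# Transit photometry: depth = (Rp/Rs)^2, hence Rp = Rs * sqrt(depth).
R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def planet_radius_earths(depth, stellar_radius_suns):
    """Planet radius in Earth radii from transit depth and stellar radius."""
    rs_km = stellar_radius_suns * R_SUN_KM
    return rs_km * math.sqrt(depth) / R_EARTH_KM

# A star three-quarters the sun's radius (like Kepler-37) with an assumed
# ~19 ppm dip hosts a planet about one-third of Earth's size.
print(round(planet_radius_earths(19e-6, 0.75), 2))  # → 0.36
```

This also shows why the stellar radius matters: the planet's size scales linearly with it.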
They probed the interior structure of Kepler-37 just as geologists use seismic waves generated by earthquakes to probe the interior structure of Earth. The science is called asteroseismology. Asteroseismology, also known as stellar seismology, is the study of the internal structure of pulsating stars through the interpretation of their frequency spectra. Different oscillation modes penetrate to different depths inside the star. These oscillations provide information about the otherwise unobservable interiors of stars, much as seismologists study the interior of Earth and other solid planets through earthquake oscillations. The sound waves travel into the star and bring information back up to the surface. The waves cause oscillations that Kepler observes as a rapid flickering of the star's brightness. Like bells in a steeple, small stars ring at high tones while larger stars boom in lower tones. The barely discernible, high-frequency oscillations in the brightness of small stars are the most difficult to measure. This is why most objects previously subjected to asteroseismic analysis are larger than the sun. With the very high precision of the Kepler instrument, astronomers have reached a new milestone. The star Kepler-37, with a radius just three-quarters that of the sun, is now the smallest bell in the asteroseismology steeple. The radius of the star is known to 3 percent accuracy, which translates to exceptional accuracy in the planet's size. For further information, see Kepler-37b. Artist's concept image: NASA/Ames/JPL-Caltech.
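The 3 percent stellar-radius accuracy quoted above feeds almost directly into the planet-size accuracy, because Rp = Rs·√depth. A sketch of the standard error propagation, combining the two fractional errors in quadrature (the 1 percent depth error below is an assumed figure for illustration):

```python
import math

# Error propagation for Rp = Rs * sqrt(depth):
# (dRp/Rp)^2 = (dRs/Rs)^2 + (d_depth / (2*depth))^2
def planet_radius_frac_error(stellar_frac_err, depth_frac_err):
    """Fractional uncertainty in planet radius."""
    return math.hypot(stellar_frac_err, depth_frac_err / 2)

# 3% stellar radius (from the text) + assumed 1% transit depth:
print(f"{planet_radius_frac_error(0.03, 0.01):.1%}")  # ~3.0%
```

The stellar-radius term dominates, which is why asteroseismology was the key to pinning down Kepler-37b's size.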
At Tebtunis, public bathhouses have been excavated, the oldest dating to the third century BCE. They had showers, stone basins, and a stove to heat the bathwater. While a few bathrooms and tubs have been discovered, most Egyptians seem to have been content with cleaning themselves by aspersion or by a dip in a canal or the river. They had wash basins and probably filled them with a natron solution from jugs with spouts, and used sand as a scouring agent. They washed after rising and both before and after the main meals. As mouthwash they used another salt solution.
Definition of Amylo-
Amylo-: (Amyl- before a vowel.) A prefix pertaining to starch. From the Greek amylon, meaning starch.
Last Editorial Review: 6/14/2012
Web page: http://www.amperefitz.com
Ampere's 1824 Laws
These relative motion laws greatly simplify all of science: these laws are essentially Ampere's simple 1824 long-wire laws with a frequency modification. These are universal laws that unify all the forces by seeing all forces as space-time creations, similar to the way it's done in general relativity. These laws, though, visualize different space-time intervals (different gauges) being created at various different spin/orbit frequencies. Despite the fact that quantum theory does not see our type of spin causing angular momentum in the microcosm, these laws show it is there nevertheless, but at a different space-time interval (different gauge, i.e., different spin/orbit frequency level). The "A" Laws [The reason these "A" Laws work relates to the superposition principle: there is no repulsive force between in-phase waves, but a repulsive force (space) is always generated by out-of-phase waves.] You must also understand Dr. Milo Wolff's concept that particles (and time) are manufactured by Spinning, Scalar, Standing Wave Resonances with an immense, finite number of similar surrounding SSSWRs (Mach's principle). Force (space) exists not DIRECTLY via scalar resonances but because of individual vector spin and orbital resonances between two similar Spinning, Scalar Standing Wave Resonances. Remember, these "A" Laws (Ampere or Aufbau) have unified ALL the forces, so these are now the NEW laws for everything, from the smallest spinning particle to the largest spinning supercluster of galaxies, even where high relativistic speeds and masses are encountered. For simplicity, we must return to the Bohr concept of the electron. I have shown why in numerous other papers. *The 1st 
"A" Law shows us where all SSSWRs in relative motion produce the least space-time between themselves: the space-time interval is created the least between any two SSSWRs whose closest sides "see" themselves spinning or moving on parallel paths in the same direction at the same frequency (like gears meshing) or a close harmonic thereof. You can also say these two objects will attract each other. *The 2nd "A" Law shows us where all SSSWRs in relative motion produce the most space-time between themselves: both space and time are created the most between any two SSSWRs whose closest sides "see" themselves spinning or moving on parallel paths in opposite directions at the same frequency (like gears clashing) or a close harmonic thereof. You can also say these two objects will repel each other. I use the quoted word "see" to emphasize the particular spacetime realm in which these entities actually find themselves, although this will NOT be the way it is seen from our particular spacetime reference-frame realm. Of great importance in the two preceding laws is that these are frequency laws, and they work separately for each separate spin/orbit frequency level, which means these individual wave-particles must "see" themselves doing these things from their viewpoint in their local gauge environment. It does not matter how some other spin/orbit frequency level views these things, because space and time, and indeed the average space-time interval, are entirely different for each different spin/orbit frequency level (gauge). These two laws look equal and opposite but they are not: the 1st "A" Law "locks on" while its opposite 2nd sister law never does. This is because the total force is generally centralized, and you can feel this 1st "A" Law "lock on" when two magnets come together. These two laws result in limits of aggregation being established all throughout this universe: this is why there are limits to the size of atoms and limits to the size of stars as well. 
*The Aufbau or Ampere Corollary: The aforementioned forces, or space-time intervals, between two SSSWRs will vary proportionally with the cosine of the angle of their paths. And they will have a torque that will tend to make the paths parallel and to become oriented so that SSSWRs on both paths will be traveling in the same direction. All SSSWRs that "see" themselves traveling in the same direction on parallel paths at the same frequency will attract, and/or space and time at that frequency between them is created the least. All SSSWRs that "see" themselves traveling in opposite directions on parallel paths at the same frequency will repel, and/or space and time between them at that frequency increases or is created the most. And please don't forget this: Why electrons, stars & galaxies repel each other. Remember, we have completely chucked out all those invisible forces you are familiar with, and all we have now are these two "A" Laws. Please remember, in this new "big picture" of everything, ALL FORCES ARE NOW UNIFIED, so there are no such things as gravity, magnetic lines of force, or plus and minus charges, or for that matter even the strong force. Please pay particular attention to the following. Electrons can exhibit either an attraction or a repulsion when they are "locked" spin up or spin down on orbitals, such as like or unlike charges or like or unlike poles, OR they may even display a gyroscopic type of repulsive behavior when they are "free". Our "A" Laws show us why this is so, and in the next six paragraphs you have the best explanation of why electrons and even stars & galaxies repel each other. Let's look at these free electrons first: they spin and hence they have inertial qualities, and this includes gyroscopic inertia, which always provides a force 90 degrees to any external force acting on such a spinning item. Completely forget about charge now and only look at our new "A" Laws and what they say. 
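The cosine dependence stated in the corollary above can be sketched as a toy model. To be clear, this is only a literal-minded illustration of the text's own claim, not an established physics formula:

```python
import math

# Toy model of the corollary above: the claimed force between two entities
# varies with the cosine of the angle between their paths. By the text's
# convention, parallel same-direction motion (angle 0) gives maximal
# attraction, antiparallel motion (angle pi) gives maximal repulsion.
def relative_force(angle_radians, magnitude=1.0):
    """Positive = attraction, negative = repulsion (text's sign convention)."""
    return magnitude * math.cos(angle_radians)

print(relative_force(0.0))      # 1.0  (parallel: full attraction)
print(relative_force(math.pi))  # -1.0 (antiparallel: full repulsion)
```

At 90 degrees the toy force vanishes, matching the corollary's claim that the torque drives paths toward parallel alignment.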
The 1st "A" Law tells us that there is a possibility that two free electrons can attract each other, providing that any portion of their closest sides is spinning in the same direction at the same frequency. This means either their sides can be spinning in the same direction or they can be lined up so that both of their poles are spinning in the same direction: any such two electrons will attract each other (magnetism; also sigma and pi bonding). Then we see that there is something else: this torque twisting force - on BOTH free items - depends on the cosine of the angle of their respective spin planes. As this force begins to act, it in turn causes this 90-degree gyroscopic torque to twist both of those totally free electrons away from this initial attracting position, doesn't it? So because of this gyro torque, two free electrons can never remain in a full attracting position, and they will therefore be forced to stay more in a repelling position. Therefore free electrons will always end up repelling each other, and this repelling is not explained by using this thing called charge: it is explained only by simply using global inertial qualities and our new global "A" Laws. The above six paragraphs explain not only why electrons repel each other but also why any two perfectly free similar spinning SSSWRs of the same size must repel each other. So now you know why both electrons and galaxies stay well away from each other. This is Einstein's cosmological constant. 
Something somewhere has to be "locked" in place and synchronized in frequency (such as the electron's spin with another electron's spin) or a close subharmonic to get any kind of attracting force. Yes, the proton attracts an electron, but instead of charge please see it this way: when two up quarks combine with one down quark to form a proton, the two up quarks are able to synchronize with the electron's spin frequency. This is why aggregations come together (gravity), and larger aggregations come together and accumulate, because as these things grow in size there are more things "locked" in place, strengthening the attractive force of the 1st "A" Law. Once we knew about quarks, we should have realized how those two up quarks in the proton are set up spin up-spin down (the name "up quark" does not signify orientation). Those two spin up-spin down up quarks are spinning - in the same equatorial plane - at a higher frequency, but all up quarks spin at a harmonic of the electron's spin frequency, allowing a spin-up and a spin-down electron to be attracted to them in the same equatorial plane. We will soon know even more about the attractive quark strong-force binding functions. Attraction is always a synchronized frequency attraction, and it is not simply the old idea of plus and minus charges. All attractions in this theory must be synchronized frequency attractions. Both light and inertial mass are caused by these synchronized frequency attractions. As quantum theory shows us, the orbital of an electron on a distant star goes down a certain amount while the orbital of the electron receiving this quantum of energy - in your eye - goes up the exact same amount. But what quantum mechanics does not tell you is that these two energy-exchanging orbitals must be in the same exact plane. 
Not only that, but each orbital must be a mirror image of the other, with the electrons in each rotating and revolving in exactly opposite directions, so that at the time the energy exchange takes place the closest sides of both electrons are going in the same direction. You can see from this that this energy change is merely a MOMENTARY DIRECT PULL from the electron on the star to the electron in your eye. These electrons will make many revolutions, rotations, and wobbling oscillations during each change of those orbitals, giving you the light that you see. If two distant quarks are lined up so that their closest sides are going in the same direction, as with the two aforementioned electrons, then they too will momentarily bind with each other - even from a vast distance - and cause what we see as inertial mass. But since the quarks in the proton and neutron tri-quark entities do not oscillate and wobble quite like the electron, this pull of the two quarks is a steady momentary binding pull where BOTH quarks are pulled away from the other two quarks but NO PERMANENT ENERGY CHANGE is made in either tri-quark entity (neutron or proton). When you spin a flywheel and notice the gyroscopic inertia, you should also notice that the gyroscopic torque that is always 90 degrees to the axis of rotation can also be seen as a linkage of the rim of the rapidly spinning flywheel to a path projected in the sky (macrocosm surroundings). The rim tries to stay in this path. This is showing you that you do have an absolute reference frame, which is Mach's principle. Billions of quarks in BOTH the flywheel and the macrocosm are being momentarily extended more than normal, thus giving you this added gyroscopic inertia. You might have to read the long TOE at http://www.rbduncan.com/TOEbyFitzpatrick.htm to get the full picture of what happens when you crank up a gyroscope or a flywheel or ride a bicycle and produce gyroscopic inertia. 
It's similar to the reason you need cyclic pitch on a helicopter. When a helicopter moves forward, the blades on one side travel through the air faster than the blades on the other side, and this tries to tip the helicopter over. (Igor Sikorsky had to invent cyclic pitch to prevent this.) The same thing happens to certain quarks whose rims line up with the rim of the gyroscope, flywheel, or bicycle wheels. The speed at which these items are turning - in respect to the macrocosm - now adds to portions of the quark rim speed, which before was close to the speed of light and now gets even closer to the speed of light (becoming more massive, hence at a higher frequency). So you are moving up an asymptotic curve close to that unsurpassable speed of c. And this - even with a minuscule number of quarks involved - gives us this gyroscopic inertia. It does this because the mass of these few quarks increases tremendously as portions of their rim speed approach the speed of light. As Einstein has shown us, mass increases with speed, and especially increases on that asymptotic portion of the curve. Of available electrons, only the smallest fraction link with others a distance away to transfer light and heat. The same holds for the spinning quarks that cause gyroscopic inertia. All spinning quarks link to cause inertial mass. All these binding linkages are momentary, with the electron's oscillations causing a permanent transfer of energy and the various momentary quark bindings causing inertial mass. This could be seen - in gyroscopic inertia - as only a temporary transfer of inertial mass. But if you could increase our surroundings - as will be the case when our Milky Way galaxy finally collides with the Andromeda galaxy - then anyone here on earth will find that inertial mass, gyroscopic inertia, and centrifugal force have all become stronger with the more crowded surroundings. 
Now let's go to the stars, and you will see the same "A" Laws apply there as well; as you can see, these too will always have to remain in a repelling position with each other. Recently Perlmutter discovered this acceleration and showed we must have Einstein's cosmological constant - a repulsive force - between all the stars and galaxies. Scientists have recently been wracking their brains to figure out why we have Perlmutter's acceleration, because nothing in our present science has even predicted such a thing. But read those preceding sentences again! Now I hope you can finally see that our "A" Laws tell you exactly why we have Einstein's "cosmological constant" not only in the sky but in the microcosm as well. And they tell you why we have gravity too. Your present science doesn't even do this. The reason these "A" Laws work is that this universe is built on an extraordinarily simple principle via an endless supply of vector wave resonances producing lower-frequency spherical standing wave, scalar wave resonances that, in turn, produce space-time by spinning, orbiting, and precessing. A minimum of space (at that particular frequency) is produced between the closest sides of spinning entities that are in the same scalar phase. Scalar phase is more like a movie frame than voltage phase, which pertains to a waveform. These "A" Laws show us the production of the most important vector forces between the closest sides of such spinning spherical resonances and in the direction of the axis of each spin. There are also vector forces via orbits and via spin and orbital precessions. This universe equalizes the energy vector-force input to vector-force output of these scalar wave resonances by balancing them on specific spin and orbital geodesics. 
These vector forces, in turn, combine to produce lower-frequency, hence lower-energy, scalar resonances, which in turn spin, precess, and orbit, producing still lower-frequency space-time and its related vector forces, and this goes on and on: thus is our universe built from the microcosm to the macrocosm, and it may continue indefinitely, because higher-frequency waves would always be producing lower-frequency, lower-energy scalar wave resonances, and they, in turn, would be producing even lower-energy, lower-frequency resonances. This seems to be an infinite-frequency universe, with each spin/orbit frequency having inertial and gyroscopic qualities, yet with each spin/orbit frequency having its own distinct symmetry laws.
Daniel P. Fitzpatrick Jr.
Fitzpatrick's website is at http://www.amperefitz.com
Philosophy and Religion
The Philosophy and Religion Department at Montclair State University is home to the "big questions": the nature of truth, knowledge, art, morality, social justice, life, death, God, the universe, and Being itself. In our department, you will learn what human beings have thought about these big questions throughout history, and how they have sought to live their lives in relation to the answers. You will also be encouraged to address these questions yourself and make them relevant to your own life. The discipline of Philosophy puts you in conversation with thinkers from ancient times to the present who have asked questions such as: What is the nature of truth and justice? Is there a natural law that governs the universe? How can I know reality—through reason or through sense experience? How is my mind related to my body? How do I make ethical judgments when there is no clear "right" or "wrong" answer? What makes something beautiful? Is democracy always the best form of government? The study of Religion aims to understand how people have lived out their central beliefs about this world and the next, the secular and the sacred, humanity and the divine. In itself, religion has been an arena for struggling with questions of meaning and reality. It has also been a powerful influence in law, government, family life, and the arts. One cannot adequately understand human experience or the clash of nations and empires without attending to the roles played by religion. For more information, contact:
Information for Students
Welcome to the Flint Regional Science Fair! We look forward to seeing you at the Fair in March. Taking part in a science fair is fun, educational, and rewarding. This part of our web site provides information and links that can help you get started, conduct your research, and enter the Flint Regional Science Fair. The Fair is held every spring. This means you should begin planning in the fall and winter prior to the Fair to ensure you pick a good research topic and have plenty of time to do a good job and present a quality project. Parents, teachers, and mentors are important helpers who can identify projects, collect the resources required for your project, and track your progress. Ask your parents and your teachers for assistance. They are your best bet for one-on-one direction and support in your Science Fair experience. Elementary Division and Junior Division projects follow simpler rules than Senior Division projects. Questions? Email firstname.lastname@example.org.
From a hotel room in Vancouver, Krista and Adele Diamond discuss education, cognitive neuroscience, the importance of play, and more. What Adele Diamond is learning about the brain challenges basic assumptions in modern education. Her work is scientifically illustrating the educational power of things like play, sports, music, memorization, and reflection. What nourishes the human spirit, the whole person, it turns out, also hones our minds. An improvisational storytelling class of 5th and 6th graders draws on Adele Diamond's educational philosophy and demonstrates three important executive functions.
Pertinent Posts from the On Being Blog
Previous "On Being" guest Adele Diamond tells a story about meeting the Dalai Lama in Dharamsala, India, at a Mind and Life Institute dialogue. We highlight some of the passages Adele Diamond presented to the Dalai Lama in Dharamsala, including texts from Rabbi Heschel, Bashevis Singer, Rachel Naomi Remen, and Henri Nouwen. A bit of the backdrop for producing a slideshow about executive function. Karen Armstrong prefers Hillel's version; Adele Diamond prefers Jesus' variation. Both take away a call to action. Hear them both. A New York Times article features Adele Diamond's work the weekend before our interview.
Voices on the Radio
Host/Producer: Krista Tippett; Managing Producer: Kate Moos; Associate Producer: Nancy Rosenbaum; Associate Producer: Shubha Bala; Technical Director/Producer: Chris Heagle; Senior Editor: Trent Gilliss; Associate Web Developer: Anne Breckbill
A renowned Tibetan Buddhist teacher shares his thoughts on the meaning of happiness, and how he understands spirituality as "contemplative science." Stuart Brown, a physician and director of the National Institute for Play, says that pleasurable, purposeless activity prevents violence and promotes trust, empathy, and adaptability to life's complications. 
He promotes cutting-edge science on human play, and draws on a rich universe of study of intelligent social animals.
C. Mackenzie Brown, Professor of Religion at Trinity University, One Trinity Place, San Antonio, TX 78212-7200; e-mail firstname.lastname@example.org. Avataric evolutionism is the idea that ancient Hindu myths of Vishnu's ten incarnations foreshadowed Darwinian evolution. In a previous essay I examined the late nineteenth-century origins of the theory in the works of Keshub Chunder Sen and Madame Blavatsky. Here I consider two major figures in the history of avataric evolutionism in the early twentieth century, N. B. Pavgee, a Marathi Brahmin deeply involved in the question of Aryan origins, and Aurobindo Ghose, political activist turned mystic. Pavgee, unlike Keshub, used avataric evolutionism in expounding his nationalistic goals for an independent India. His rationale was bolstered by the idea that India was the fountainhead of all science and civilization. Aurobindo saw in avataric evolutionism a possible key to understanding the involution and evolution of the supreme spirit in the realm of matter as taught in traditional Vedanta. This material-spiritual evolution represented for Aurobindo the necessary knowledge for the true liberation of India, transcending purely political independence. Such knowledge he also saw as the means for the spiritual liberation of the whole of humankind. The processes of involution and evolution he claimed were not in conflict with modern science, and Western evolutionary thinking seems to have inspired many of his own evolutionary reflections, even though in the end he rejected the Darwinian transmutation of species. I conclude with an overview and assessment of recent, post-colonial Hindu assimilations of avataric evolutionism.
Answer each question by referring to the photograph above it. To find the answer, pass the cursor over the photo to see the "title" of the photo.

1. Identify the number of chromatids in this cell during this stage of mitosis.

2. Identify the structure indicated by the arrow:
B) Zona Occludens
C) Junctional complex
D) Mitotic Cell
E) Terminal web

3. Identify the stage of cell division in this photograph.
A) Prophase of mitosis
B) Anaphase I of meiosis
C) Metaphase II of meiosis
D) Anaphase of mitosis

4. In this section through the testes, what is the amount of DNA in the nucleus of the cell noted by the arrow?

5. In this section through the testes, what is the stage of cell division of the cell noted by the arrow:
A) Anaphase of Meiosis
B) Metaphase of Mitosis
C) Prophase of Mitosis
D) Metaphase of Meiosis
E) Prophase of Meiosis

6. Identify the process shown by the arrowhead:
A) Gap junction formation
C) Desmosome formation
D) Spindle apparatus formation
E) Formation of the nuclear membrane

7. Identify the structures noted by the horizontal arrows:
A) Nuclear pore complex
C) Gap junctions
D) Zonula occludens

8. Identify the structures supporting and moving the chromosomes.
A) Actin filaments
D) Intermediate filaments
Copyright (c) Arvin S. Quist

INTRODUCTION TO CLASSIFICATION

THE NEED FOR CLASSIFICATION

A government is responsible for the survival of the nation and its people. To ensure that survival, a government must sometimes stringently control certain information that (1) gives the nation a significant advantage over adversaries or (2) prevents adversaries from having an advantage that could significantly damage the nation. Governments protect that special information by classifying it; that is, by giving it a special designation, such as "Secret," and then restricting access to it (e.g., by need-to-know requirements and physical security measures). This right of a government to keep certain information concerning national security (secrets) from most of the nation's citizens is nearly universally accepted. Since antiquity, governments have protected information that gave them an advantage over adversaries. In wartime, when a nation's survival is at stake, the reasons for secrecy are most apparent, the secrecy restrictions imposed by the government are most widespread,[*] and acceptance of those restrictions by the citizens is broadest.[†] In peacetime, there are fewer reasons for secrecy in government, generally the government classifies less information, and citizens are less willing to accept security restrictions on information.

MAJOR AREAS OF CLASSIFIED INFORMATION

The information that is classified by most democracies, whether in peacetime or wartime, is usually limited to information that concerns the nation's defense or its foreign relations--military and diplomatic information. Most of that information falls within five major areas: (1) military operations, (2) weapons technology, (3) diplomatic activities, (4) intelligence activities, and (5) cryptology. The latter two areas might be considered to be special parts of the first three areas. 
That is, intelligence and cryptology are "service" functions for the primary areas--military operations, weapons technology, and diplomatic activities. From a historical perspective, the classification of weapons technology became widespread only in the 20th century. Classification of information about military operations and diplomatic activities has been practiced for millennia. Examples of military-operations information that is frequently classified include information concerning the strength and deployment of forces, troop movements, ship sailings, the location and timing of planned attacks, tactics and strategy, and supply logistics. Obviously, if an enemy learned the major details of an impending attack, that attack would be less successful than if it came as a surprise to the enemy.* Information possessed by a government about an adversary's military activities or capabilities must be protected to preserve the ability to predict those activities or to neutralize those capabilities. If the adversary knew that the government had this information, the adversary would change those plans or capabilities. Military-operations information is usually classified for only a limited time. After an operation is over, most of the important information is known to the enemy. Weapons technology is classified to preserve the advantage of surprise in the first use of a new weapon,† to prevent an adversary from developing effective countermeasures against a new weapon,‡ or to prevent an adversary from using that technology against its originator (by developing a similar weapon). A major factor in that latter reason for classifying weapons technology is "lead time." Classifying advanced weapons-technology information prevents an adversary from using that information to shorten the time required to produce similar weapons systems for its own use. 
Consequently, assuming continued advancements in a weapons technology by the initial developer of that technology, the adversary's weapons systems will not be as effective as those of the nation that initially developed that technology, and the adversary will be at a disadvantage. With respect to lead time, when weapons systems can be significantly improved, then information on "obsolete" weapons is much less sensitive than information on newer weapons. Thus, information on muzzle-loading rifle technology was not as sensitive as that on breech-loading rifle technology, which was not as sensitive as information on lever-action rifle technology, . . . semiautomatic rifle . . . automatic rifle . . . machine gun. However, with respect to nuclear weapons, a "rogue" nation or terrorist group can probably achieve its objectives just as easily with "crude" kiloton nuclear weapons that might require a ship or truck to transport as with sophisticated megaton nuclear weapons that might fit into a (large) suitcase. Thus, "obsolete" nuclear-weapons technology should continue to be protected, especially with respect to technologies concerning production of highly enriched uranium or other nuclear-weapon materials. Weapons technology includes scientific and technical information related to that technology. World War I marked the start of the "modern" period when science and technology affected the development of weapons systems to a greater degree than at any time previously. That interrelationship became even more pronounced in World War II, with notable scientific and technological successes: the atomic bomb, radar, and the proximity fuse. World War II, particularly with respect to the atomic bomb, marked the first time that the progress of military technology was significantly influenced by scientists, as contrasted to advances by engineers or by scientists working as engineers. 
With respect to classification, the more that applied scientific or technical information is uniquely applicable to weapons, the more likely that this information will be classified. Generally, basic research is not classified unless it represents a major breakthrough leading to a completely new weapons system. An example of that circumstance was the rigid classification during World War II, and for several years thereafter, of much basic scientific research related to atomic energy (nuclear weapons). The need for secrecy in diplomatic negotiations and relations has long been recognized. A nation's ability to obtain favorable terms in negotiations with other countries would be diminished if its negotiating strategy and goals were known in advance to the other countries.* The effectiveness of military-assistance agreements between nations would be impaired if an adversary knew of them and could plan to neutralize them. In New York Times v. United States, the "Pentagon Papers" case, U.S. Supreme Court Justice Stewart recognized the importance of secrecy in foreign policy and national defense matters: It is elementary that the successful conduct of international diplomacy and the maintenance of an effective national defense requires both confidentiality and secrecy. Other nations can hardly deal with this Nation in an atmosphere of mutual trust unless they know that their confidences will be kept . . .. In the area of basic national defense the frequent need for absolute secrecy is, of course, self evident. During the term of the first president, it was established that some need for secrecy in diplomatic matters would remain even after negotiations were completed. President Washington, in 1796, refused a request by the House of Representatives for documents prepared for U.S. treaty negotiations with England and gave the following as one reason for refusal: The nature of foreign negotiations requires caution, and their success must often depend on secrecy; and even when brought to a conclusion a full disclosure of all the measures, demands, or eventual concessions which may have been proposed or contemplated would be extremely impolitic; for this might have a pernicious influence on future negotiations, or produce immediate inconvenience, perhaps danger and mischief, in relation to other powers. It has been said that President Nixon initially was not going to attempt to stop the New York Times and other newspapers from publishing the "Pentagon Papers." However, the executive branch was then in secret diplomatic negotiations with China, and Henry Kissinger "is said to have persuaded the president that the Chinese wouldn't continue their secret parleys if they saw that Washington couldn't keep its secrets." Intelligence information includes information gathering and covert operations. Collecting military and diplomatic information about other nations involves the use of photoreconnaissance airplanes and satellites, communication intercepts, the review of documents obtained openly, and other overt methods. However, information gathering also includes the use of undercover agents, confidential sources, and other covert methods. For those covert activities, secrecy is usually imposed on the identity of agents or sources, on information about intelligence methods and capabilities, and on much of the information received from the covert sources. Few clandestine agents could be recruited (or, in some instances, would live long) if their identity were not a closely guarded secret. Information provided by a clandestine agent must frequently be classified because, if a government knew that some of its information was compromised, it might be able to determine the identity of the person (agent) who provided the information to its adversary. 
Successful intelligence-gathering methods must be protected so that the adversary does not know the degree of their success and is not stimulated to develop countermeasures to stop the flow of information. Intelligence information from friendly nations is generally classified by the recipient country. Allies would be less willing to share intelligence information if they knew that it would not be protected against disclosure. Cryptology encompasses methods to code and transmit secret messages and methods to intercept and decode messages. Writing messages in code, or cryptography,* has been practiced for thousands of years. One of the earliest preserved texts of a coded message is an inscription carved on an Egyptian tomb in about 1900 B.C. The earliest known pottery glaze formula was written in code on a Mesopotamian cuneiform tablet in about 1500 B.C. The Spartans established a system of military cryptography by the 5th century B.C. Persia later used cryptography for political purposes. Cryptography began its steady development in western civilization starting about the 13th century, primarily in Italy. By the early 16th century, Venice's ruling Council of Ten had an elaborate organization for enciphering and deciphering messages. Restrictions on cryptologic information are necessary to protect communications. Diplomatic negotiations could not successfully be conducted at locations other than the seat of government if safe communications could not be established. Cryptologic information must also be protected to prevent an adversary from learning of a nation's capabilities to intercept and decode messages. If an adversary learns that its communications are not secure, it will use another method, which will require additional time and effort to defeat.[‡] The Allies' World War II success in breaking the German codes contributed to shortening that war. 
That success was kept secret until 1974, about 34 years after the German code had been broken and about 29 years after World War II had ended. The U.S. Army's success in breaking a World War II U.S.S.R. code (the Venona project, which began in 1943 and continued until 1980) was not made public until about 1995. That was about 50 years after the first such message had been deciphered (and about 45 years after the U.S.S.R. had learned through espionage of the Army's success).

BASIS FOR CLASSIFICATION IN THE UNITED STATES

The need for governmental secrecy was directly recognized in the U.S. Constitution. Article I, Sect. 5, of the Constitution explicitly authorizes secrecy in government by stating that "Each House shall keep a Journal of its Proceedings, and from time to time publish the same, excepting such Parts as in their Judgment require Secrecy." Also included in the Constitution, in Article I, Sect. 9, is a statement that "a regular Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time." A U.S. Court of Appeals has determined that the phrase "from time to time" was intended to authorize expenditures for certain military or foreign relations matters that were intended to be kept secret for a time. The Constitution does not explicitly provide for secrecy by the Executive Branch of the U.S. Government. However, the authority of the Executive Branch to keep certain information secret from most citizens is implicit in its executive responsibilities, which include the national defense and foreign relations. This presidential authority has been upheld by the Supreme Court in a number of cases. Judicial decisions have also relied on a common-law privilege for a government to withhold information concerning national defense and foreign relations. 
Congress, by two statutes, the Freedom of Information Act and the Internal Security Act of 1950, has implicitly recognized the president's authority to classify information (see Chapter 3). At this time in the United States, information is classified either by presidential authority, currently Executive Order 12958, or by statute, the Atomic Energy Act of 1954, as amended (Atomic Energy Act). Classification under Executive Orders and under the Atomic Energy Act is extensively discussed in Chapters 3 and 4, respectively.

CLASSIFICATION AND SECURITY

Classification has been variously described as the "cornerstone" of national security, the "mother" of security, and the "kingpin" of an information security system. Classification identifies the information that must be protected against unauthorized disclosure. Security determines how to protect information after it is classified. Security includes both personnel security and physical security. The initial classification determination, establishing what should not be disclosed to adversaries and the level of protection required, is probably the most important single factor in the security of all classified projects and programs. None of the expensive personnel-clearance and information-control provisions (physical security aspects) of an information security system comes into effect until information has been classified; classification is the pivot on which the whole subsequent security system turns (excluding security for other reasons, such as to prevent theft of materials). Therefore, it is important to classify only information that truly warrants protection in the interest of national security. Since the mid-1970s, several classification experts have remarked on the increasing emphasis by some government agencies on physical-security matters, which has been accompanied by a decreased emphasis on the classification function. 
One of the founders (and the first chairman) of the National Classification Management Society (NCMS), who was also an Atomic Energy Commission Contractor Classification Officer, has expressed concern about the tendency to emphasize the word "security" at the expense of the word "classification" with respect to security classification of information. In the mid-1980s another charter member of the NCMS pointed out that, although the status of classification still remained high in the Department of Energy (DOE), the situation had changed within the Department of Defense, where Classification Management had been organizationally placed under Security. Even the NCMS, founded as a classification organization, appears to be changing to become increasingly oriented towards security matters rather than classification matters. It is noteworthy that the marked emphasis by the U.S. Government in recent years on physical-security measures has not been accompanied by any significant increased emphasis on classification matters. The previous paragraph was written in 1989, and the trend described in that paragraph has continued. The classification function at DOE headquarters is now a part of the security organization as is the classification function at many DOE operations offices and DOE-contractor organizations. That function generally used to be part of a technical or other non-security organization. The NCMS has also continued to become more security-oriented. With respect to classification as a profession (or lack of recognition thereof), it is interesting to note some comments and a recommendation in the Report of the Commission on Protecting and Reducing Government Secrecy. In this 1997 report, that Commission noted the "all-important initial decision of whether to classify at all," and that "this first step of the classification management process . . . tends to be the weakest link in the process of identifying, marking, and then protecting the information." 
The Commission further stated that "the importance of the initial decision to classify cannot be overstated." However, the Commission then stated that "classification and declassification policy and oversight . . . should be viewed primarily as information management issues which require personnel with subject matter and records management expertise." Although recommending that "The Federal Government . . . [should] create, support, and promote an information systems security career field within the Government," the Commission made no similar recommendation for security classification of information as a profession or career. Res ipsa loquitur. [*] "When a nation is at war many things that might be said in time of peace are such a hindrance to its effort that their utterance will not be endured so long as men fight and that no Court could regard them as protected by any constitutional right" [Schenck v. United States, 249 U.S. 47, 52 (1919) (J. Holmes)]. [†] Since the September 11, 2001, terrorist attacks against the World Trade Center towers and the Pentagon, the United States considers itself to be in a war against terrorism. One consequence has been a significant shift in opinion, not only of the general public but also of some strong supporters of freedom-of-information matters, towards favoring more control of information that might aid terrorists. This increased control, especially pertaining to weapons of mass destruction, includes (1) establishing broader criteria for identifying information that is classified or "sensitive"; (2) permitting reclassification of declassified information, and (3) restricting further governmental distribution of documents already released to the public. 
*However, during the Greek and Roman eras in the Mediterranean, when the infantry was paramount and both sides were approximately equally equipped with respect to weapons, many battles were fought without attempts to maintain secrecy of troop movements or with respect to surprise attacks (B. and F. M. Brodie, From Crossbow to H-Bomb, Indiana University Press, Bloomington, Ind., 1973, p. 17). †"Secret" weapons have proven decisive in warfare. One example of the decisive impact of a new weapon was at the battle of Crecy in 1346. At this battle, the English used their "secret" weapon, the longbow, to defeat the French decisively. Although the French had a two-to-one superiority in numbers (about 40,000 to 20,000), the French lost about 11,500 men, while the English lost only about 100 men (W. S. Churchill, A History of the English-Speaking Peoples, Vol. 1, Dodd, Mead and Co., New York, 1961, pp. 332-351; B. and F. M. Brodie, From Crossbow to H-Bomb, Indiana University Press, Bloomington, Ind., 1973, pp. 37-40). ‡In World War II, the Germans developed an acoustic torpedo designed to home in on a ship's propellers. However, the Allies obtained advance information about this torpedo so that when it was first used by the Germans, countermeasures were already in place (B. and F. M. Brodie, From Crossbow to H-Bomb, Indiana University Press, Bloomington, Ind., 1973, p. 222). *In 1921, the United States, Britain, France, Italy, and Japan held a conference to limit their naval armaments. The United States had broken Japan's diplomatic code and thereby knew the lowest naval armaments that Japan would accept. Therefore, U.S. negotiators had merely to wait out Japan's negotiators to reach terms favorable to the United States (J. Bamford, The Puzzle Palace, Houghton, Mifflin Co., Boston, 1982, pp. 9-10). *The breaking of codes is termed cryptanalysis. [‡] Even "friendly" nations get upset if they know that one of their codes has been broken. 
As noted earlier in this chapter, the United States deciphered Japan's diplomatic code in 1921. Herbert O. Yardley, who was principally responsible for breaking this code, wrote a book, The American Black Chamber, published in 1931, which included information on this matter. Yardley's book did not contribute to developing friendly United States-Japanese relations. A consequence of this revelation was enactment of a U.S. statute that made it a crime for anyone who, by virtue of his employment by the United States, obtained access to a diplomatic code or a message in such code and published or furnished to another such code or message, "or any matter which was obtained while in the process of transmission between any foreign government and its diplomatic mission in the United States" (48 Stat. 122, June 10, 1933, codified at 18 U.S.C. Sect. 952.) B. and F. M. Brodie, From Crossbow to H-Bomb, Indiana University Press, Bloomington, Ind., 1973, p. 172. Hereafter this book is cited as "Brodie." Brodie, p. 233. New York Times v. United States, 403 U.S. 713, 728 (1971). J. D. Richardson, A Compilation of Messages and Papers of the Presidents. 1789-1897, U.S. Government Printing Office, Washington, D.C., Vol. I, at 194-195 (1896). Richard Gid Powers, "Introduction," in Secrecy--The American Experience, by Daniel Patrick Moynihan, Yale University Press, New Haven, Conn., 1998, p. 32. D. Kahn, The Codebreakers, MacMillan, Inc., New York, 1967, p. 71. Hereafter cited as "Kahn." Kahn, p. 75. Kahn, p. 82. Kahn, p. 86. Kahn, p. 106. Kahn, p. 109. See, for example, F. W. Winterbotham, The Ultra Secret, Harper & Row, New York, 1974. Halperin v. CIA, 629 F.2d 144, 154-162 (D.C. Cir., 1980). U.S. Constitution, Article II, sect. 2. See, for example, Totten v. United States, 92 U.S. 105 (1875); United States v. Reynolds, 345 U.S. 1 (1952); Weinberger v. Catholic Action of Hawaii, 454 U.S. 139 (1981). F. E. 
Rourke, Secrecy and Publicity: Dilemmas of Democracy, Johns Hopkins Press, Baltimore, 1961, pp. 63-64. D. B. Woodbridge, "Footnotes," J. Natl. Class. Mgmt. Soc. 12 (2), 120-124 (1977), p.122. R. J. Boberg, "Panel--Classification Management Today," J. Natl. Class. Mgmt. Soc. 5 (2), 56-60 (1969), p. 57. E. J. Suto, "History of Classification," J. Natl. Class. Mgmt. Soc. 12 (1), 9-17 (1976), p.13. James J. Bagley, "NCMS - Now and the Future," J. Natl. Class. Mgmt. Soc. 25, 20-29 (1989), p. 28. T. S. Church, "Panel--Science and Technology, and Classification Management," J. Natl. Class. Mgmt. Soc. 2, 39-45 (1966), p. 40. W. N. Thompson, "Security Classification Management Coordination Between Industry and DOD," J. Natl. Class. Mgmt. Soc. 4 (2), 121-128 (1969), p. 121. W. N. Thompson, "User Agency Security Classification Management and Program Security," J. Natl. Class. Mgmt. Soc. 8, 52-53 (1972), p. 52. Department of Defense Handbook for Writing Security Classification Guidance, DoD 5200.1-H, U.S. Department of Defense, Mar. 1986, p. 1-1. F. J. Daigle, "Woodbridge Award Acceptance Remarks," J. Natl. Class. Mgmt. Soc. 21, 110-112 (1985), p. 111. D. C. Richardson, "Management or Enforcement," J. Natl. Class. Mgmt. Soc. 23, 13-20 (1987). Report of the Commission on Protecting and Reducing Government Secrecy, S. Doc. 105-2, Daniel Patrick Moynihan, Chairman; Larry Combest, Vice Chairman, Commission on Protecting and Reducing Government Secrecy, U.S. Government Printing Office, Washington, D.C., 1997. Hereafter cited as the "Moynihan Report." Moynihan Report, p. 19. Moynihan Report, p. 35. Moynihan Report, p. 44. Moynihan Report, p. 111.
The continuing Texas drought has taken an enormous and growing toll on trees, killing as many as half a billion – 10 percent of the state’s 4.9 billion trees – this year alone, the Texas Forest Service estimates. That calculation did not include trees claimed by this year’s deadly and extensive wildfires, even if they were drought-related, Burl Carraway, who heads the agency’s Sustainable Forestry Department, told Texas Climate News. (Previously, the Forest Service estimated that about 1.5 million trees were lost on 34,000 charred acres in the Bastrop County fire, most destructive in Texas history. In another damage assessment, the agency said more than 2,000 fires in East Texas had charred more than 200,000 acres. Texas has about 63 million acres of forestlands.) The estimate that up to half a billion trees have been lost to drought in 2011 was issued Monday by the Forest Service. It was based on statistics tabulated by agency foresters after they canvassed local forestry professionals in their regions, developed estimated percentages of drought-killed trees, and applied them to regional tree inventories. The resulting estimate was that 100 million to 500 million trees with a diameter of at least five inches had perished because of the drought – two to 10 percent of the nearly five billion trees of that size in the state. In 2011, Texas experienced an exceptional drought, prolonged high winds and record-setting temperatures. Together, those conditions took a severe toll on trees across the state. Large numbers of trees in both urban communities and rural forests have died or are struggling to survive. The impacts are numerous and widespread. The agency found that trees in three areas appeared to be hurt the most by the drought: - An area in West Texas including Sutton, Crockett, western Kimble and eastern Pecos counties, with extensive death of Ashe junipers. 
- An area in Southeast Texas including Harris, Montgomery, Grimes, Madison and Leon counties, where many loblolly pines succumbed.
- An area southeast of Austin, including western Bastrop and eastern Caldwell counties as well as neighboring areas, which had widespread mortality among cedars and post oaks.

Also, the agency said, “localized pockets of heavy mortality were reported for many other areas.” The Forest Service plans to use aerial imagery in a more detailed analysis next spring, when trees that entered early dormancy because of the drought may start to recover. In addition, the agency said, “a more scientific, long-term study” of tree losses will be carried out through its Forest Inventory and Analysis program’s census of the state’s trees. Carraway said Forest Service officials “fully expect mortality percentages to increase if the drought continues.” Texas state climatologist John Nielsen-Gammon has said that a second year of drought in 2012 is “likely,” perhaps with more dry conditions following that. Nielsen-Gammon has estimated that about a tenth of the excess heat this past summer was attributable to manmade climate change. He and other climate experts have said hotter, drier conditions are expected to increase in Texas in decades ahead as concentrations of human-created greenhouse gases accumulate in the atmosphere. What the warming average temperature of the planet could mean for forests and other ecosystems was the focus of research findings announced last week by NASA. 
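The reported range of two to 10 percent can be sanity-checked directly from the tree counts the agency released. A minimal sketch (the numbers are the article's; the variable names are ours):

```python
# Sanity check of the Texas Forest Service estimate quoted above:
# 100 to 500 million drought-killed trees, out of roughly 4.9 billion
# trees of at least five inches in diameter statewide.
total_trees = 4.9e9
low_kill, high_kill = 100e6, 500e6

low_pct = 100 * low_kill / total_trees
high_pct = 100 * high_kill / total_trees
print(f"{low_pct:.0f}% to {high_pct:.0f}% of Texas trees")  # prints: 2% to 10% of Texas trees
```

Rounding the two ratios reproduces the agency's stated range of two to 10 percent.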
The study, carried out by researchers from NASA’s Jet Propulsion Laboratory and the California Institute of Technology, used a computer model that projected massive changes in plant communities across nearly half of the earth’s land surface, with “the conversion of nearly 40 percent of land-based ecosystems from one major ecological community type – such as forest, grassland or tundra – toward another.” The NASA announcement added: The model projections paint a portrait of increasing ecological change and stress in Earth’s biosphere, with many plant and animal species facing increasing competition for survival, as well as significant species turnover, as some species invade areas occupied by other species. Most of Earth’s land that is not covered by ice or desert is projected to undergo at least a 30 percent change in plant cover – changes that will require humans and animals to adapt and often relocate. In addition to altering plant communities, the study predicts climate change will disrupt the ecological balance between interdependent and often endangered plant and animal species, reduce biodiversity and adversely affect Earth’s water, energy, carbon and other element cycles. “For more than 25 years, scientists have warned of the dangers of human-induced climate change,” said Jon Bergengren, a scientist who led the study while a postdoctoral scholar at Caltech. “Our study introduces a new view of climate change, exploring the ecological implications of a few degrees of global warming. While warnings of melting glaciers, rising sea levels and other environmental changes are illustrative and important, ultimately, it’s the ecological consequences that matter most.” When faced with climate change, plant species often must “migrate” over multiple generations, as they can only survive, compete and reproduce within the range of climates to which they are evolutionarily and physiologically adapted. 
While Earth’s plants and animals have evolved to migrate in response to seasonal environmental changes and to even larger transitions, such as the end of the last ice age, they often are not equipped to keep up with the rapidity of modern climate changes that are currently taking place. Human activities, such as agriculture and urbanization, are increasingly destroying Earth’s natural habitats, and frequently block plants and animals from successfully migrating. – Bill Dawson Image credits: Photos – Texas Forest Service; Map – NASA
Feral cats occur right across the continent in every habitat type, including deserts, forests and grasslands. Total population estimates vary from 5 million to 18 million feral cats. Each feral cat kills between 5 and 30 animals per day. Taking the lower figure in that range (five) and multiplying it by a conservative population estimate of 15 million cats gives a minimum estimate of 75 million native animals killed daily by feral cats. It is clear that cats are playing a critical role in the decline of our native fauna. They are recognised as a primary cause of several early mammal extinctions and are identified as a factor in the current declines of at least 80 threatened species. AWC has developed a practical strategy designed to minimise their impacts and facilitate the development of a long-term solution. This includes:
- GROUND COVER: impairing the hunting efficiency of cats in grasslands and woodlands by manipulating ground cover through:
- minimising the frequency and extent of late-season wildfires; and
- reducing the density of feral herbivores.
- DINGOES AS A BIOLOGICAL CONTROL: reducing the density of cats and affecting their hunting behaviour by promoting a stable Dingo population.
- FERAL CAT-FREE AREAS: establishing feral cat-free areas to protect core populations of species most vulnerable to cats. AWC’s Scotia contains the largest cat-free area on the mainland; in total, AWC manages more feral cat-free land on mainland Australia than any other organisation.
- STRATEGIC CONTROL: strategic implementation of control measures such as shooting and baiting to protect highly threatened species.
- RESEARCH: generating scientific knowledge that will help design a long-term solution enabling the control of cats and their impacts across landscapes and, ideally, the eradication of feral cats.
We need your help in the battle to save our wildlife from feral cats. Please make a tax deductible donation to support practical land management that will limit the impact of cats. 
Your donation will help protect native animals at risk from feral cats, such as the Bilby, the Mala and a host of our small northern mammals. To donate, please click here. To learn more about this project, please read pages 4-7 of the Summer 2012-2013 issue of Wildlife Matters here. Find out more at Australian Wildlife Conservancy
S.A.D.D. is an acronym used to describe the symptoms/signs/strong behavioral patterns indicating that a nonwhite person is having sex with a white person and/or having frivolous contact with white people (not using the time with white people for constructive benefit). S- Space for white people. The nonwhite person/victim of racism will seek space for white people in discussions of racism, often seeking space for white people who are "not racist," and has a strong habit of not suspecting the white people they come in contact with of being racist/white supremacist, especially the white person they are having sex/sexual contact with. The C.O.W.S. w/ Dr. Eddie Moore, Jr.: "Your definition sounds like it is not leaving space for white people who don't practice racism or is working against racism" - Dr. Eddie Moore, Jr., after hearing Gus T. Renegade's (victim of racism) definition of racism/white supremacy. A- Abstract thought and speech. The nonwhite person/victim of racism won't talk about racism as if it's lived and experienced every day, but will talk about it as if it's "out there." They will use words like "institutional racism" as if institutions were not buildings run by the people inside them. The C.O.W.S. w/ W. Kamau Bell: Victim of racism W. Kamau Bell used the abstract term "Halls of Power" six times to describe the system of white supremacy. D- Defend white people/Defend making white space. The nonwhite person will ferociously defend white people and the space they make for white people in discussions of racism. They will often experience cognitive dissonance when they begin to consider that the white person they are sleeping with might be a racist/white supremacist. The C.O.W.S. w/ Cynthia McKinney: A victim of racism called in after Cynthia McKinney left, defending the space for white people. D- Divided loyalties. A metaphor of divided loyalties:
The nonwhite person will be on the battlefield of countering racism/white supremacy and instead of going full charge at their enemies, they will be discouraging nonwhite persons from fighting, causing conflict and confusion within the tents making it easy for the white supremacists to come slaughter them.
Whole grains: Hearty options for a healthy diet. Find out why whole grains are better than refined grains and how to add more whole grains to your diet. By Mayo Clinic staff. Grains, especially whole grains, are an essential part of a healthy diet. All types of grains are good sources of complex carbohydrates and some key vitamins and minerals. Grains are also naturally low in fat. All of this makes grains a healthy option. Better yet, they've been linked to a lower risk of heart disease, diabetes, certain cancers and other health problems. The healthiest kinds of grains are whole grains. The 2010 Dietary Guidelines for Americans recommends that at least half of all the grains you eat are whole grains. Chances are you eat lots of grains already. But are they whole grains? If you're like most, you're not getting enough whole grains in your diet. See how to make whole grains a part of your healthy diet. Types of grains: Also called cereals, grains and whole grains are the seeds of grasses cultivated for food.
Grains and whole grains come in many shapes and sizes, from large kernels of popcorn to small quinoa seeds. - Whole grains. These are unrefined grains that haven't had their bran and germ removed by milling. Whole grains are better sources of fiber and other important nutrients, such as selenium, potassium and magnesium. Whole grains are either single foods, such as brown rice and popcorn, or ingredients in products, such as buckwheat in pancakes or whole wheat in bread. - Refined grains. Refined grains are milled, a process that strips out both the bran and germ to give them a finer texture and extend their shelf life. The refining process also removes many nutrients, including fiber. Refined grains include white flour, white rice, white bread and degermed cornmeal. Many breads, cereals, crackers, desserts and pastries are made with refined grains, too. - Enriched grains. Enriched means that some of the nutrients lost during processing are added back in. Some enriched grains have the lost B vitamins added back in, but not the lost fiber. Fortifying means adding in nutrients that don't occur naturally in the food. Most refined grains are enriched, and many enriched grains also are fortified with other vitamins and minerals, such as folic acid and iron. Some countries require certain refined grains to be enriched. Whole grains may or may not be fortified.
The good news is that some drug shortages have been resolved, and many essential childhood cancer drugs are now more available. The root of the problem, however, persists. For the past two years, hospitals have been hit with significant shortages of many generic drugs for several diseases, including cancer. In 2011 alone, there were at least 250 different shortages. Pharmaceutical companies and drug manufacturers make less money off many of the older, generic drugs used for cancer treatment. But generally, most shortages are a result of manufacturing snags and production problems. Improvements are being made; cooperation between drug manufacturers and the FDA has helped prevent many new shortages. But overall, oncology drugs for both children and adults remain in short supply. Sandra Kweder, deputy director of the FDA's Office of New Drugs, agrees wholeheartedly: "With regard to oncology drugs we remain extremely concerned about the shortages," she said at a press conference held by the American Society of Clinical Oncology. Although there are also ongoing shortages of anesthesia, pain medicines and antibiotics, the scarcity of cancer drugs has an especially large impact. Last year broke records -- more than 200 cancer drugs were unavailable. When an Ohio plant was shut down due to manufacturing problems last November, production halted on several critical drugs, including a preservative-free version of methotrexate -- the key treatment for the common pediatric cancer acute lymphoblastic leukemia. "The childhood cancer community was very concerned," says Angie Hayes, Case Manager for The National Children's Cancer Society (NCCS).
"Childhood cancer patients and families shouldn't have to delay treatment because hospitals and cancer treatment centers ran out of medication." The Fight Against Childhood Cancer Continues The NCCS has provided assistance to more than 30,000 children in the U.S. For 25 years, NCCS has grown and evolved with programs such as the Pediatric Oncology Program (POP), which has distributed over $54 million to families, and Beyond the Cure. This year alone, Beyond the Cure -- a survivorship program designed to educate children and their families about the challenges they may face as childhood cancer survivors -- has awarded $125,000 in college scholarships to 38 cancer survivors. To learn more about the resources offered by NCCS, visit www.theNCCS.org
Requests: If you need specific information on this remedy - e.g. a proving, a case, info on toxicology or whatsoever - please post a message in the Request area www.homeovision.org/forum/ so that all users may contribute. The opium poppy is indigenous to Asia Minor, and awareness of the euphoriant effect of some part of the poppy plant is implicit in the Sumerian records of 4000 BC. Clear accounts exist of its use in the Egyptian, Greek, and Roman cultures. Paracelsus was aware of its usefulness and prepared the first tincture of opium, called laudanum, subsequently simplified by Sydenham. Friedrich Sertürner (1783-1841) isolated morphine from opium and demonstrated for the first time that a single purified chemical substance could account for the pharmacological effects of a natural product. Sertürner, a reluctant apprentice to a pharmacist in Prussia, was disturbed by the variable potency of available opium preparations and set out to purify and standardize it. Working at a time when neither experimental pharmacology nor the chemistry of natural products was a recognized field of endeavor, Sertürner succeeded in isolating morphine from opium. Using a bioassay in dogs, he established that morphine, as he named the alkaline substance, was the somnifacient principle in opium. His early reports (1803) were either rejected by editors or ignored after publication. He eventually tested his purified preparation on himself and 3 friends, administering 3 doses of 30 mg in 45 minutes and observing vomiting, flushing, and near coma. This work was finally published in 1817 and attracted the interest of the influential French chemist Gay-Lussac. The work of Sertürner influenced Pelletier and Caventou, and in the same year other pure principles from plant sources were successfully isolated. The addicting properties of opium have also been important in its history. In China, opium was used only in the treatment of dysentery until the mid-1700s.
The English, Portuguese, and Dutch built up a large trade supplying opium to China, and addiction had become so much of a problem by the early 1800s that the Chinese government acted to bar the importation of opium and reduce the amount of opium-smoking. These acts precipitated a 3-year war terminated by the Treaty of Nanking (1842), which gave England Hong Kong, opened 5 ports to English traders, and specifically authorized continued trade in opium. The risk of addiction was probably underestimated in the West for some time thereafter. Opium and morphine were widely prescribed and easily available in many patent medicines. Misuse was common until about 1920, and the predominant pattern was one of oral use. Since the laws were changed at that time, the number of people involved has become comparatively small, but the narcotic is now injected. Morphine and other naturally occurring narcotics are isolated from opium. Opium is collected from only one variety of poppy, Papaver somniferum. A few days after the petals fall, the unripe, still succulent seed pod is lightly incised. A day later the sticky brown gum that has collected is scraped from the surface of the pod. As much as 25% of this crude opium may be made up of alkaloids. The content of morphine varies from 9 to 14% and is adjusted to 10% in the standardized preparations. The legal production of opium is regulated by the United Nations. India, Turkey, and Russia are the largest producers of opium from which morphine and other alkaloids are isolated. Morphine is used as such; however, a larger amount is converted chemically into codeine, which occurs in opium in amounts insufficient to meet the needs of medicine. The illegal production of opium is huge. The large amounts of opium still used in the Orient are produced mostly in Southeast Asia. For many years, the illegal heroin that reached the USA originated as opium in Turkey.
Mexican opium is now the primary source of heroin, with smaller contributions from the Orient and Iran.
June 22, 1976. North Atlantic. At 21:13 GMT a pale orange glow behind a bank of towering cumulus to the west was observed. Two minutes later a white disc was observed while the glow from behind the cloud persisted. High probability that this may have been caused by interferometry using 3-dimensional artificial scalar waves (Fourier expansions?) as the interferers. Marine Observer. 47(256), Apr. 1977. p. 66-68. "Unidentified phenomenon, off Barbados, West Indies." August 22, 1969. West Indies. Luminous area bearing 310 degrees grew in size and rose in altitude, then turned into an arch or crescent. High probability that this may have been caused by interferometry using artificial scalar waves (Fourier expansions?). Marine Observer. 40(229), July, 1970. p. 107-108. "Optical phenomenon: Caribbean Sea; Western North Atlantic." Mar. 20, 1969. Caribbean Sea and Western North Atlantic. At 23:15 GMT, a semicircle of bright, milky-white light became visible in the western sky and rapidly expanded upward and outward during the next 10 minutes, dimming as it expanded. High probability that this may have been caused by interferometry using artificial scalar waves (Fourier expansions?). Marine Observer, 40(227), Jan. 1970. p. 17-18. See also: 7B.21 - Electricity; 13.06 - Triple Currents of Electricity; 14.35 - Teslas 3 6 and 9; 16.04 - Nikola Tesla describing what electricity is; 16.07 - Electricity is a Polar Exchange; 16.10 - Positive Electricity; 16.16 - Negative Electricity - Russell; 16.17 - Negative Electricity - Tesla; 16.29 - Triple Currents of Electricity; Figure 16.04.05 and Figure 16.04.06 - Nikola Tesla and Lord Kelvin; Part 16 - Electricity and Magnetism; Tesla - Electricity from Space; What Electricity Is - Bloomfield Moore
Biochemical Conversion Processes The diagram below depicts a high-level view of the primary unit operations in the biochemical conversion process. Specific process operating conditions, and inputs and outputs within and between each unit, vary in practice. These process variations can impact the key performance outcomes (titer, rate, and yield), which determine economic viability when the process is scaled up. The following descriptions highlight issues in each key process step. During pretreatment, biomass feedstock undergoes a process to mechanically or chemically fractionate the lignocellulosic complex into soluble and insoluble components. Soluble components include mixtures of five- and six-carbon sugars (mainly xylose, arabinose, mannose, galactose, and glucose) and some sugar oligomers. Insoluble components include cellulosic polymers and oligomers and lignin (and any other components that may be linked to the constituents). Depending on the exact chemistry chosen for this step, variable amounts of the biomass may be solubilized. The main purpose of this step is to open up the physical structure of the plant cell walls to permit further deconstruction during the hydrolysis step. The more open structure of the resulting material makes the remaining carbohydrate polymers more accessible for hydrolytic conversion to soluble sugars by enzymes or chemicals. The specific mix of sugars and oligomers released depends on the feedstock used and the pretreatment technology employed. In some process configurations, the pretreated material goes through a hydrolysate conditioning and/or neutralization process to adjust the pH of the biomass slurry and remove undesirable by-products of pretreatment that are toxic to the downstream fermenting microorganism. In some cases, this step and hydrolysis, the next step, are combined into a single process.
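The fractionation described above has a simple mass-balance ceiling. Below is a minimal sketch (the feedstock mass and composition fractions are assumed illustrative values, not numbers from this text) of the maximum theoretical monomeric sugar release from the glucan and xylan fractions, using the standard anhydro-correction factors for the water added during hydrolysis:

```python
# Maximum theoretical sugar release from a lignocellulosic feedstock.
# Anhydro-correction factors: hydrolysis adds one water per monomer unit,
# so glucan (162 g/mol anhydro units) -> glucose (180 g/mol) and
# xylan (132 g/mol anhydro units) -> xylose (150 g/mol).

GLUCAN_TO_GLUCOSE = 180.0 / 162.0
XYLAN_TO_XYLOSE = 150.0 / 132.0

def potential_sugars(feedstock_kg, glucan_frac, xylan_frac):
    """Theoretical maximum glucose and xylose (kg) if hydrolysis were complete."""
    glucose = feedstock_kg * glucan_frac * GLUCAN_TO_GLUCOSE
    xylose = feedstock_kg * xylan_frac * XYLAN_TO_XYLOSE
    return glucose, xylose

# Illustrative (assumed) composition for 1 tonne of a stover-like feedstock:
glucose_kg, xylose_kg = potential_sugars(1000.0, glucan_frac=0.35, xylan_frac=0.22)
```

Actual release falls short of this ceiling; the gap between theoretical and realized sugar yield is exactly the kind of performance outcome (yield) that the text notes determines economic viability at scale.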
In hydrolysis, the pretreated material, with the remaining solid carbohydrate fraction, primarily cellulose, is guided through a chemical reaction that releases the readily fermentable sugar, glucose. This can be accomplished with enzymes, such as cellulases, or with strong acids. Addition of other enzymes in this step, such as xylanases, may allow for less severe pretreatment conditions, potentially resulting in a reduced overall pretreatment and hydrolysis cost. Depending on the process design, enzymatic hydrolysis requires several hours to several days, after which the mixture of sugars and any unreacted cellulose is transferred to the fermenter. Current processes use purchased enzymes or enzymes manufactured on site, based on the economics of the specific process. For technologies using strong acids, acid recovery is important for the economics to be viable. Currently, the most common approach to biological processing is to employ a fermentation step, wherein an inoculum of a fermenting microorganism is added to the biomass hydrolysates. Fermentation of all sugars is then carried out, and after a few days of continued saccharification and fermentation, nearly all of the sugars are converted to biofuels or other chemicals of interest. The resulting aqueous mixture or two-phase broth is sent to product recovery. Some processes combine the hydrolysis and fermentation steps (i.e., simultaneous saccharification and fermentation [SSF]). Chemical or catalytic conversion can be used in place of, or in addition to, fermentation to convert the hydrolysis products, such as sugars, alcohols, or a variety of other stable oxygenates, to desired end products. The addition of a catalyst makes the reaction less energy intensive, thus making the entire process more efficient. 
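To make the fermentation yield concrete, here is a hedged sketch of the stoichiometric ceiling for the glucose-to-ethanol case. The stoichiometry (C6H12O6 -> 2 C2H5OH + 2 CO2) is textbook; the `metabolic_efficiency` parameter is an assumed illustrative number, not a figure from this text:

```python
# Stoichiometric ceiling: 180 g of glucose can yield at most 2 x 46 g of
# ethanol, i.e. roughly 0.51 g ethanol per g glucose; the rest leaves as CO2.

MW_GLUCOSE = 180.16
MW_ETHANOL = 46.07
THEORETICAL_YIELD = (2 * MW_ETHANOL) / MW_GLUCOSE  # ~0.511 g ethanol / g glucose

def ethanol_g(glucose_g, metabolic_efficiency=0.90):
    """Ethanol mass (g) at a given fraction of the stoichiometric maximum.

    metabolic_efficiency is an assumed process parameter: real organisms
    divert some carbon to cell mass and by-products.
    """
    return glucose_g * THEORETICAL_YIELD * metabolic_efficiency
```

The same bookkeeping applies to other target fuels and chemicals; only the stoichiometric factor changes.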
Different reactions achieve different yields and intermediates while targeting different end fuels and chemicals, so current research is aimed at identifying optimal process combinations with respect to efficiency, feedstock utilization, cost, sustainability, finished product characteristics, and anticipated market demands. Product Upgrading and Recovery Product upgrading and recovery varies based on the type of conversion used and the type of product generated, but in general, involves any biological and chemical transformations, distillation or any other separation and recovery method, and some cleanup processes to separate the fuel from the water and residual solids. Residual solids are composed primarily of lignin, which can be burned for combined heat and power generation or chemically converted to intermediate chemicals or intermediates for other uses.
In Think: The Life of the Mind and the Love of God, John Piper shows us the intricate continuity between the Christian faith and intellectual development. Focusing on the life of the mind helps us to know God better, love him more, and care for the world around us in thoughtful ways. While we must value the experiential and emotional elements of our faith, the intellectual aspects are far too often neglected, and as Piper says in this book, we also need to practice careful thinking about God. Piper contends that "thinking is indispensable on the path to passion for God." So how are we to maintain a healthy balance of mind and heart, thinking and feeling? Piper urges us to think for the glory of God. He demonstrates from Scripture that glorifying God with our minds and hearts is not either-or, but both-and. Thinking carefully about God fuels passion and affection for God. Likewise, Christ-exalting emotion leads to disciplined thinking.
Baltimore Gas and Electric Co. is about to replace all residential gas and electric meters with smart meters, saying the meters manage consumers' energy usage better and, in time, save money. Some customers are questioning whether the new technology could be bad for their health. A smart meter basically reads itself, then sends the energy usage reading wirelessly to BGE. The utility company has touted the benefits to customers, eventually saving customers money. "You are going to be able to actually go onto your computer, you are going to be able to see in near real time -- 24 hours, roughly -- your energy usage," BGE spokesman Rob Gould said. But not everyone is looking forward to smart meter technology, including Junghie Elky, who said she suffers from something called electromagnetic sensitivity and believes radio frequencies make her sick. "I'm a little bit nervous about that, a bit worried about the health effects," Elky said. "At my worst, I was so sensitive, I could not touch things with electricity without feeling pain. I couldn't watch TV because the radiation from the screen would make me dizzy." Elky said she would experience headaches, fatigue and earaches that forced her to take a leave of absence from work. Magda Havas, an associate professor of environmental and resource studies at Trent University in Ontario, Canada, has studied electromagnetic sensitivity for decades. She said 3 to 5 percent of the population suffer severe symptoms, and up to a third of the population have mild to moderate symptoms. "There are so many sources of radiation, so smart meters are just one additional source," Havas said. "We have a lot of it in our home. We live near cell phone antennas and they all emit radio frequency. What seems to happen is once the smart meters go in, people seem to be fine, and then a few of them become quite ill."
When set to hair mode, the particle system creates only static particles, which may be used for hair, fur, grass and the like. The first step is to create the hair, specifying the number of hair strands and their lengths. The complete path of the particles is calculated in advance, so everything a particle does, a hair may do also. A hair is as long as the particle path would be for a particle with a lifetime of 100 frames. Instead of rendering every frame of the particle animation point by point, control points are calculated with an interpolation: the segments. The next step is to style the hair. You can change the look of base hairs by changing the Physics Settings. A more advanced way of changing the hair appearance is to use Children. This adds child hairs to the original ones, and has settings for giving them different types of shapes. You can also interactively style hairs in Particle Mode. In this mode, the particle settings become disabled, and you can comb, trim, lengthen, etc. the hair curves. Hair can now be made dynamic using the cloth solver. This is covered in the Hair Dynamics page. Blender can render hairs in several different ways. Materials have a Strand section, which is covered in the materials section on the Strands page. Hair can also be used as a basis for the Particle Instance modifier, which allows you to have a mesh be deformed along the curves, which is useful for thicker strands, or things like grass or feathers, which may have a more specific look. - Regrow Hair for each frame. - Enables advanced settings, which reflect the same ones as working in Emitter mode. - Set the number of hair strands. Use as few particles as possible, especially if you plan to use softbody animation later. But you need enough particles to have good control. For a “normal” haircut I found some thousand (very roughly 2000) particles to give enough control. You may need a lot more particles if you plan to cover a body with fur.
Volume will be produced later with Children. For settings for adding movement to hair, see Hair Dynamics. - Draw hair as curves. - Draw just the end points of the hairs. - The number of segments (control points minus 1) of the hair strand. In between the control points the segments are interpolated. The number of control points is important: - for the softbody animation, because the control points are animated like vertices, so more control points mean longer calculation times; - for the interactive editing, because you can only move the control points (but you may recalculate the number of control points in Particle Mode). - 10 segments should be sufficient even for very long hair, 5 segments are enough for shorter hair, and 2 or 3 segments should be enough for short fur. Hair can be rendered as a Path, Object, or Group. See Particle Visualization for descriptions. - Fur Tutorial, which produced (Image 4b). It deals especially with short hair. - Blender Hair Basics, a thorough overview of all of the hair particle settings.
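The cost argument above (control points are animated like vertices) can be sketched numerically. A strand with N segments has N + 1 control points, so the solver workload grows with both strand count and segment count. The helper below is only an illustration of that bookkeeping, not Blender API code:

```python
def control_points(strands, segments):
    """Total control points the softbody solver must animate.

    Each strand has (segments + 1) control points, per the rule that
    segments = control points - 1.
    """
    return strands * (segments + 1)

# Using the figures from the text: roughly 2000 strands for a "normal"
# haircut, with 5 segments for shorter hair versus 10 for very long hair.
shorter = control_points(2000, 5)    # 12000 animated control points
longer = control_points(2000, 10)    # 22000 animated control points
```

Doubling the segment count nearly doubles the animated point count, which is why the text recommends as few segments (and strands) as give adequate control.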
Given that this is the first U.S. presidential election since apps have made their way onto most electronic devices, you might think there would be dozens of worthwhile products available on the topic designed for students. Think again. Sure, there are plenty of apps devoted to November’s election; there just aren’t that many that explain the process to those too young to cast a ballot. Those listed here should get the conversation rolling about how we elect a President and the men who have held the office. For older students, look for apps produced by mainstream media outlets with a focus on election coverage. Start with the major newspapers. In addition to hourly news updates, The Washington Post’s W P Politics includes “campaign files,” an interactive polling map, and a fact checker that “assesses the veracity of candidates’ statements,” awarding “one to four Pinocchios” when deemed necessary. Viewers can also watch videos of candidates’ ads—these alone will generate some lively classroom conversations. It doesn’t get much better than this one, and it’s available for free. The NY Times Election 2012 (The New York Times) app promises all readers access to a half dozen “top” news stories. However, only subscribers can view candidate pages and videos and photos from the campaign trail, read the latest polling news, and receive live election results. High school students who love politics are probably already following Mike Allen’s Politico Playbook (Politico) on their iPhones or iPads. Right now the daily news from this Washington insider is full of election-related coverage, and it’s all free. To drive home discussions about the Electoral College, consider downloading the Electoral-Vote.com (Dubbele.com; Gr 9 Up; Free) app, which will bring users to the website. The site, which has been tracking elections for a number of years, includes detailed maps and commentary (sometimes snarky) on the presidential and senate races.
It includes current poll results, graphs, and news features, and links to articles from a range of periodicals and blogs. The 2012 Map: The Presidential Election App (Cory Renzella; Gr 5-9; $1.99) is a better choice for younger students, and it’s available in 12 languages. The projected electoral map is easy to read, and there are daily updates and brief notes on current presidential polls. Users can create maps with their own Electoral College projections and share them with friends via Facebook, Twitter, and email. As they scroll through the archive of electoral maps from 1789 through 2008 they’ll see the borders of the country change, watch as third parties pop up, discover the shrunken map of 1864, and read the embedded notes on each election. For a simple Electoral College map that can be manipulated for classroom use, Election Map 2012 (Teq; Gr 4 Up; $1.99) will also work. A look at the last four election maps is included. Most of the apps for younger students feature lists of the men who have held the office of Chief Executive and provide a few facts about each of them. U.S. Presidents (Encyclopaedia Britannica/MEDL Mobile; Gr 3-6; $1.99) opens with a rendition of “Hail to the Chief” and a photo of President Barack Obama. Beyond this screen viewers can access a page of images of the presidents in chronological order. A tap on any portrait brings up information on the subject along with additional tabs leading to facts about that president’s vice president, First Lady, and birth date, and a bit of trivia. Information on national landmarks and the lyrics of “Hail to the Chief” are also provided. After exploring the app, viewers can take a quiz to test their knowledge of presidential facts, answering such questions as “Who was the first U.S.
president to be elected with no prior political experience?” and “Who was the only president to serve two terms that weren’t back to back?” The “clear interface” of The American Presidents and First Ladies (Multieducator, Inc.; Gr 4-8; $.99) allows users to sort the lists of leaders and their spouses either alphabetically or chronologically. Each entry includes personal facts, along with a page of information on the president’s early years, family, election, and “presidential promises.” The full text of each man’s inaugural address is also included. Information on the First Ladies includes the years before and after each woman’s spouse was in office. Highlights of the app are the embedded videos, which include photos and audio clips. Unfortunately, some out-of-date information and typos mar the overall presentation. What Does the President Look Like? (Kane Miller; Gr 4-8; $2.99), based on the book by Jane Hampton Cook and illustrated by Adam Ziskie, takes a different approach to presidential history. It offers a visual survey of the men who have held that office, along the way providing a “succinct history of visual media, from portrait making through digital imaging.” Here’s what our reviewer, Erin Sehorn, had to say about the app’s options: “The ‘timeline’ chronicles major events in presidential history, as well as the technological evolution of photographs, movies, television, and the Internet. On each page, glowing stars allow users to learn more about the technological advances of presidential image making through pop-up pictures, early political cartoons, and newsreel footage. ‘Resources’ links to the websites used as source material. There are a few glitches—for example, in the ‘Gallery’ portraits appear only briefly, making it difficult to study an image.
Overall, though, kids will enjoy this production.” Our youngest students may not know the ins and outs of how someone makes it into the White House, but they do know that a visit to that famous abode is cause for excitement. While conversation about the election swirls around them, share Marc Brown’s Arthur Meets the President (ScrollMotion, Inc.; $2.99), based on the author’s picture book. In this story, the aardvark’s essay on “How I Can Help Make America Great” wins him and his classmates a trip to the White House to meet the president. En route the characters (and viewers) see and learn about a few other famous Washington, DC landmarks, and perhaps take a moment to ponder what their contribution to our country might be. Eds. note: After a brief hiatus during the transition to our new website, our app reviews are back, moving from School Library Journal’s blog roll into a column and pushing out in our Extra Helping enewsletter. Archived reviews can be found on the SLJ website under “Blogs and Columns.” However, to ensure you receive all of our postings, be sure to add “Touch and Go” to your RSS feed. This article was featured in School Library Journal's Extra Helping enewsletter. Subscribe today to have more articles like this delivered to your inbox for free.
Open source software is becoming more popular, and its price is certainly drawing raves. Open source software is software that is distributed along with its source code. This is usually done free of charge, with the purpose of allowing the improvement of the software to be driven by the general user and developer base. This way, the software can directly address user needs and interests as well as draw closer to perfection in the most efficient way possible. Many organizations have adopted the open source philosophy in order to produce the premier software of their markets, and many nonprofit organizations have coalesced in support of open source software. Even governments have heralded their support of open source software, and some have gone as far as to initiate mandates that render the distribution of open source software all but compulsory. Free? Free? Free? Corporations benefit from open source software as it gives them the capacity to penetrate the market in one of the most effective and cost-efficient ways. These entities can garner loyalty, market presence, and increased stability with the proliferation of open source as opposed to proprietary software. Developers benefit from open source software in that they have the capacity to learn and perfect their methods as well as apply their methodologies and innovation to other large-scale enterprises. This can benefit them both as something they can put on their resumes and as something they can apply to later projects. Individual users benefit from open source software in that their desires and needs are what drive the development of software that is usually free, especially during its initial stages. Managing Free Software? Open source management is usually handled cooperatively, but can also be taken care of through several software tools. 
On one hand, the Open Source Initiative and the debian-legal mailing list offer networking solutions to open source software management, while on the other, individual teams and firms can track the development of open source software through tools created for that purpose. A multitude of such tools is available on the market, many of which also happen to be open source. The Asian open source software movement has been gaining steam at an even faster rate than in the Western world. Not only are there many organizations dedicated to the development and distribution of open source software; many Asian governments have spearheaded initiatives to make open source software the most-used software development approach. These initiatives have largely been due to a desire to become technologically independent of proprietary software from United States-based corporations. As a result of these pursuits, many essential sectors of Asian society, including education and healthcare, have been improved through open source software. Open source software differs from free software or “freeware” in that it is distributed with the source code and with the intention of initiating the gradual improvement of such software. As a result, open source software is often distributed to a larger segment of the market and is not normally used as an advertising method, while freeware is. Accordingly, open source software is available to and can be used by absolutely anyone. The development of open source software is contingent on each respective program or platform’s developers. At the same time, its presentation, purpose, and distribution are based on the open source philosophy, which may alter how the software is initially presented. For example, while proprietary software is often produced as a “finished version,” open source software is presented as a work-in-progress, with the expectation that members of the market base will work to gradually improve the software.
Mount Desert Rock About 20 miles southeast of Mount Desert Island, this is one of the most remote and lonely lights. Even mild storms scour the island, denuding it of anything not firmly secured, and frequently submerge it entirely. Yet each spring keepers transported soil from the mainland, tucking it into crevices on this otherwise barren rock in hopes of cultivating flowers and vegetables. For a few months in summer and early fall, the rock became a colorful garden in the middle of the sea. But late fall and winter storms always washed it all away. It's a challenge to see this lighthouse, even from a boat, but whale-watching cruises from Bar Harbor sometimes pass by. Mount Desert Rock Lighthouse was built in 1830.
The Syrian Flag: Each color in the Syrian flag refers to a specific meaning or period, as follows: 1. Red Color: The blood of the martyrs. 2. Black Color: The Abbasids. 3. White Color: The Umayyads. 4. Green Color: The Rashidun or the Fatimids. 5. The two stars represent the previous union between Egypt and Syria. The Syrian flag is also found as a shield in the middle of the Syrian eagle's heart, an emblem derived from Arab history: it recalls the flag of "Khaled Bin Al Waleed," which was raised when he conquered Damascus in 635 AD. At the bottom of the shield, there are two wheat spikes to represent the country's first crop and its agricultural nature. The eagle grasps in its claws a ribbon that has the words "Syrian Arab Republic" written on it in Kufic (a style of Arabic script). Syrian National Anthem: "Homat el Diyar" (translated "Guardians of the Homeland") is the national anthem of Syria, with lyrics written by Khalil Mardam Bey and music by Mohammed Flayfel, who also composed the national anthem of the Palestinian National Authority, as well as many other Arab folk songs. It was adopted in 1936 and temporarily fell from use when Syria joined the United Arab Republic with Egypt in 1958. It was decided that the national anthem of the UAR would be a combination of the then-Egyptian anthem and "Homat el Diyar." When Syria seceded from the union in 1961, the anthem was completely restored. Translation of Syrian National Anthem: Defenders of our home, Peace be upon you; The proud spirits had refused to subdue. The lion-abode of Arabism, A hallowed sanctuary; The seat of the stars, An inviolable preserve. Our hopes and our hearts, Are entwined with the flag, Which unites our country...
The New Genetics of Mental Illness; June/July 2008; Scientific American Mind; by Edmund S. Higgins; 8 Page(s) Throughout history shamans, clerics and physicians have tried to pin down what goes awry when a person slips into sadness, insanity or psychosis. Theorists have variously blamed mental illness on an imbalance of bodily fluids, the movement of planets, unconscious mental conflict and unfortunate life experiences. Today many researchers believe that psychiatric disorders arise in large part from a person's genetic makeup. Genes, after all, are the blueprints for the proteins that create and control the brain. And yet genetics cannot be the whole story: identical twins, who have virtually the same DNA, do not always develop the same mental disorders. For example, if one identical twin acquires schizophrenia, the other stands just a 50 percent chance of also suffering from the disease. Indeed, abundant data suggest that psychiatric ailments typically result from a complex interplay between the environment and a number of different genes [see "The Character Code," by Turhan Canli; Scientific American Mind, February/March 2008]. But only recently have scientists begun to grasp how the environment affects the brain to produce psychological changes.
For the past decade I’ve been developing forensic techniques for determining if an image is a forgery. The general philosophy that I have adopted is to first concede that there is no single technique that can detect all forms of digital manipulation. I have, therefore, been developing a number of different forensic tools, each tailored to detecting specific forms of photo manipulation – some of these techniques operate on subtle pixel-level statistics that are invisible to the human eye, and others operate on geometric properties that can sometimes be seen with a trained eye. For example, in the image shown below, the bottle’s cast shadow is clearly incongruous with the shape of the bottle (as is the shadow on this cover of Time Magazine). Such obvious errors in a shadow are easy to spot, but more subtle differences can be harder to detect. Shown below are two images in which the bottle and its cast shadow are slightly different (the rest of the scene is identical). Can you tell which is consistent with the lighting in the rest of the scene? The geometry of cast shadows is dictated by the 3-D shape and location of an object and the illuminating light(s). It turns out, perhaps somewhat surprisingly, that there is a simple and intuitive 2-D image-based geometric analysis that can verify the authenticity of shadows. Locate any point on a shadow and its corresponding point on the object, and draw a line through them. The best points to use are the corners of an object, for which it is easier to match shadow and object. Repeat for as many clearly defined shadow and object points as possible. As you do this, you will find that all of the lines should intersect at one point – the location of the illuminating light. Here is the basic intuition for why this image-based construction works. Since light travels in a straight line, a point on a shadow, its corresponding point on the object, and the light source must all lie on a single line. 
Therefore, the light source will always lie on a line that connects every point on a shadow with its corresponding point on an object. Because, under the rules of perspective projection, straight lines project to straight lines, this basic geometry is preserved in the 2-D image of a scene. Notice that this constraint holds regardless of the shape or orientation of the surface onto which a shadow is cast. Shown below are the results of this simple geometric analysis, which clearly reveals the second bottle to be the fake. In practice, there are some limitations to a manual application of this geometric analysis. Care must be taken to select appropriately matched points on the shadow and the object. This is best achieved when the object has a distinct shape (the corner of a cube or the tip of a cone). In addition, if the dominant light is the sun, then the lines may be nearly parallel, making the computation of their intersection vulnerable to slight errors in selecting matched points. And it is necessary to remove any lens distortion in the image, which causes straight lines to be imaged as curves that no longer intersect at a single point. We are developing a suite of forensic tools that will automate and simplify the detection of fakes, one of which will almost certainly rely on the analysis of shadows. [CGI model credit to Jeremy Birn, Lighting and Rendering in Maya]
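The line-intersection test described above is easy to prototype. The sketch below is illustrative only (it is not the author's actual forensic tool, and all point coordinates are invented): it estimates the light position as the least-squares intersection of the lines through each (shadow point, object point) pair, and reports a residual that should be near zero when the shadows are mutually consistent.

```python
import numpy as np

def estimate_light_source(pairs):
    """Least-squares intersection of the lines through (shadow, object) pairs.

    pairs: list of ((sx, sy), (ox, oy)) image coordinates.
    Returns (light_xy, residual): the estimated 2-D light position and the
    RMS perpendicular distance from that point to each line. A large
    residual suggests the shadows are inconsistent with a single light.
    """
    A, b = [], []
    for shadow, obj in pairs:
        s = np.asarray(shadow, float)
        o = np.asarray(obj, float)
        d = o - s                      # direction of the shadow-to-object line
        n = np.array([-d[1], d[0]])    # unit normal to that line
        n /= np.linalg.norm(n)
        A.append(n)
        b.append(n @ s)                # line equation: n . x = n . s
    A, b = np.array(A), np.array(b)
    light, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.sqrt(np.mean((A @ light - b) ** 2))
    return light, residual

# Consistent synthetic example: every line passes through a light at (0, 10).
light_true = np.array([0.0, 10.0])
objects = [np.array([x, 2.0]) for x in (-3.0, 1.0, 4.0)]
# Each shadow point lies farther along the ray from the light through the object.
pairs = [(tuple(o + 1.5 * (o - light_true)), tuple(o)) for o in objects]
est, res = estimate_light_source(pairs)
```

With three or more well-matched pairs the system is overdetermined, which is exactly what makes the residual a useful consistency check; the sun-at-infinity caveat above shows up here as a nearly singular system.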
run the given instructions
Major Section: PROOF-CHECKER-COMMANDS

Example:
(do-all induct p prove)

General Form:
(do-all &rest instruction-list)

Run the indicated instructions until there is a hard ``failure''. The instruction ``succeeds'' if and only if each instruction in instruction-list does. (See the documentation for sequence for an explanation of ``success'' and ``failure''.) As each instruction is executed, the system will print the usual prompt followed by that instruction, unless this output is suppressed by a global state variable. If do-all ``fails'', then the failure is hard if and only if the last instruction it runs has a hard ``failure''. Obscure point: For the record, (do-all ins_1 ins_2 ... ins_k) is the same as (sequence (ins_1 ins_2 ... ins_k)).
Restoring a Waterway for the 21st Century A resource for the local community and a habitat for wildlife You may not have heard of the Wilts & Berks Canal, as it has been derelict for nearly a century. There are ambitious plans to bring the waterway back to life. It's called the Wilts & Berks because, it is said, a lazy draughtsman could not be bothered to write the county names in full! Swindon is the mid-point of the waterway, which linked the Thames at Abingdon with the Kennet & Avon Canal near Melksham. A further link was made from Swindon to the Thames & Severn Canal at Cricklade (the North Wilts Canal). There were also branches to Chippenham, Calne, Longcot (near Shrivenham) and Wantage. The waterway is now located in Wiltshire, Swindon and Oxfordshire, following local government boundary changes in the 1970s. The Wilts & Berks Canal Trust is committed to returning this historic waterway to a navigable state. The canal will restore an important link in the national waterways network, but the project will also be part of the green infrastructure of the region, creating connections between existing wildlife habitats and creating aquatic and wetland habitats for endangered species. Working with our partners we aim to restore the canal route, provide wildlife habitats, cycleways and routes for walkers along the restored towpath. The Trust would like your help to complete this exciting project; please explore this website to find out more.
As a holistic veterinarian, I feel it is incredibly important to take the whole animal into consideration when it comes to nutrition. And, whenever practical, my preference is to provide nutrients, minerals and vitamins in their natural forms. In this post, I’d like to talk to you specifically about vitamin E, to review the strengths and weaknesses of both natural and synthetic forms. Vitamin E is an incredibly complex and important nutrient that, among other things, functions as an antioxidant. Antioxidants are naturally occurring nutrients that promote health by slowing the destructive aging process of cells (a breakdown called “peroxidation”). In peroxidation, damaged molecules known as free radicals steal pieces from other cells, like fat, protein or DNA. The damage can spread, damaging and killing entire groups of cells. While peroxidation can be useful to destroy old cells or germs and parasites, when left unchecked the free radicals it produces also damage healthy cells. Antioxidants help to stem the tide of peroxidation by stabilizing free radicals. Antioxidants like vitamin E are crucial to the health of companion animals of any age. They can improve the quality of the immune response and the effectiveness of vaccines in young pets, and help maintain a vital immune system in seniors.
Bipolar disorder, also known as manic-depressive illness, is a brain disorder that causes unusual shifts in mood, energy, activity levels, and the ability to carry out day-to-day tasks. Symptoms of bipolar disorder are severe. They are different from the normal ups and downs that everyone goes through from time to time. Bipolar disorder symptoms can result in damaged relationships, poor job or school performance, and even suicide. But bipolar disorder can be treated, and people with this illness can lead full and productive lives. Bipolar disorder often develops in a person's late teens or early adult years. At least half of all cases start before age 25. Some people have their first symptoms during childhood, while others may develop symptoms late in life. Bipolar disorder is not easy to spot when it starts. The symptoms may seem like separate problems, not recognized as parts of a larger problem. Some people suffer for years before they are properly diagnosed and treated. Like diabetes or heart disease, bipolar disorder is a long-term illness that must be carefully managed throughout a person's life.
Studies of a thigh bone fossil have uncovered strong evidence that a human-like creature walked upright six million years ago, rather than the widely accepted four million years ago. Walking on two feet, bipedalism, is one of the characteristics that distinguishes humans from apes. ‘Dating the beginnings of bipedalism is very important in the human story,' said Chris Stringer, human evolution expert at the Natural History Museum. 'Because, for many experts, it would mark a clear divergence from the ancestral/ape pattern and show that the human lineage had really begun.' ‘Bipedalism probably does represent a fundamental first step in human evolution,’ Stringer said. ‘As Darwin recognised, walking on two legs frees up the arms and hands for tasks like carrying, tool making, and tool use. And much of what happened in human evolution later on stemmed from it.’ CT (computed tomography) scans were carried out on the fossil thigh bone, and the density patterns of the bone were much closer to a modern human's than to an ape's. Reported on the National Geographic website, Robert Eckhardt, who led the research team at Pennsylvania State University, said ‘In present-day chimps and gorillas, the thicknesses in the upper and lower parts of that bone are approximately equal. In modern humans, the bone on top is thinner than on the bottom by a ratio of one to four or more.’ The ratio in the fossil was one to three, showing a bone formation more in line with an upright walker. Eckhardt and the research team report their findings in the journal Science.
UK hydrogen cars are coming - if you can fill up LONDON (Reuters) - Britain's hydrogen fuel cell car fleet may hit top gear within five years, but only if there is enough investment in filling stations, the UK Hydrogen and Fuel Cells Association (UK HFCA) told Reuters on Friday. Fuel cells convert hydrogen into electricity, with heat and water being the only by-products, and a number of car makers, including Toyota, Ford, and Hyundai, are pushing to commercialize the low-carbon hybrid fuel cell vehicle by 2015. "Somewhere around 2015 to 2017 we'll be over threshold and I think we'll see a larger and growing fleet," UK HFCA chairman Dennis Hayter said. "It's all aligned with the rollout of the infrastructure. In order to get to a semi-ubiquitous availability of hydrogen, then yes, you're talking many billions of pounds, but it doesn't have to come at once." Hayter said fuel cell cars take only minutes to refill and have a range of around 250 miles. Plug-in electric vehicles take hours to recharge and have a range of around 100 miles. Existing petrol filling stations could be converted, with hydrogen companies possibly leasing some of the pumps, while current hydrogen production capacity is seen as adequate for the next decade. "You may find there's a deal to be made between the hydrogen gas and petroleum companies. Things are happening in the background and gradually a network is starting to appear," Hayter said. "At present, the majority of hydrogen is derived from reforming of natural gas for industrial purposes such as refining and in chemicals. "The quantities currently used and likely to be needed for transport in the next five to 10 years would still be minimal alongside hydrogen consumed for industrial use." For the long term estimates of hydrogen costs, Hayter believes it will be competitive with petrol, or cheaper. Using U.S. hydrogen prices of $8 a kilogram, it would cost around $32 to fill up a fuel cell car with a 250-mile range, he said. 
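The fill-up figure quoted above is easy to reproduce. In this back-of-envelope sketch, the roughly 4 kg tank size is an assumption inferred from the quoted $8/kg price and $32 fill cost; it is not stated in the article.

```python
# Back-of-envelope check of the quoted fill-up cost.
# Assumption (not stated in the article): the $32 fill implies a ~4 kg tank.
PRICE_PER_KG_USD = 8.0   # U.S. hydrogen price quoted by Hayter
TANK_KG = 4.0            # inferred: 32 / 8
RANGE_MILES = 250.0

fill_cost = PRICE_PER_KG_USD * TANK_KG      # 32.0 USD per fill
cost_per_mile = fill_cost / RANGE_MILES     # about 0.13 USD per mile
```

Whether that beats petrol per mile depends, as Hayter notes, on how hydrogen ends up being taxed.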
"It's not comparing apples with apples, but if they're the long term costs, then it could be significantly cheaper but it depends on the fuel duty," Hayter added. The UK HFCA is calling for hydrogen not to be taxed as a transport fuel, as petrol is, to help incentivize uptake. Britain has around 30 hydrogen fuel vehicles, mostly buses and taxis in London, with two filling stations in the city and another four expected by 2012, the UK HFCA said. Seen as a way to decarbonise the transport sector, Britain's former Labour government planned to subsidise low-carbon vehicle purchases from 2011, with a grant worth up to 5,000 pounds ($7,584). (Editing by William Hardy)
Even if, internationally, Austria is not considered to be a special case, there is still widespread agreement on the fact that cooperation and the coordination of interests between the federations is one of this country’s distinctive features. The common definition for this type of cooperation is “social partnership”. The federations and chambers work in close contact with one or other of the two political parties, the Austrian People’s Party or the Social Democratic Party of Austria. The considerable economic growth and rise in employment and wages during the 1950s and 1960s created a favourable basis for the exchange of economic and socio-political interests. All this contributed to the widespread establishment of the Austrian system of social partnership in the 1960s. If the 1970s could be regarded as its heyday, the 1990s, in particular, have witnessed a change in this system’s significance. Social partnership is neither anchored in the Austrian constitution nor laid down in any specific act. It is rooted in the free will of the players concerned. To a large extent, it is implemented informally and confidentially and is not normally accessible to the general public. The umbrella federations of the social partners wield great influence as regards political opinion-forming and decision-making. Their co-operation has thus often been criticised as a “secondary government”, although the political omnicompetence often attributed to the social partners has, in fact, never existed as such. The co-operation and coordination of interests among the associations and with the government have only ever applied to specific fields of politics, such as income policies and certain aspects of economic and social policies (e.g. industrial safety regulations, agrarian market legislation, labour market policies and principles of equal treatment). 
In these areas, during the past decades the social partners have substantially contributed to Austria’s economic, social and political stability – evidence of which can be found in economic growth, in the rise of employment, in the expansion of the welfare state and also in the often quoted “social peace”. Several avenues for political decision-making are open to the large national federations. A traditionally used channel is their close relationship with one or the other of the long-standing government parties, i.e. the Social Democratic Party or the Austrian People’s Party. In addition, the federations are incorporated, both formally and informally, into the political opinion-forming process of the relevant ministries, as evidenced by their participation in a number of committees, advisory boards and commissions. Even at the parliamentary level, involvement of experts from the federations and chambers is a normal practice. Austria’s accession to the European Union has expanded the federations’ scope: they not only have privileged access to relevant information and documentation; of even greater importance are their possibilities for influencing the Austrian position on proposed EU legislation. All in all, by comparison with many other countries, this means that the large national federations in Austria have excellent possibilities for shaping the policies relating to their interests. However, social partnership in the true sense of the word goes beyond this: its core task consists of the balancing of opposing interests in the aforementioned political fields through contextual compromises among federations or between the federations and the government. Since the 1980s, economic, social and political changes have become apparent in Austria, too. Evidence of this lies in reduced economic growth, rising budgetary deficits, increasing competition and unemployment, and an expanding rivalry between the political parties. 
Against this backdrop, it has not only become more difficult for the federations to align the different interests of their members to a common denominator: reduced turnout in elections to the chambers and the general calling into question of compulsory membership are symptoms of change. In addition, it is becoming both increasingly difficult and increasingly rare to strike a balance between the federations’ interests. Well-known institutions, such as the Paritätische Kommission für Lohn- und Preisfragen (Parity Commission for Wages and Prices), which – particularly in the comments of foreign observers – has been widely recognised as a central institution of the Austrian social partnership, have lost some of their significance. The changes are mainly manifest in the re-weighting of the influence of the players involved in the political decision-making process; the government has gained formative power and influence. In important budgetary, economic and socio-political questions it decides both the procedure and the core contents. Austria’s accession to the European Union has reinforced this development. At the same time, however, EU membership also entails a loss of terrain for the federations. Decisions on topics such as agricultural, competition and monetary policies are taken at EU level. Here, the influence of the federations is essentially limited to formulating the Austrian position, which is just one out of 15. All this does not currently mean that the system of social partnership has come to an end. There are also visible signs of continuity. The privileged position of the national federations remains unchanged. In the political decision-making process a balance of interests can still be achieved. However, the influence has lessened. Not the end of the social partnership, but certainly changes and reforms to it, are currently on the agenda.
March 1996 – The United Kingdom's Human Fertilisation and Embryology Authority, the main government agency responsible for licensing U.K. embryo research, issues its first license for human embryonic stem cell research to the University of Edinburgh's Institute for Stem Cell Research. February 1997 – Ian Wilmut and other scientists from Scotland's Roslin Institute announce the creation of the sheep Dolly, the world's first successful clone of an adult mammal. March 1997 – Don P. Wolf and a team of researchers at the federally-funded Oregon National Primate Research Center announce that they have produced rhesus monkeys from cloned embryos, the first successful use of cloning-related technology in primates. March 1997 – Citing the technology used to create Dolly as raising "profound ethical issues," President Clinton prohibits the allocation of federal funds for human cloning. June 1997 – The Group of Eight, consisting of the United States, Canada, France, Germany, Italy, Japan, Russia and the United Kingdom, adopts a resolution agreeing on "the need for appropriate domestic measures and close international cooperation to prohibit the use of somatic cell nuclear transfer to create a child." November 1997 – UNESCO adopts the Universal Declaration on the Human Genome and Human Rights. Article 11 specifically prohibits the reproductive cloning of human beings. January 1998 – Physicist Richard Seed announces he has formed a team to attempt human cloning before the advent of legislation banning the technology. January 1998 – The Council of Europe amends its Convention on Human Rights and Biomedicine to prohibit reproductive and therapeutic cloning of human beings. To date, the revised convention has been ratified and implemented by 14 countries. January 1998 – The U.S. Food and Drug Administration claims authority to regulate, and pre-approve, experiments involving human cloning in the United States. 
June 1998 – Michigan becomes the first state to enact a law prohibiting human cloning. November 1998 – Two separate teams of scientists, one led by James A. Thomson of the University of Wisconsin and Joseph Itskovitz-Eldor of Israel's Rambam Medical Center, the other by John D. Gearhart of the Johns Hopkins University School of Medicine, announce that they have successfully isolated human embryonic stem cells for the first time. November and December 1998 – In claims greeted with skepticism, researchers at Advanced Cell Technology in Massachusetts and Kyunghee University Hospital in South Korea separately announce the successful creation of the first cloned human embryos. Neither organization has ever provided proof to verify their claims. November 2000 – Japan becomes the first Asian country to pass comprehensive legislation outlawing human reproductive cloning. January 2001 – Italian fertility specialist Severino Antinori and U.S. scientist Panayiotis Zavos provoke world-wide condemnation following the announcement of their goal to be the first scientists to clone a human being. January 2001 – Gerald P. Schatten and a team of researchers at the federally-funded Oregon National Primate Research Center create a rhesus monkey named "ANDi," the world's first genetically altered primate. July 2001 – The U.S. House of Representatives passes the Human Cloning Prohibition Act to outlaw both reproductive and therapeutic cloning, but the bill dies in the Senate, the closest any national ban has yet come to enactment in America. August 2001 – President Bush restricts federally funded human embryonic stem cell research to existing stem cell lines. The announcement receives criticism both from pro-life advocates opposed to any use of human embryos and from health and research advocates who claim that it will severely limit the development of treatments for various diseases including Alzheimer's, Parkinson's and juvenile diabetes. 
November 2001 – Scientists at Massachusetts-based Advanced Cell Technology announce that they have successfully created human embryos using the process of somatic cell nuclear transfer. December 2001 – Britain passes the Human Reproductive Cloning Act, outlawing reproductive cloning. December 2001 – The United Nations General Assembly passes a resolution creating a committee to address the issues of reproductive and therapeutic cloning. March 2002 – An article published in the Wall Street Journal details various advances in human cloning research being carried out in China. Chinese scientists, led by Lu Guangxiu of Xiangya Medical College, have been successfully cloning human embryos for two years while Sheng Huizen of Shanghai No. 2 Medical University has created embryonic stem cells from human-animal hybrids. September 2002 – California becomes the first state to approve a law legalizing therapeutic cloning. October 2003 – Both Costa Rica and Belgium introduce competing resolutions addressing cloning in a United Nations committee. The Costa Rican resolution, backed by the United States and 43 other countries, calls for an international treaty banning all cloning while the resolution sponsored by Belgium and 13 other countries seeks a treaty that would allow for the possibility of cloning for research. In December, the General Assembly moved to address the issue in its Autumn 2004 session. December 2003 – Clonaid, the biotechnology company founded by the leader of the Raelian religious movement, announces the birth of its first successful clone, a girl, Eve, but provides no proof to substantiate its claims. January 2004 – China and South Korea each adopt regulations that would ban reproductive cloning but would permit human embryo cloning for research. 
February 2004 – Veterinary professor Hwang Woo-suk of Seoul National University in South Korea and a team of researchers announce that they have succeeded in cloning human embryos and extracting stem cells from them. May 2004 – Singapore unveils a draft law that would allow embryo cloning for research but would ban attempts at reproductive cloning, providing a maximum possible penalty of five years imprisonment and a fine of S$100,000. May 2004 – The world's first embryonic stem cell bank opens in Britain. The government's Human Fertilisation and Embryology Authority also announces that a developer of one of the bank's stem cell lines, the Newcastle Fertility Centre at Life, has filed the first-ever application for permission to conduct therapeutic cloning research. The HFEA has yet to issue a license for therapeutic cloning. Alexander Cohen put together this timeline.
By Isaac Sever, Cypress Semiconductor
Stepper motors convert electrical energy into discrete mechanical rotation. They are ideally suited for many measurement and control applications where positional accuracy is important. Stepping motors have the following advantages:
- Full torque when rotation is stopped. This is in contrast to brushed and brushless DC motors, which cannot provide full torque continuously when the rotor is stopped. This aids in maintaining the current position.
- Precise open-loop positioning and repetition. Stepper motors move in discrete steps as long as the motor stays under the maximum torque and current limits. This allows the rotor position to be determined by the control sequence without additional tracking or feedback. High-quality stepping motors have three to five percent precision within a single step.
- Quick start, stop, and reverse capability.
- High reliability, because no brush or physical contact is required for commutation. The life span of a stepping motor depends on the performance of the bearings.
- Microstepping mode can be used, allowing direct connection to a load without intermediate gearing.
- A wide speed range can be controlled by varying the drive signal timing.
Stepping motors also have disadvantages:
- Inherent resonance can cause noise, jerky rotation, and at extreme levels, loss of position.
- It is possible to lose position control in some situations, because no feedback is natively provided.
- Power consumption does not decrease to zero, even if the load is absent.
- Stepping motors have low power density and lower maximum speed compared to brushed and brushless DC motors. Typical loaded maximum operating speeds for stepper motors are around 1000 RPM.
- Complex electronic controls are required.
Figure 1: Structure of motors.
Types of stepping motors
There are several basic types of stepping motors:
- Variable reluctance motors with metal teeth.
- Permanent magnet motors.
- Hybrid motors with both permanent magnets and metal teeth.
Variable reluctance stepping motor
Variable reluctance stepping motors have three to five windings and a common terminal connection, creating several phases on the stator. The rotor is toothed and made of metal, but is not permanently magnetized. A simplified variable reluctance stepping motor is shown in Figure 2. In this figure, the rotor has four teeth and the stator has three independent windings (six poles), creating 30 degree steps.
Figure 2: Simple variable reluctance stepping motor.
The rotation of a variable reluctance stepping motor is produced by energizing individual windings. When a winding is energized, current flows and magnetic poles are created, which attract the metal teeth of the rotor. The rotor moves one step to align the offset teeth to the energized winding. At this position, the next adjacent winding can be energized to continue rotation to another step, or the current winding can remain energized to hold the motor at its current position. When the phases are turned on sequentially, the rotor rotates continuously. The described rotation is identical to a typical BLDC motor. The fundamental difference between a stepper and a BLDC motor is that the stepper is designed to operate continuously stalled without overheating or damage. Rotation for a variable reluctance stepping motor with three windings and four rotor teeth is illustrated in Figure 3.
Winding sequence: 1, 2, 3, 1 → 3 steps → quarter turn; 12 steps per rotation.
As shown in Figure 3, energizing each of the windings in sequence moves the rotor a quarter turn; 12 steps are required for a full rotation.
Table 1: Variable reluctance stepper motor in Figure 3. The three steps shown in Figure 3 move the rotor a quarter turn; a full rotation requires 12 steps.
Typical variable reluctance motors have more teeth and use toothed poles along with a toothed rotor to produce step angles near one degree.
Figure 3: Rotation control of variable reluctance stepping motor.
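The full-step sequencing described above can be sketched in C. The names, bit encoding, and helper functions below are illustrative, not from the article; the numbers match the Figure 3 motor (3 windings, 4 rotor teeth, 30 degree steps).

```c
#include <assert.h>

/* Full-step drive for the 3-winding variable reluctance motor of
 * Figure 3: energizing windings 1, 2, 3 in sequence advances the
 * rotor 30 degrees per step, 12 steps per rotation. */

#define VR_WINDINGS      3
#define VR_ROTOR_TEETH   4
#define VR_STEPS_PER_REV (VR_WINDINGS * VR_ROTOR_TEETH)  /* 12 */

/* One-hot winding pattern for a given step index. */
unsigned vr_winding_mask(int step)
{
    return 1u << (step % VR_WINDINGS);   /* 0b001, 0b010, 0b100, ... */
}

/* Rotor angle in degrees after `step` full steps. */
int vr_rotor_angle(int step)
{
    return (step * 360 / VR_STEPS_PER_REV) % 360;   /* 30 deg/step */
}
```

Holding the same mask energized keeps the rotor parked at the corresponding angle, which is the "full torque when stopped" property listed earlier.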
Permanent magnet stepping motor
A permanent magnet stepping motor consists of a stator with windings and a rotor with permanent magnet poles. Alternate rotor poles have rectilinear forms parallel to the motor axis. Stepping motors with magnetized rotors provide greater flux and torque than motors with variable reluctance. The motor shown in Figure 4 has three rotor pole pairs and two independent stator windings, creating 30 degree steps. Motors with permanent magnets are subject to the influence of the rotor's back-EMF, which limits the maximum speed. Therefore, when high speeds are required, motors with variable reluctance are preferred over motors with permanent magnets.
Figure 4: Permanent magnet stepping motor.
Rotation of a permanent magnet stepping motor is produced by energizing individual windings in a positive or negative direction. When a winding is energized, a north and a south pole are created, depending on the polarity of the current flowing. These generated poles attract the permanent poles of the rotor. The rotor moves one step to align the offset permanent poles to the corresponding energized windings. At this position, the next adjacent windings can be energized to continue rotation to another step, or the current winding can remain energized to hold the motor at its current position. When the phases are turned on sequentially, the rotor rotates continuously. Rotation for a permanent magnet stepping motor with two windings and three pairs of permanent rotor poles (six poles) is shown in Figure 5.
Winding sequence: 1+, 2+, 1−, 2− → 3 steps → quarter turn; 12 steps per rotation.
Table 2: Permanent magnet stepping motor in Figure 5, with one winding energized at a time. Three steps move the rotor a quarter turn; a full rotation requires 12 steps for a bipolar permanent magnet stepper motor.
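The one-winding-at-a-time drive sequence can be sketched as a small table, together with the step-angle relation. The formula 360 / (2 × m × p) for m phases and p rotor pole pairs is a standard textbook form assumed here (it reproduces the article's 30 degree figure), and all identifiers are illustrative.

```c
#include <assert.h>

/* Sketch of the permanent magnet drive of Figure 5: one winding
 * energized per step, polarity alternating through the sequence
 * 1+, 2+, 1-, 2-. The step-angle form 360/(2*m*p) is an assumed
 * standard relation, not quoted from the article. */

/* Step angle in tenths of a degree for m phases, p rotor pole pairs. */
int pm_step_angle_tenths(int m, int p)
{
    return 3600 / (2 * m * p);
}

typedef struct { int winding; int polarity; } pm_step_t;

/* One-winding full-step sequence: 1+, 2+, 1-, 2- repeating. */
pm_step_t pm_one_phase_step(int step)
{
    static const pm_step_t seq[4] = {
        { 1, +1 }, { 2, +1 }, { 1, -1 }, { 2, -1 }
    };
    return seq[((step % 4) + 4) % 4];   /* also handles reverse steps */
}
```

With m = 2 and p = 3 (the Figure 4 motor) the relation gives 30 degree steps; a common hybrid motor with 50 rotor pole pairs gives 1.8 degrees.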
As shown in Figure 5, energizing each winding in sequence through each polarity moves the rotor a quarter turn. As before, 12 steps are required for a full rotation.
Figure 5: Rotation control of permanent magnet stepping motor, sequencing individual windings.
An alternative way to rotate a permanent magnet rotor is to energize both windings in each step. The vector torques generated by the two coils are additive; this doubles the current flowing in the motor and increases the torque. More complex control is also required to sequence the turning on and off of both windings. As shown in Figure 6, energizing two windings in each step and sequencing through each combination of polarities moves the rotor a quarter turn. As before, 12 steps are required for a full rotation.
Table 3: Permanent magnet stepping motor in Figure 6.
Figure 6: Rotation control of permanent magnet stepping motor using both windings together.
Typical permanent magnet motors have more poles to create smaller steps. To make significantly smaller steps, down to one degree, permanent magnet rotors can add metal teeth and toothed windings. This hybrid motor is described in the next section.
Hybrid stepping motor
Hybrid stepping motors combine a permanent magnet and a rotor with metal teeth to provide features of both the variable reluctance and permanent magnet motors. Hybrid motors are more expensive than motors with permanent magnets, but they use smaller steps, have greater torque, and have greater maximum speeds. A hybrid motor rotor has teeth placed along the axial direction. The rotor is divided into parts between permanent magnet poles. The number of rotor pole pairs is equal to the number of teeth on one of the rotor's parts. The hybrid motor stator has teeth creating more poles than just the main poles containing windings. The rotor teeth provide a smaller magnetic circuit resistance in some rotor positions, which improves static and dynamic torque.
This is provided by corresponding teeth positioning; some of the rotor teeth are placed opposite the stator teeth and the remaining rotor teeth are placed between the stator teeth. The number of rotor pole pairs, the stator equivalent poles, and the phase number define the step angle: step angle = 360° / (2 × m × p), where m is the number of phases and p is the number of rotor pole pairs.
Figure 7: Hybrid stepping motor.
Rotation of a hybrid stepping motor is produced with the same control method as a permanent magnet stepping motor, by energizing individual windings in a positive or negative direction. When a winding is energized, a north and a south pole are created, depending on the polarity of the current flowing. These generated poles attract the permanent poles of the rotor and the finer metal rotor teeth. The rotor moves one step to align the offset magnetized rotor teeth to the corresponding energized windings.
Stepping motor control
A step motor is a synchronous electric motor. Its fixed rotor equilibrium position occurs when aligned with the stator magnetic field. When the stator field changes position, the rotor rotates to occupy a new equilibrium position. There are several stepper motor drive modes:
- Full-step mode.
- Double-step mode.
- Half-step mode.
- Microstep mode.
Stepping motors can be controlled in a variety of ways, trading off implementation requirements against greater accuracy and smoother transitions. Rotation control with full-steps, half-steps, and microsteps is described as follows:
Full-step mode for a permanent magnet and hybrid stepping motor is detailed in the Stepper Motor Introduction. Figure 5 illustrates one-phase full-step mode, in which only one winding is turned on at a time. In this mode, the rotor's balanced position for each step is in line with the stator poles. With only half of the motor coils used at a given time, the full torque obtained is limited. Two-phase full-step mode, shown in Figure 6, uses both windings energized in each step.
This doubles the current through the motor and provides about 40 percent more torque than when only one phase is used at a time. With two windings energized, the rotor's balanced position for each step is halfway between the two energized stator poles.
The full-step and double-step drive modes can be combined to generate half-steps of rotation for half-step mode. First one winding is turned on, and then the second winding is energized, moving the rotor half a step toward the second, as shown in Figure 8. Half-stepping with the combination of one and two windings energized produces higher resolution, but does not provide constant torque throughout rotation.
Figure 8: Three half steps, 1/8 of a rotation.
Microstepping mode is an extension of the half-step drive mode. Instead of switching the current in a winding from on to off, the current is scaled up and down in smaller steps. When two phases are turned on and the currents of the phases are not equal, the rotor position is determined by the phase current ratio. This changing current ratio creates discrete steps in the torque exerted on the rotor and results in smaller fractional steps of rotation between each full-step. Microstep mode reduces the torque ripple and low-speed resonance present in the other modes and is required in many situations. Microstepping creates rotation of the rotor by scaling the contributions of the two additive torque vectors from the two stepper motor windings.
Figure 9: Torque in microstepping control mode.
The total torque exerted on the rotor is the vector addition of the torques from the two windings. Each torque is proportional to the position of the rotor and the sine or cosine of the step angle. These equations can be combined and solved for the position of the rotor. Fractional steps are created by scaling torque contributions between windings.
Because torque is proportional to magnetic flux, which is proportional to the current in the winding, the position of the rotor can be controlled by controlling the current flowing in each winding. To create smooth microsteps between full-steps, the current is varied sinusoidally with a 90 degree phase shift between the two windings, as shown in Figure 10. The current is scaled by controlling the root mean square (RMS) current using a current-mode buck converter, commonly called a chopper drive when used with stepper motors. The phase current is converted into a voltage using a sense resistor in each phase ground path. This voltage is routed to a comparator that disables the output whenever the phase current rises above a reference. The comparator reference is provided by a voltage digital-to-analog converter (VDAC). By changing the VDAC-supplied current limit for each microstep, the total motor torque remains approximately constant for each step of the sinusoidal current waveform.
Figure 10: VDAC current limit for microstep mode.
Microstepping allows the rotor position to be controlled with more accuracy and also improves the quality of rotation. Advantages of microstepping are:
- Position is controlled with more accuracy.
- Rotation can be stopped at a specific fraction of a step.
- Transitions are smoother.
- Damped resonance creates fewer oscillations as the motor steps (especially at startup and slowdown).
Figure 11: Smooth transitions between steps and limited oscillations and settling in microstep mode.
PSoC 3 introduction
The CY8C3866AXI device is in the PSoC 3 architecture. A block diagram of the device is shown in Figure 12, with the blocks used in the stepper application highlighted.
Figure 12: PSoC 3 (CY8C3866AXI) block diagram.
The PSoC 3 digital subsystem provides unique configurability of functions and interconnects.
The stepper motor control uses these digital resources to implement timers, pulse width modulator (PWM) blocks, control registers, and a hardware lookup table (LUT). The PSoC 3 analog subsystem provides the second half of the device's unique configurability. The stepper motor control uses dedicated comparators, voltage DACs, and programmable gain amplifiers (PGAs).
Stepper motor control based on PSoC 3
The block diagram of the stepper motor control based on the CY8C3866AXI is shown in Figure 13. The PSoC Creator™ schematic is shown in Figure 14.
Figure 13: Block diagram of PSoC 3 stepper motor controller.
Input signals to the PSoC 3 device are:
- Motor current sensing: analog input pins to detect motor phase current on the shunt resistors, used to limit the current of the motor phases. See details in the following section.
- User input: an analog pin to read a potentiometer for parameter input, plus two digital pins for menu control buttons.
Output signals are:
- Character LCD: digital output port (seven pins) to drive the character LCD on the DVK for menu options and user feedback.
- PWM signals to the high-side drivers (four digital output pins).
- PWM signals to the low-side drivers (four digital output pins).
Figure 14: PSoC Creator schematic for stepper motor control.
The PWMs are not used to produce the typical pulse width modulation output used with other motors. Instead, the PWMs act more as a timer to ensure a maximum chopping frequency to avoid overheating the drivers. Additionally, the PWM 'kill circuit' natively includes the cycle kill mode that implements the chopper drive method by disabling the drive outputs for the remainder of the current PWM cycle after the comparator trips. The PWM signals are routed to a lookup table (LUT) logic block, along with the current stepping stage index.
This logic block implements a LUT using the PLD capabilities of a universal digital block (UDB) and routes the PWM signals to the eight legal output control combinations based on the current polarity of each phase. These control signals are routed through GPIOs to the external power driver circuits that drive the stepper motor. In the demonstrated chopper drive topology, transistors or MOSFETs are typically used to switch the high voltages and currents used to drive the stepper motors. The sequencing of the PWM control signals on the external power drivers produces the step-by-step rotation of the motor.
A timer generates periodic interrupts that trigger each step (or microstep) of the motor. This timer can be used to run the motor at a specific speed, or to a specific position (an exact number of steps). To set the speed of the motor, the interrupt period of the timer is updated by firmware. PSoC 3 also implements current limiting for motor overcurrent protection and microstepping in hardware, as described in the following section.
Microstepping and current protection implementation
Microstepping limits the current flowing in the motor windings to create smooth and well-controlled transitions between full-steps. This functionality also builds overcurrent protection into hardware, shielding the motor from damage. The block diagram of the system with the current feedback sensing paths is shown in Figure 15.
Figure 15: Overcurrent protection block diagram for microstepping.
Motor current is measured with two shunt resistors in the ground paths of the power driver MOSFETs (R1 and R2 in Figure 15). This voltage is low-pass filtered on the board and connected to two analog pins on PSoC 3 (labeled Curr_A and Curr_B). The input voltages are fed into programmable gain amplifiers (PGAs) implemented with the analog continuous time (CT) blocks. The PGA buffers the input voltage and drives it to a continuous time comparator.
This voltage level from the sense resistor is compared to the current limit, set by an 8-bit voltage DAC. For microstepping, the DAC outputs follow sine and cosine waveforms generated from a software lookup table. This limits the motor current sinusoidally for smooth microstepping. The output of the comparator is connected to the PWM block and kills the PWM output when the current limit threshold is exceeded. This provides cycle-by-cycle current limiting to the motor and creates smooth microstepping transitions. The implementation of the current limiting protection in PSoC Creator is shown in Figure 16.
Figure 16: PSoC Creator schematic implementation of current limiting block for microstepping.
The PSoC 3 resources used in current limiting are:
- Two continuous time (CT) blocks, which implement the PGAs.
- Two fixed analog comparators; these are dedicated analog resources and do not use any SC/CT blocks.
- Two 8-bit PWMs implemented in UDBs (the same PWMs used to control the power device drivers). The output of the comparator triggers the kill input to the PWM when a current limiting condition is detected.
- Two 8-bit VDACs. These built-in 8-bit voltage DACs are used to set the threshold for the comparator current limit.
Figure 17 shows the settings for each DAC versus the rotation index and the microstep pointer (ramping 1-128).
Figure 17: Current limit versus time step for 128 step microstepping.
The currents flowing in the two windings are measured with small sense resistors between the power devices and ground. The value of the current detection shunt resistor is a trade-off between power efficiency and robustness of the detection blocks. For a given current limit, the motor current must generate enough voltage change to be detected accurately by the comparator, but increasing the resistance increases heat and reduces efficiency. The current limiting protection mechanism implemented in PSoC 3 hardware is an on-chip, low-cost solution.
The output PWM drivers are controlled by a hardware lookup table. The table takes inputs from the two PWM blocks and a control register that holds the rotation index (as shown in Figure 18). In Table 4, the PWM control hardware LUT receives the stage index and PWM signals as inputs and outputs the eight PWM driver signals.
Table 4: PWM control hardware LUT.
Figure 18: PSoC Creator schematic LUT implementation of PWM output control.
When operating in microstep drive mode, the PWM outputs for PWM_A and PWM_B cycle through 01, 10, and 11. When the stepper motor operates in full-step mode, both PWMs are on (11). In this case, the LUT simplifies to the rotation sequence described earlier in the full-step descriptions, shown in the following table. In Table 5, the simplified MPhase output control hardware LUT receives the stage index and PWM signals as inputs and outputs the eight PWM driver signals.
Table 5: MPhase output control hardware LUT.
The stepper motor can be run at a fixed speed or to a desired position. To run at a fixed speed, the timer period that triggers each step (or microstep) is adjusted. The 16-bit timer terminal count triggers an interrupt that is used to initiate each step. The timer input frequency is 100 kHz to ensure precise speed control. PSoC 3 is also able to receive step pulse commands from an external controller such as a PLC. In Figure 19, the timer terminal count triggers an interrupt that initiates each step.
Figure 19: PSoC Creator schematic implementation of speed control timer.
To run in position control mode, the stepper motor turns a specific number of steps and then stops. (Position control mode is not supported with the user interface in the stepper motor demo.) An internal counter counts the desired steps. When the desired position is reached, the step control from the timer interrupt is masked until the user requests another action.
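The fixed-speed control reduces to a simple period calculation against the 100 kHz timer clock from the text. The 200 full-steps-per-revolution motor used in the checks below is an assumed example, not a value from the article.

```c
#include <assert.h>

/* Step-timer terminal count for a target speed: with a 100 kHz
 * timer clock, ticks = 100000 * 60 / (rpm * steps_per_rev).
 * Firmware would reload the timer period with this value. */

#define TIMER_HZ 100000UL

unsigned long step_period_ticks(unsigned rpm, unsigned steps_per_rev)
{
    return TIMER_HZ * 60UL / ((unsigned long)rpm * steps_per_rev);
}
```

At the roughly 1000 RPM loaded maximum quoted earlier, a 200-step motor still leaves 30 timer ticks per step, so the 100 kHz clock retains resolution across the usable speed range.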
When the motor stops, the phase current is lowered automatically to save power and reduce heating.
The ability to control position in an open-loop configuration by counting steps (or microsteps) depends on the stepper motor operating within its torque and load limits. If the torque/load limits are exceeded, the motor can miss steps and the absolute rotational position information is lost.
There is one main loop and one interrupt service routine (ISR) for control of the motor, the timer ISR. The timer ISR generates an interrupt that triggers the step control function (see Figure 19). Each time the step function is called, the motor takes one step (or microstep). The step function looks up the sinusoidal values from a table and sets the DAC output voltage to control the phase currents. A flow chart of the firmware operation is shown in Figure 20. Other ISRs for the UART and ADC are also used for the demo project UI and GUI interfaces.
Figure 20: Stepper motor control firmware flow chart.
PSoC resource utilization
The stepper motor control uses resources from the digital and analog portions of the PSoC 3 device. The heaviest resource use stems from the VDACs and comparators: two VDACs and two comparators are used for the stepper motor microstepping control. This constraint limits the CY8C3866AXI-040 device to a maximum of two stepper motor controllers.
Table 6: Stepper motor demo CY8C3866AXI-040 resource utilization (blocks with none used are not shown).
Table 7: Stepper motor demo on CY8C3866AXI-040 memory utilization (Keil™ Compiler, Level-5 optimization).
Cypress' stepper motor control with PSoC 3 incorporates current limiting and microstepping control for an optimized solution. With up to 128 microsteps, it is suitable for precision position control. The PSoC 3 stepper motor control solution has a low total system cost and leaves significant PSoC 3 resources available for additional system functions.
- Cypress Application Note AN2229, "Motor Control - Multi-Functional Stepping Motor Driver" by Victor Kremin and Ruslan Bachinsky.
The Battle of Badr
"The Battle of Badr was a key battle in Islam's struggle for independence. Outnumbered greatly, the victory at Badr gave the army great hope and is mentioned in detail in the Quran." 15 points per correct answer - no time limit.
1. Let's start out with the setting of Badr: the Muslims were fighting, but against whom were they fighting?
2. There were only 300 Muslims fighting in the Battle of Badr. How many men did the Meccans have fighting?
3. During the heat of the Battle of Badr, the Prophet Muhammad was said to have done what? (Options include: thrown sand into the air in the direction of the enemy; killed those who fled.)
4. The Battle of Badr is a battle in Islamic history in which it is said that angels participated. One angel, who revealed the Quran to the Prophet, also commanded the thousands of angels at Badr. Who was this angel?
5. At the beginning of the Battle of Badr, three of the Meccans challenged three of the Muslims to a small skirmish. One of the future Caliphs (he was also one of the key subjects of controversy that resulted in the first major schism in Islam) was in the skirmish, along with two other soldiers. Who was this Caliph?
6. What else is God said to have done to the Meccans the night before the Battle of Badr? (Options: heavy winds blew away their arrows; comets raced through the night sky scaring the camels, making them useless for battle; lightning killed the commanders; heavy rain fell on the hill where they were camped.)
7. What key resource at Badr were the Muslims able to cut off from the enemy?
8. How many Muslims died in the Battle of Badr? (One listed option: two hundred and fifty.)
9. How many Meccans did the Muslims capture in the Battle of Badr?
10. What happened to the majority of the captured Meccans at the Battle of Badr? (Options: they were let go; they were tortured; they were executed; they were sold into slavery.)
(Phys.org)—Controlling "mixing" between acceptor and donor layers, or domains, in polymer-based solar cells could increase their efficiency, according to a team of researchers that included physicists from North Carolina State University. Their findings shed light on the inner workings of these solar cells, and could lead to further improvements in efficiency. Polymer-based solar cells consist of two domains, known as the acceptor and the donor layers. Excitons, the bound energy-carrying particles created when the cell absorbs light, must be able to travel quickly to the interface of the donor and acceptor domains in order to be harnessed as an energy source. Researchers had believed that keeping the donor and acceptor layers as pure as possible was the best way to ensure that the excitons could travel unimpeded, so that solar cells could capture the maximum amount of energy. NC State physicist Harald Ade and his group worked with teams of scientists from the United Kingdom, Australia and China to examine the physical structure and improve the production of polymer-based solar cells. In findings published in two separate papers appearing this month online in Advanced Energy Materials and Advanced Materials, the researchers show that some mixing of the two domains may not be a bad thing. In fact, if the morphology, or structure, of the mixed domains is small, the solar cell can still be quite efficient. According to Ade, "We had previously found that the domains in these solar cells weren't pure. So we looked at how additives affected the production of these cells. When you manufacture the cell, the relative rate of evaporation of the solvents and additives determines how the active layer forms and the donor and acceptor mix. Ideally, you want the solvent to evaporate slowly enough so that the materials have time to separate – otherwise the layers 'gum up' and lower the cell's efficiency. We utilized an additive that slowed evaporation.
This controlled the mixing and domain size of the active layer, and the portions that mixed were small." The efficiency of those mixed layers was excellent, leading to speculation that perhaps some mixing of the donor and acceptor isn't a problem, as long as the domains are small. "We're looking for the perfect mix here, both in terms of the solvents and additives we might use in order to manufacture polymer-based solar cells, and in terms of the physical mixing of the domains and how that may affect efficiency," Ade says. More information: "From Binary to Ternary Solvent: Morphology Fine-tuning of D/A Blend in PDPP3T-based Polymer Solar Cells", Advanced Materials, 2012. In the past decade, great success has been achieved in bulk heterojunction (BHJ) polymer solar cells (PSCs), in which donor/acceptor (D/A) bi-continuous interpenetrating networks can be formed; in some recent reports, power conversion efficiency (PCE) even approaches 8%. In addition to the intrinsic properties of active layer materials, such as band gaps and molecular energy levels, morphological properties of the D/A blends, including crystallinity of polymers, domain size, materials miscibility, hierarchical structures, and molecular orientation, are also of great importance for photovoltaic performance of the devices. Therefore, several strategies including slow growth, solvent annealing, thermal annealing, and selection of solvent or mixed solvents have been applied to modify or control the morphology of the D/A blends. Among these, binary solvent mixtures have been successfully used in morphology control. For example, the dichlorobenzene (DCB) or chlorobenzene (CB)/1,8-diiodooctane (DIO) binary solvent system has been widely applied in the PSC device fabrication process.
By mixing a few volume percent of DIO with the host solvent (DCB or CB), the efficiencies of many kinds of polymers can be improved dramatically. Besides DIO, other solvents, like 1,8-octanedithiol (OT), N-methyl-2-pyrrolidone (NMP), 1-chloronaphthalene (CN), and chloroform (CF), can also be used. According to these works, it can be concluded that crystallinity, as well as domain size in the blends, can be tuned effectively by using binary solvent mixtures, and thus binary solvent mixtures play a very important role in high performance PSCs.
BJU Press' Spelling Grade 4 introduces new words to master at the 4th grade level. Word lists are primarily grouped by pattern, with lessons including the list of words plus two challenge words, and a "word sort" section that allows students to sort words by pattern. Additional exercises help students to master syllables, vocabulary knowledge, dictionary skills and more through a variety of fun activities. Thirty-two weekly spelling lists are included, and each list contains approximately 20 words. This 4th Edition has been completely revised to include colorful worktext pages with activities designed to strengthen spelling and communication skills. This resource is also known as Bob Jones Spelling Grade 4 Student Worktext, 2nd Edition.
- Type: Other
- Category: Home Schooling
- ISBN / UPC: 9781606821923/160682192X
- Publish Date: 1/1/2010
- Item No: 319704
- Vendor: Bob Jones University Press
Marshall Plan: a programme established by the US government in 1947 to give economic help to Europe after World War II. It was named after George C. Marshall, who was the US Secretary of State. Thousands of millions of dollars were provided for rebuilding cities, roads, industries etc. Definition from the Longman Dictionary of Contemporary English Advanced Learner's Dictionary.
Gramsci and education Antonio Gramsci is one of the major social and political theorists of the 20th century whose work has had an enormous influence on several fields, including educational theory and practice. Gramsci and Education demonstrates the relevance of Antonio Gramsci’s thought for contemporary educational debates. The essays are written by scholars located in different parts of the world, a number of whom are well known internationally for their contributions to Gramscian scholarship and/or educational research. The collection deals with a broad range of topics, including schooling, adult education in general, popular education, workers’ education, cultural studies, critical pedagogy, multicultural education, and the role of intellectuals in contemporary society.
Forget the batteries for a second; that's just one of a thousand analogies you could use to describe voltage and current, and the reason that no current flows has nothing to do with the electro-chemical properties of batteries. It's far simpler. The easiest way to think of it is this: current will only ever flow in a loop. Even in very complex circuits you can always break things down into loops of current, and if there is no path for current to return to its source, there will be no current flow. In your battery example, there is no return current path, so no current will flow. There is obviously a deeper physics reason why this works, but as the question asked for a simple answer I'll skip the math; Google Maxwell's equations and how they are used in the derivation of Kirchhoff's voltage law. Batteries make a good example for this simply because they are power sources with completely isolated grounds. This would be equally true of any other power source with a completely isolated "ground". However, that is not an easy thing to find. For instance, doing this with two bench supplies would likely make one of the bench supplies very unhappy, but that's not because the effect is different; the difference is that the bench supplies are likely both grounded to the electrical wiring in the building, and as such there is a return path for current to flow through. The water analogy for this is also effective. Think of your battery example this way: you have a water pump (battery A) connected to a pipe (the wire), and you have another water pump (battery B) connected to the same pipe (the wire). Now, in your example there is no return path in the system, so imagine that the pipe is full of water but capped off on both ends. You hit the power switch on the pumps; what happens? The answer is nothing: there is nowhere to move the water to, so the pumps don't even spin. (Ignore turbulence-like effects for this analogy.)
Now if you were to connect the pipe in a loop and hit the switch, the pumps would spin up (voltage) and water would flow (current). If you used two different-speed pumps (different-voltage batteries) and faced them toward each other, one would overpower the other and cause it to spin in the wrong direction (burning out, just like connecting a 9 V and a 6 V battery in parallel). If you connected both pumps pointing in the same direction, you would get more water pressure (voltage) because the pumps are helping each other out (two batteries in series).
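The loop idea above can be sketched numerically with Ohm's law (I = V/R). This is a minimal sketch, not a circuit simulator: the 3-ohm total loop resistance is an arbitrary assumed value, and a source opposing the chosen loop direction is simply entered as a negative EMF.

```python
def loop_current(emfs, resistance):
    """Net current in a single series loop: the sum of the source
    voltages divided by the total loop resistance. Sources that
    oppose the loop direction contribute a negative EMF."""
    return sum(emfs) / resistance

R = 3.0  # ohms, assumed total loop resistance for illustration

# Two batteries aiding each other (pumps pointing the same way):
# the EMFs add, like two batteries in series.
aiding = loop_current([9.0, 6.0], R)     # 15 V / 3 ohms -> 5.0 A

# Two batteries opposing (pumps facing each other): the larger
# one wins, driving current against the smaller one.
opposing = loop_current([9.0, -6.0], R)  # 3 V / 3 ohms -> 1.0 A

# No sources at all (or no closed loop): no current flows.
capped_pipe = loop_current([], R)        # 0.0 A

print(aiding, opposing, capped_pipe)
```

The "capped pipe" case is only an analogy here; in the real open-circuit situation there is no loop at all, so the equation never even applies, which is the original point.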
E Numbers: Emulsifiers
Sunday, Oct 16 2011

Emulsification is the word scientists give to mixing liquids together. Specifically, it is the process by which immiscible liquids, liquids that cannot normally mix, are mixed together. The chemical agent used is called an emulsifier. Emulsifiers are not unique to food: we use them to clean ourselves (soap) and to produce medical injections.

Emulsifiers in Food

The most familiar emulsifier is probably the egg. Eggs are used as an emulsifier in everything from cakes and custard to mayonnaise (Figure 1), and from hollandaise to soufflés. What the egg is doing chemically is allowing the other ingredients to form a stable emulsion (mix). Interestingly, egg itself actually contains two types of emulsifier: one is protein, the other is lecithin (Figure 2). Both have chemical properties shared by water and by fatty substances. This helps mix things up sufficiently well to make a homogeneous mixture of water, fat and lecithin that looks not unlike baby sick. There are others that are used regularly in cooking too. As well as the egg in mayonnaise, mustard powder is also added to many recipes. This also helps it to stay homogeneous. In industrial food production, several harmless emulsifiers, not commonplace in the home, have been used for some time. One such is xanthan gum. The name is perhaps misleading, as it is not a gum as such when bought, but an off-white powder that is usually accused of being either cocaine or flour. The usage of the word gum is perhaps clearer when we consider the properties it has on being mixed with fat and water. Figure 3 shows the transformation. Here, I include weights and volumes so you can do this yourself if you want to. In part A we see the oil (clear yellow layer, 10 mL) above the water (colourless layer, 25 mL).
Adding the xanthan gum (B, 5 g) appears to do little initially (C); however, a brief agitation of the system leads to homogenisation of the three substances (D). Not only do we have a homogeneous mixture, but also one that is thicker than it was previously – squeezing it out gives the appearance of an off-white turd (E). Said turd goes brilliantly with a sprig of basil or rosemary and a glass of chilled white Sancerre.
A mobile phone user outside a mobile payment facility in Port-au-Prince, Haiti.

Our goal: to alleviate poverty by expanding access to digitally based financial tools and services.

At a Glance

Increasing poor people’s access to financial services can help them weather personal financial crises and increase their chances of climbing out of poverty. About 80 percent of the world’s poor adults do not have a bank account or use other formal financial services—not only because of poverty but also due to costs, travel distance, and other barriers. Our strategy aims to capitalize on rapid advances in mobile communications and digital payment systems to connect poor households to affordable and reliable financial tools. Our Financial Services for the Poor strategy, updated in 2012, is led by Rodger Voorhies, director, and is part of the foundation’s Global Development Division.

Poor people do not live in a static state of poverty. Every year, many millions of people transition out of poverty by successfully adopting new farming technologies, investing in new business opportunities, or finding new jobs. At the same time, large numbers of people fall back into poverty due to health problems, financial setbacks, and other shocks. If available at critical moments, effective tools for savings, payment, credit, and insurance can help households capture an opportunity to climb out of poverty or weather a crisis or emergency without falling deeper into poverty. Worldwide, approximately 2.5 billion people do not have a formal account at a financial institution, according to the World Bank’s Global Financial Inclusion Database. As a result, most poor households operate almost entirely in the cash economy, particularly in the developing world. This means they use cash, physical assets (such as jewelry and livestock), or informal providers (such as money lenders and payment couriers) to meet their financial needs—from receiving wages to saving money for fertilizer.
However, these informal mechanisms tend to be insecure, expensive, and complicated to use. And they offer limited recourse when major problems arise, such as a serious illness in the family. A growing body of evidence suggests that increasing poor people’s access to better financial tools can help accelerate the rate at which they move out of poverty and help them hold on to economic gains. However, it is costly to serve poor people with financial services, in part because most of their transactions are conducted in cash. Storing, transporting, and processing cash is expensive for banks, insurance companies, utility companies, and other institutions, and they pass on those costs to customers.

A foundation-supported initiative allows Rwandan farmers to access markets for their beans and maize using mobile phones.

In wealthier countries, people conduct most of their financial activities in digital form, and value is stored virtually and transferred instantaneously. The global revolution in mobile communications, along with rapid advances in digital payment systems, is creating opportunities to connect poor households to affordable and reliable financial tools through mobile phones and other digital interfaces. In fact, research has shown that the most effective way to significantly expand poor people’s access to formal financial services is through digital means. In addition to cost savings, digital financial services offer a wide array of benefits:
- They connect poor people to the formal financial sector and enable them to become customers and suppliers within the wider economy.
- Financial flows can be accurately tracked, resulting in safer and speedier transactions and less corruption and theft.
- Providers can use financial histories to develop products that are better suited to customers’ needs, cash flow, and risk profiles, including fee-for-service offerings and smaller-unit transactions.
- Direct deposits (including wages and government assistance) allow money to “bypass” the home, helping users save rather than spend and often giving women more financial authority within the family.
- Automatic reminders, positive default options, and other choices offered via mobile phone menus offer convenience and save time.

The Bill & Melinda Gates Foundation’s Financial Services for the Poor program aims to play a catalytic role in broadening the reach of digital payment systems, particularly in poor and rural areas, and expanding the range of services available on these platforms. Until the infrastructure and customer base are well established, this might involve a combination of mobile banking services that are accessible via cell phones and brick-and-mortar stores where subscribers can convert cash they earn into digital money (and vice versa). Our approach has three mutually reinforcing objectives:
- Reducing the amount of time and money that poor people must spend to conduct financial transactions
- Increasing poor people’s capacity to weather financial shocks and capture income-generating opportunities
- Generating economy-wide efficiencies by digitally connecting large numbers of poor people to one another, financial services providers, government services, and businesses

A tea vendor in Uttar Pradesh, India, checks her bank balance on her mobile phone.

We are not focused on a particular product or distribution channel, but rather on innovative ways to expand access and encourage markets. At the same time, we are aware that interventions in this and other areas too often involve technologies that are made available to the intended users but are not adopted. To address this demand-side challenge, we are supporting research and product design experiments to identify design features, price incentives, and marketing messages that will encourage poor people to adopt and actively use digital financial services.
We are also supporting policymakers as they work to develop policies and regulations that facilitate these developments. We believe that the combined effect of these interventions will accelerate the rate at which poor people transition out of poverty and decrease the rate at which they fall back into poverty. Our strategy also recognizes that countries are at different stages in developing an inclusive digital financial system and that we must tailor our interventions accordingly.

Areas of Focus

Our work falls into four areas:

Digital payment systems
In countries with a minimum level of connectivity in poor and rural areas, we work with in-country providers to extend the reach of digital payment systems into those communities and encourage poor people to adopt these systems through a mobile phone or other digital interface. Payment systems are crucial because they enable people to collect payments from customers, buy goods, pay for water and electricity, and send money to friends, family, and business partners. They also enable governments to collect taxes and disburse social payments. When these transactions are costly and inconvenient, economic activity is impeded.

Digital financial services
In countries where digital payment systems have taken hold in poor and rural communities, we work with banks, insurance companies, and other providers to increase the range of financial services that people can access in digital form. Many of these services are designed to meet the specific household management needs of low-income people, particularly smallholder farmers and women.

We work at the global level with governments, donors, financial standards-setting bodies, and the private sector to maximize our collective impact on poor people’s access to financial services.

Research and innovation
We collect data to measure the impact of our grants and interventions and to help key stakeholders make better decisions.
We also conduct research and nurture innovations that could lead to longer-term improvements in delivering digital financial services on a broad scale.
There was no question that the monarch was in charge. Elizabeth I (1558-1603) and James I (1603-25) both made it very clear that they ruled the country. They made the laws, they fought the wars, they appointed the top ministers and so on. However, the monarchy worked on the basis of cooperation between the monarch and the political nation. The political nation was the nobles and gentry. The nobles were the very rich landowners. The gentry were the wealthy smaller landowners and also rich merchants in the towns. They were a tiny minority of the population, but they held most of the wealth and power in the kingdom. They saw themselves as the protectors of the ordinary people. They were the Members of Parliament and they collected the king’s taxes. The local gentleman or merchant was usually the local magistrate. The gentry were in charge of the local militia (armed forces). They were also in charge of law and order because there were no police at this time. Their estates and businesses made them the biggest employers. The court was the gathering of people around the monarch. Wherever the monarch was, the court would also be. It included all the top nobles and officials. James I ran a very informal court. In fact he had a reputation for being too informal – he sometimes got drunk and there were lots of stories of very bad behaviour. It was quite easy for a top noble to get to talk to the king. The noble might want to do this because he was concerned about a particular law or policy that he did not agree with. James was usually prepared to explain his policies, and he often made compromises with the nobles. The court was one way that the top nobles could communicate with the king. Parliament was made up of the House of Lords and the House of Commons. The House of Lords contained the great nobles from the richest and most powerful families. The MPs in the House of Commons were elected.
Members of the gentry and wealthy merchant classes elected men from their own class to sit in Parliament. Parliament was an important way that the political nation and the monarch could communicate. The monarch made speeches to Parliament. MPs also made speeches and wrote letters of advice to the monarch. James I and most other monarchs preferred to rule without Parliament. However, James often ran short of money. He needed the support of MPs to agree to new taxes to raise money. He also needed the cooperation of MPs to collect these taxes. As a result, he usually agreed to listen to the concerns of MPs and accepted some of their advice. Charles I was James I’s son. He came to the throne in 1625. He was the ruler of three kingdoms – England, Ireland and Scotland. It was a hard job to rule three kingdoms. They had different laws, churches, languages and traditions. Charles spent most of his time in his richest and most powerful kingdom – England. His biggest problem was working with the MPs in England’s Parliament. Charles believed very strongly in the Divine Right of kings. This meant that the right to rule was based on the law of God. The King was responsible to God alone; therefore nobody could question the King or disobey him. Unfortunately for Charles, the political nation was not happy about such views. They expected to be able to talk to the monarch, discuss policies and reach agreements. Charles was not suited to this approach. He was a private man who did not speak much and who liked order and discipline in all aspects of his life. He never developed his father’s political skill in working out compromises to tricky problems. Charles was shy, small and had a stammer. This probably made him insecure. This may help to explain why his court was very grand and formal. It was held in magnificent palaces like Whitehall, full of riches. Charles invited great painters like Anthony van Dyck to his court. The people there wore expensive, impressive clothes.
There were rules about how everyone should behave. Charles also restricted the number of people who could come and talk to him. They had to have an appointment. His right-hand man, the Duke of Buckingham, controlled access to Charles. Charles tended to rely too heavily on one or two officials he trusted. His closest minister was George Villiers, usually known as Buckingham. Buckingham used his position of power and influence with Charles to make himself and his family rich. He made sure that his friends and family were appointed to the top jobs in the government. As a result, he was widely hated by many Lords and MPs. What was worse, Buckingham was not particularly good at his job. Charles I gave him command of a military expedition against Spain in 1625. It was a failure, with many of the troops being killed by disease or made ill by cheap Spanish wine. He led another disastrous military campaign in 1627. The following year Buckingham was assassinated.
'Old Earth Scientists'... I've never heard that before... You aren't suggesting that there are 'new earth scientists', are you? Well, sort of. There are commonly two types of scientists - old earth (who believe that the earth is billions of years old) and young earth (who believe that the earth is around 6,000 years old). As far as science is concerned, the big bang occurred between approx 14 - 18 billion years ago. As stated above, there are both old earth scientists and young earth scientists. Old earth scientists believe in the big bang theory and that the age of the earth is in the order of billions of years. Having said that, perhaps the above statement should read "As far as old earth scientists believe, the big bang occurred between approx 14 - 18 billion years ago." Furthermore, when you say "concerned" it makes the assumption that the big bang actually did happen. The big bang is a theory, and unless scientists can replicate it, it will forever remain a theory. That's not a theory formed by 'old earth scientists'; that is calculated using every method we have at our disposal: measuring the expansion rate of the universe, measuring light from distant stars etc... there are too many to mention. Unfortunately your statement falls short from the beginning - remember, the big bang theory is just that, a theory. The Bible also confirms that the universe is expanding. Isaiah 40:22 teaches that God “stretches out the heavens like a curtain, and spreads them out like a tent to dwell in.” This verse was written thousands of years before secular scientists accepted an expanding universe. It was only more recently that scientists changed their mind from the universe being constant to actually expanding. There are a few theories floating around with respect to the apparent redshift of stellar objects. Old earth scientists believe it to be a result of bodies moving away from earth.
As such, they have suggested that there should be no fully formed stellar bodies further away than about 8 billion light years. Astronomers have pointed telescopes into supposed redshift deserts (i.e. locations in space where there should be no fully formed bodies) and they found a sky full of fully formed galaxies. Measuring light from distant stars relies on the assumption that light has always moved at a constant rate, which unfortunately has not been proven. 1. The moon moves away from the earth at around 4 cm per year. If the earth was billions of years old, the moon could not be as close to the earth as it is. That suggests that the moon has always been in orbit around the earth for the 4.5 billion years... it hasn't. Unfortunately this is not what old earth scientists believe. They believe that the earth and moon have been around for over 4 billion years. 2. Oil deposits in the earth are under extreme pressure. If the earth was billions of years old, this pressure would have caused the oil to have seeped through the rock layers and eventually the pressure would all be gone - i.e. there would be no oil under pressure today. The oil deposits aren't 4.5 billion years old either... they are from rotting animal/vegetable sources from much later... millions of years, not billions. I should have written this statement differently, i.e. millions of years. The problem still stands, however, that if oil was around millions of years ago, then it could not be under pressure today. 3. The sun is shrinking at a rate of five feet per hour. This means that the sun would have been touching the earth a mere 11 million years ago (let alone billions of years ago). No, that assumes a constant-state universe... the universe is very far from constant... it's expanding and has been since the beginning. Nobody has ever suggested that the earth - moon - sun position has been in existence, let alone constant, since the big bang. Don't old earth scientists make assumptions also?
If you look above, old earth scientists make the assumption that the speed of light is constant. Furthermore they still hold to the assumption that the earth, moon and sun have been around for over 4 billion years. 4. Helium is added to the atmosphere every day. Basically there is not enough helium in the atmosphere to support billions of years. Helium hasn't been added for 4.5 billion years... again, the earth wouldn't have had an atmosphere until recently (recent relative to its 4.5 billion age). According to old earth scientists, the oxygen-enriched atmosphere (basically as we know it today) was formed around 2.7 billion years ago. The amount of helium contained within our atmosphere today is only enough to support thousands of years, certainly not billions. 5. Comets lose mass over time; there would be no comets left if the universe was billions of years old (because comets were apparently a by-product of the big bang). That's misleading. The origin and time of origin of comets is not claimed to be the big bang. That's a straw man. (I am guessing that a straw man is another way of saying clutching at straws?) Again with this one I should not have just skimmed over it but should have elaborated. Comets have long been a good evidence due to their fragile nature and life expectancy. Comets are commonly huge chunks of ice traveling at tremendous speeds through space; when they come close to a star, they begin to melt and so form a trail of moisture. This can't last forever and it will eventually disintegrate. Herein lies a problem for old earth scientists, because there should be no comets left - they should all have been disintegrated by now (given the billions of years). And if we are talking about clutching at straws - here's a good one for you. Old earth scientists have come up with another theory to try and explain why we still have comets today. So in comes the Oort Cloud.
The Oort Cloud is a hypothetical spherical cloud of comets which may lie roughly 1 light year away from our sun. Apparently, these comets become dislodged from the Oort Cloud by the gravitational pull of passing stars and the Milky Way itself (due to it apparently being at the outer edges of our Milky Way). These comets are then free to move about and disintegrate (which is how we see comets today). Now, this Oort Cloud has not been detected or seen; it is another theory - it is just a hypothetical cloud to try and fit in with the mold of an old universe. 6. The earth's magnetic field decays by approximately 5% every century; this means that a mere 10,000 years ago, the earth's magnetic field would have been so strong that the heat it would have produced would have made life on earth impossible. No doubt taken from Barnes's magnetic field argument of 1973. The decay rate he stated has been debunked and stated as flawed. How has it been debunked? 7. Fossilized dinosaur bones - these bones have been found and it is impossible for them to have lasted for millions of years. Why not? They have. The evidence available suggests an asteroid hit the earth approx 65 million years ago, leading to a catastrophic global event. There is a layer of iridium in the earth's stratigraphy that supports this theory. Speaking of clutching at straws - "Why not? They have" goes against what old earth scientists have been telling us for years! Blood cells decay at a much faster rate than the rate at which bones can fossilize. How then can you have a fossilized dinosaur bone which contains blood cells? If we are talking about debunking theories or practices - radiocarbon dating techniques have terrible flaws and rely on many assumptions. Therefore how can you be sure that your 65 million years is accurate? 8. Salt is added every day to the Dead Sea by inflows. Since it has no outlet, the salt content continues to grow. The amount of salt contained within it is not enough to support billions of years.
The Dead Sea didn't spring into existence billions of years ago. It's a result of millions of years of constant change on the earth by volcanic, tectonic and atmospheric activity. The Dead Sea is a baby compared to the age of the earth. I would have thought that you would line up the forming of the seas as we know them now with the catastrophic global event that wiped out the dinosaurs. If not that, then what are you basing your idea on that the Dead Sea is a baby compared to the age of the earth? Are we talking thousands of years, hundreds of thousands, millions or perhaps billions? 9. The earth's population doubles every 50 years (approx); it would take around about 4,000 years to reach the number of people that are on earth today (lines up nicely with the worldwide flood of Noah's day). If we use this figure for millions of years, the earth could not contain the number of people. Also, that matches the evolution model. The expansion in the earth's population is also linked to the expansion of civilisation... not just the existence of humans and their descendants. Could you expand on which evolution model you're referring to? 10. Spiral galaxies appear this way due to their 'rotation'; this rotation would eventually cause them to straighten out, i.e. lose their spiral. There should be no spiral galaxies if the universe was actually billions of years old. That again is a straw man. The big bang theory doesn't suggest spiral galaxies popping into existence at the moment of the big bang. They are formed over many millions of years. Why not? The big bang suggests that everything else popped into existence at the moment of the big bang. If this is not the case, then how did they form? The earth, the universe and everything in it was brought about in creation week. It was a divine event brought about by a supernatural creator.
No it wasn't (that which can be asserted without evidence can also be dismissed without evidence). We have just been discussing a page full of evidences! And faith... Would you build an electronic project based on faith? Would you cross the road by faith? But you yourself are obviously a man of great faith. You believe that the universe and all it contains was brought about by a supposed big bang. To put it lightly - 'Nothing became something and the something exploded.' Where did this matter come from in the first place? Doesn't the big bang go against the law of conservation of mass and energy? If you are dismissing faith, then you must have proof of the big bang. You obviously weren't there when the supposed big bang took place, so it would stand to reason that you can replicate the big bang - after all, we are dismissing faith here. If I am sick I see a doctor; if I have trouble seeing I go to an optician, etc. Faith would not heal me or make me see. Rather, countless selfless individuals who over thousands of years have devoted their lives to bettering mankind. Yes indeed! Isn't it interesting how even though we apparently all stemmed from a common singularity, we are all unique and have our own special gifts and talents? If we look to God's word though, we find that we all have been given these unique gifts and talents - some to be doctors, some to be opticians, some to make super pong tables and some to be astronauts! But back on topic, isn't there an underlying reason that you go to a doctor? You go specifically to a doctor because you have faith in him. If you didn't have faith in him and all his years of training, then you would just go to anyone, wouldn't you? It's just not the case at all. For a start, evolution doesn't need a set of ready-to-be-assembled parts lying around. It's a process beginning with the smallest building blocks at the chemical level and taking millions and millions of years to progress.
Fair enough. Let's walk through this one step at a time, starting from the beginning - how did the very first building block get here? Also, a 747 (or an LED pong table) isn't carrying about obsolete parts of earlier, less successful aircraft in its frame like we are. Could you list these supposed obsolete parts and explain why they are not required? (I think you'll find that every part of our body plays its own important role.) You say that you have faith in fellow humans. Why is that? If we are just a result of random chemical reactions, then why do you trust in them? On that note, why does anyone have morals? Why do we have laws and rules? If we are the by-product of natural selection, in that it is survival of the fittest, who is to say that I can't go out and kill someone - after all, this is how we supposedly came to be! Do you feel sorrow when a family member or close friend dies? I am guessing that you would, but hold on a second - why on earth would you get sad if this is simply what you are arguing for in motion? To expand: if we are brought about by the strongest cells living on and the weaker ones dying off, isn't it good that your family member or friend has died, because it means that the strong have survived and the weak are now dead? You should be sitting there giving high fives to everyone, shouting "Way to go, natural selection!" And finally, why on earth would scientists use evidence from the past to predict the future? If the universe came about by disorder and random chemical reactions, then how on earth could we use this information to reliably predict the future? Uniformity does not make any sense in a universe created by random chance and disorder. Of course this is not the case; we find that the universe's history is very much ordered because God designed it that way.
Many of us are inclined not to talk about things that upset us. We try to put a lid on our feelings and hope that saying nothing will be for the best. But not talking about something doesn't mean we aren't communicating. Children are great observers. They read messages on our faces and in the way we walk or hold our hands. We express ourselves by what we do, by what we say, and by what we do not say. When we avoid talking about something that is obviously upsetting, children often hesitate to bring up the subject or ask questions about it. To a child, avoidance can be a message - If Mummy and Daddy can't talk about it, it really must be bad, so I better not talk about it either. In effect, instead of protecting our children by avoiding talk, we sometimes cause them more worry and also keep them from telling us how they feel. On the other hand, it also isn't wise to confront children with information that they may not yet understand or want to know. As with any sensitive subject, we must seek a delicate balance that encourages children to communicate - a balance that lies somewhere between avoidance and confrontation, a balance that isn't easy to achieve. It involves:
- trying to be sensitive to their desire to communicate when they're ready
- trying not to put up barriers that may inhibit their attempts to communicate
- offering them honest explanations when we are obviously upset
- listening to and accepting their feelings
- not putting off their questions by telling them they are too young
- trying to find brief and simple answers that are appropriate to their questions; answers that they can understand and that do not overwhelm them with too many words.
Perhaps most difficult of all, it involves examining our own feelings and beliefs so that we can talk to them as naturally as possible when the opportunities arise.

Not Having All the Answers

When talking with children, many of us feel uncomfortable if we don't have all the answers.
Young children, in particular, seem to expect parents to be all-knowing - even about death. But death, the one certainty in all life, is life's greatest uncertainty. Coming to terms with death can be a lifelong process. We may find different answers at different stages of our lives, or we may always feel a sense of uncertainty and fear. If we have unresolved fears and questions, we may wonder how to provide comforting answers for our children. While not all our answers may be comforting, we can share what we truly believe. Where we have doubts, an honest "I just don't know the answer to that one" may be more comforting than an explanation which we don't quite believe. Children usually sense our doubts. White lies, no matter how well intended, can create uneasiness and distrust. Besides, sooner or later, our children will learn that we are not all-knowing, and maybe we can make that discovery easier for them if we calmly and matter-of-factly tell them we don't have all the answers. Our non-defensive and accepting attitude may help them feel better about not knowing everything also. It may help to tell our children that different people believe different things and that not everyone believes as we do, e.g., some people believe in an afterlife; some do not. By indicating our acceptance and respect for others' beliefs, we may make it easier for our children to choose beliefs different from our own but more comforting to them.

Last reviewed: By John M. Grohol, Psy.D. on 26 Aug 2010. Published on PsychCentral.com. All rights reserved.
If genealogy books were rated by the pound, the book I examined this week would be number one. I don't recall ever picking up a single genealogy book as thick and heavy as this one. Of course, genealogy books are not graded by heft. Nonetheless, this particular book is the definitive guide to descents from the Magna Carta Barons of 1215 A.D. for over 200 individuals who emigrated from the British Isles to the North American colonies in the 17th century. Magna Carta Ancestry: A Study in Colonial and Medieval Families, by Douglas Richardson, is a 1,099-page reference to those descents, combining research in original records with the use of published literature to provide well-documented ancestral lines for American colonists with Magna Carta ancestry. Yes, that is one thousand ninety-nine pages. Best of all, nearly every page is full of high-quality, well-researched genealogy information. Magna Carta (Latin for "Great Charter") is a document written in 1215 A.D. that served as a charter of England limiting the power of English monarchs. King John was the ruler at the time, and he ruled with an iron fist, much to the chagrin of his noblemen. The barons of England organized numerous uprisings. In the face of such strong and well-organized opposition, King John was forced to renounce certain rights and to grant a charter of liberties. This document stated that the King would respect certain legal procedures and accept the premise that the will of the king could be bound by law. Magna Carta is widely considered to be the first step in a long historical process leading to the rule of constitutional law. Magna Carta was signed in the meadow at Runnymede on June 15, 1215. Numerous disagreements arose immediately, and King John soon repudiated it. A civil war then erupted. In the midst of this war, King John died of dysentery on October 18, 1216. His death quickly changed the nature of the war. His nine-year-old son, Henry III, was soon crowned King of England.
The civil war stopped, and a somewhat modified Magna Carta was issued. When he turned eighteen in 1225, Henry III himself reissued Magna Carta a third time, in a shorter version with only 37 articles. The twenty-five barons who signed the Magna Carta were the leading nobles of England at the time. Most were married, and many had large families. Hundreds of thousands, possibly millions, of people alive today can trace their ancestry back to one or more of these twenty-five barons. Indeed, this book, Magna Carta Ancestry: A Study in Colonial and Medieval Families, lists 238 Colonial-era immigrants to the American colonies with proven descent from the Magna Carta barons. If one of these immigrants is in your family tree, this book will trace your ancestry back to the meadow at Runnymede in 1215 A.D. This scholarly book features thousands of biographical sketches of people who lived in medieval England and their descendants through to those who immigrated to America. The book also contains more than 28,000 source citations to published materials, making it the most documented source book of its kind. The extensive cross-referencing also makes the text simple to follow. In addition, the book contains a 93-page bibliography, probably the most exhaustive listing of medieval genealogy and history ever published. Finally, Magna Carta Ancestry contains an index of over 18,000 entries. Author Doug Richardson has refuted numerous published genealogies in this new book, pointing to source citations that disprove many lineages that have been accepted for decades. If you believe that you have royal or noble English ancestry, you need to check this book! Many new additions to the book show the lineages of colonial immigrants that previously were unknown. As a result, many people will be able to claim noble ancestry for the first time.
While the primary audience for this book is anyone with American ancestry in the Colonial era, the book contains extensive biographical information about thousands of individuals who lived in England between 1215 and the 17th century. As such, the information will be of interest to anyone with ancestry in England, even if their later ancestors moved to Canada, Australia, New Zealand, or elsewhere. A complete list of the 17th-century American immigrants with proven Magna Carta ancestry can be found on the publisher's web site at http://www.genealogical.com/item_detail.asp?afid=&ID=4887 Magna Carta Ancestry: A Study in Colonial and Medieval Families, by Douglas Richardson will be the standard reference for many, many years. This huge scholarly work with more than 28,000 source citations belongs on the shelf at every genealogy library as well as in many private collections. Magna Carta Ancestry sells for $100 and is available directly from Genealogical Publishing Company at http://www.genealogical.com/item_detail.asp?afid=&ID=4887 as well as from Amazon.com and from many other bookstores. You can order it by specifying ISBN 0806317590.
The National Museum of the American Indian on the National Mall opened in September 2004. Fifteen years in the making, it is the first national museum in the country dedicated exclusively to Native Americans. The five-story, 250,000-square-foot, curvilinear building is clad in a golden-colored Kasota limestone that is designed to evoke natural rock formations that have been shaped by wind and water over thousands of years. The museum is set in a 4.25-acre site and is surrounded by simulated wetlands. The museum's east-facing entrance, its prism window and its 120-foot-high space for contemporary Native performances are direct results of extensive consultations with Native peoples. The museum's architect and project designer is the Canadian Douglas Cardinal (Blackfoot); its design architects are GBQC Architects of Philadelphia and architect Johnpaul Jones (Cherokee/Choctaw). Disagreements during construction led to Cardinal being removed from the project, but the building retains his original design intent, and his continued input enabled its completion. The museum's project architects are Jones & Jones Architects and Landscape Architects Ltd. of Seattle and SmithGroup of Washington, D.C., in association with Lou Weller (Caddo), the Native American Design Collaborative, and Polshek Partnership Architects of New York City; Ramona Sakiestewa (Hopi) and Donna House (Navajo/Oneida) also served as design consultants. The landscape architects are Jones & Jones Architects and Landscape Architects Ltd. of Seattle and EDAW Inc., of Alexandria, Virginia.
Ingredients & lore: blended with decaf Ceylon tea, natural peach flavor, marigold flowers, apple pieces and apricots. Cultivation of peaches began in China as early as 2000 BC. The winds of trade brought peaches to Greece and Persia, where they were instantly accepted. The juicy fruit was also a big hit with the Romans, who cultivated it throughout the empire. From Italy, peaches made the leap across the Atlantic, where early settlers planted them along the East Coast. Soon they were so plentiful that local botanists thought of them as native fruits.
Hanukkah begins this year on December 1st, at sundown. Be honest. When I say “Hanukkah,” the first thing you think of is the Adam Sandler song, talking about “eight crazy nights.” If you are a little more connected to Jewish culture, you may also think about a dreidel or potato latkes (pancakes). While it’s commonly called the “Festival of Lights,” a better translation is “Dedication.” Being Jewish (circumcised at 8 days, Bar Mitzvah at age 13) and a Christ-follower (for over 15 years), I’d like to give a brief explanation of this holiday, and why it’s a meaningful opportunity to help me worship the Lord. Here’s the story of Hanukkah: In the 2nd century BC, Antiochus Epiphanes gained control over parts of the Middle East, including Judea (Israel). He erected an altar to Zeus in the Temple in Jerusalem, and sacrificed pigs there, which are unclean to Jews. The Maccabee family led a revolt, finally liberating Jerusalem and the Temple in 165 BC. Before God could be properly worshiped in the Temple, it had to be cleaned and dedicated. The menorah (lamp) had to burn continuously for 8 days for the purification process. Although there was only enough olive oil for one day, the oil miraculously lasted for 8 days and nights. That is why Hanukkah is celebrated for 8 nights. Most people consider this miracle to be an end in itself, but I think the bigger meaning is missed. The point isn’t just that God did a miracle, but that the miracle was the means to allow Him to be properly worshiped. The Temple needed to be purified in order for Yahweh to be worshiped, but it couldn’t be purified unless He worked a miracle. God worked a miracle so that His people could be near Him in worship. Let us not miss that meaning, as we celebrate the Advent of Jesus Christ, the Light of the world (John 8:12). I don’t think we need merely to reflect on the birth of Jesus, but we need to consider why the Father sent His Son.
God performed a miracle (the Incarnation) not as an end in itself, but as a means to allow us to be near Him in worship (through Christ’s redemptive sacrifice for our sins). Jesus did not come only to be marveled at as a baby, but to pour out His life and blood, to open the way for a new covenant with Him.
Human nature and the nature of war
Twenty years ago, Canadian military historian and journalist Gwynne Dyer fascinated the public in 45 countries with his award-winning television series on the nature of war. He followed up that success with an equally remarkable book that grew out of the research that went into the television series. Now Dr. Dyer has published, in his words, "a completely rewritten and updated new edition" of War. People who have never read the book should certainly do so, but should people who read it back in the 1980s reread it? Yes. It cannot be said too often that modern warfare threatens the very existence of human life on this planet. Dr. Dyer argues that, with the proliferation of nuclear weapons, there is every reason to suppose that sooner or later a country made desperate by its fear of defeat in a conventional war will resort to the ultimate weapon. In so doing, that country has the potential to destroy or contaminate much of the rest of the world. Every adult needs to understand the nature of war, its evolution into an all-engulfing process, why it is so difficult to stop once it has begun, and what our best hopes are for ending war or at least limiting its dangers. Several important developments have taken place since Dr. Dyer published the first edition. The Cold War has ended, and the United States is now the world's only superpower. However, the author notes that it won't be long before China and India develop into superpowers, and the danger of an all-out war is bound to increase when some powers are in decline while others, hitherto excluded from the inner circle, are coming to the fore. Nuclear proliferation, the development of chemical and bacterial weapons, and the spread of ballistic missile technology mean that poor nations or even guerrilla groups can now be a threat to world peace. Finally, the U.S.
government's decision to adopt unilateral measures, bypassing the United Nations and the International Court of Justice, has jeopardized the slow, tentative process of creating a world body that can prevent war or limit its destructiveness. New social developments also played a role in Dr. Dyer's decision to write a new edition of War. Recently published findings on primates and the nature of early man, especially the behaviour of hunter-gatherer societies, have caused him to change some of his views about the nature of modern man. The new research has overturned previous assumptions that early man was an essentially peaceful fellow who respected his environment, and the prospect of changing our ingrained behaviour patterns now appears more difficult and challenging. But, he writes, we have to understand the nature of that beast, man, if we are to avoid the path to extinction. For the moment, extinction seems to be the route we have chosen. Several chapters shed light on today's situation. One chapter describes how the U.S. Marine Corps trains young recruits to become killers and willingly offer themselves up as cannon fodder. Even mature men, given the right kind of indoctrination, will put themselves in the forefront of battle. Despite such pessimistic accounts of how easy it is to train people to do what is irrational, Dr. Dyer argues that human beings do not enjoy the prospect of killing their fellow humans. Humans have a strong egalitarian, democratic streak; if it's allowed expression, we opt for rational solutions to the disputes that inevitably crop up between nations, ethnic groups, religions and clans. Consequently, Dr. Dyer believes that spreading democracy is one way of checking the tendency to go to war, since democracies generally do not wage war on other democracies.
He argues that those who best understand the problem - diplomats and soldiers - are the ones most likely to see the value of strengthening international bodies like the UN and the International Court at the expense of national sovereignty. The problem is to convince the peoples of the world and the politicians in charge that this is what we must learn to do. The end of the Cold War has given us a temporary reprieve. But if we do not use this opportunity to strengthen the UN, we may not be able to avoid the doom that surely awaits us. War by Gwynne Dyer, Toronto, 2004, 484 pages, $39.95, cloth. Dr. Benazon is a retired English professor from Champlain Regional College, Quebec.
Eric Weisz, better known to the world as Harry Houdini, was born on this date in 1874. Famous for his feats as an escape artist and magician, Houdini also became one of the most crusading anti-spiritualists of the 1920s. Because of his familiarity with the illusions of stage magic and sleight of hand, Houdini was particularly adept at spotting the trickery commonly used by the so-called psychics and spirit mediums who were then hawking their services to the credulous grieving public as conduits to the afterlife. (Can you imagine anyone being foolish enough to fall for such predatory charlatanry today?) That turn in Houdini’s career led him to collaborate with Scientific American on a lengthy exposé of spiritualism. Scientific American had offered a $5,000 reward to any medium who could satisfy its panel of investigators, which included Houdini, two of the magazine’s editors (J. Malcolm Bird and Austin C. Lescarboura) and others, that his or her paranormal gifts were genuine. Unfortunately, although the magazine’s panel did reveal many frauds during the few years of its tenure, the whole episode ended very badly. The problem—which in retrospect is quite evident to anyone who has read through the Scientific American archives of that time, as I did—was that at least one of the editors, Malcolm Bird, was not so secretly a believer in the afterlife and very much wanted a psychic to succeed. Matters came to a head in 1924 when the team was evaluating a psychic whom it called “Margery” in print, though we now know her to be Mina Crandon, the comely young wife of a Boston socialite and surgeon. You can read Houdini’s own account of the messy business that resulted, but in short: Mina Crandon’s seance tricks, perhaps aided by her personal charm, bamboozled Bird and the rest of the Scientific American group, with the exception of Houdini, who was not initially at their meetings. They were prepared to award her the prize, but Houdini protested that he needed to see for himself.
At the seance, Houdini saw through the deception and called “Margery” on it, much to the annoyance of Bird, who angrily resisted exposing her con game. The arguments that followed led to the dissolution of the ghostbusting squad, and the $5,000 was never awarded. What Houdini’s account does not say, but which I have heard as a perhaps unreliable rumor, is that the dispute between Houdini and Bird actually turned into a physical brawl. (Ahh, the two-fisted SciAm editors of yesteryear….) Also, when I attended James Randi’s The Amaz!ing Meeting in Las Vegas in 2003 and talked about these events, magician and mad debunker Penn Jillette told me that he had heard directly from Mina Crandon’s granddaughter that Mina had been sleeping with several members of the team, including Bird. (Oh, the scandal!) That just might help to account for Bird’s umbrage at Houdini’s harsh quashing of Mina’s scam. But that is not the anti-spiritualist episode I would like to talk about today. Rather, turn to a particular occasion when Houdini was trying to disabuse Sir Arthur Conan Doyle of his own spiritualist inclinations. Conan Doyle was not the paragon of rationality and reason that one might assume the creator of Sherlock Holmes would be: he had a soft spot for mediums. (It tends to make one think of Holmes’s famous dictum that “when you have eliminated the impossible, whatever remains, however improbable, must be the truth” in a somewhat different light.) Nevertheless, because Houdini and Conan Doyle had come to be friends, Houdini wanted to open the author’s eyes to the psychic frauds, and so he staged a demonstration that he hoped would do the trick.
Michael Shermer recently described what happened in his February “Skeptic” column for Scientific American: In the spring of 1922 Conan Doyle visited Houdini in his New York City home, whereupon the magician set out to demonstrate that slate writing—a favorite method among mediums for receiving messages from the dead, who allegedly moved a piece of chalk across a slate—could be done by perfectly prosaic means. Houdini had Conan Doyle hang a slate from anywhere in the room so that it was free to swing in space. He presented the author with four cork balls, asking him to pick one and cut it open to prove that it had not been altered. He then had Conan Doyle pick another ball and dip it into a well of white ink. While it was soaking, Houdini asked his visitor to go down the street in any direction, take out a piece of paper and pencil, write a question or a sentence, put it back in his pocket and return to the house. Conan Doyle complied, scribbling, “Mene, mene, tekel, upharsin,” a riddle from the Bible’s book of Daniel, meaning, “It has been counted and counted, weighed and divided.” How appropriate, for what happened next defied explanation, at least in Conan Doyle’s mind. Houdini had him scoop up the ink-soaked ball in a spoon and place it against the slate, where it momentarily stuck before slowly rolling across the face, spelling out “M,” “e,” “n,” “e,” and so forth until the entire phrase was completed, at which point the ball dropped to the ground. Houdini then explained that he had done the whole thing through simple trickery and implored Conan Doyle to give up his spiritualist beliefs. Alas, he failed: not only did Conan Doyle continue to believe in mediums but he suspected that Houdini knowingly or unknowingly used his own supernatural gifts in the performance of his escape acts. Here is my question for the hive mind: How did Houdini do it? 
Magicians are of course famously reluctant to reveal how they do their tricks, and it’s not clear that Houdini showed the secret to Conan Doyle. (Perhaps that’s why Conan Doyle refused to be convinced.) Rather than ask a magician to break his professional code, I thought I would ask you readers to suggest how Houdini accomplished his “ghostly” slate writing. Here are my own uninformed guesses about elements of the trick, which still probably don’t quite cohere into a full explanation.
- My sense is that the slate hung from wherever it was placed by wires attached to its four corners so that it could swing freely but also hang level. I suspect that a marionette-like arrangement of those wires could in theory allow a ball to be rolled across the slate’s surface as required.
- Conan Doyle’s cutting into one of the cork balls to prove it had not been tampered with does not preclude his selected ball from being tampered with or replaced subsequently, when he is not looking.
- Sending Conan Doyle down the street to write his secret message would give him a sense of privacy, but of course it also would allow Houdini (and unknown cronies?) a chance to reset the slate and the balls as they chose.
- Any good pickpocket could probably remove that page with Conan Doyle’s message from his pocket, look at it and return it without him being the wiser.
That’s my best hypothesis about how Houdini did it. What’s yours? Update (added 3/25, 8:23 a.m.): Via Twitter, P. Kerim Friedman tells me that these two books (here and here) may hold the answer, although those explanations aren’t immediately available online. So I’d still like to hear your theories. How Did Houdini Trick Conan Doyle? by Retort, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.
From The Collaborative International Dictionary of English v.0.48:
Nectarine \Nec`tar*ine"\ (n[e^]k`t[~e]r*[=e]n"), n. [Cf. F. nectarine. See Nectar.] (Bot.) A smooth-skinned variety of peach. [1913 Webster]
Spanish nectarine, the plumlike fruit of the West Indian tree Chrysobalanus Icaco; -- also called cocoa plum. It is made into a sweet conserve which is largely exported from Cuba. [1913 Webster]
Wondering what to do with that dead cow in your backyard or those dead batteries in your flashlight? The Center for the Development of Recycling—a partnership between San Jose State University’s Environmental Studies Department and Santa Clara County—has an online tool to guide you on how and where to recycle anything from the dead cow and batteries to painted or stained wood waste and everything in between. Nov. 15 is America Recycles Day, so take part in it and use the tool if you find one of the few materials the City of Milpitas does not accept. The tool is easy to use: Scroll the list of materials to find the one you wish to recycle, click on it and it will move to a box on the right. When you have your list complete, fill in your ZIP code and click on the “find results” button. It will return a list of companies that accept the specific material, the company's location and/or phone number, website links, whether the material can be picked up or dropped off, and other useful information. Batteries, for example, are accepted at several places in and around Milpitas. That dead cow is a little trickier. Four places accept dead cows, and two of them have restrictions on the cow’s age. Recycling is something that can be embraced as a lifetime commitment. Think the three Rs—reduce, reuse and recycle. One way to reduce is to stop junk mail from being delivered to your home. The Bay Area Outreach Coalition offers a stop junk mail kit (see the attached PDF for more detail or visit www.stopjunkmail.org) to get rid of those annoying catalogs, ads, promotional materials and other unwanted mail that winds up in your recycle bin. America Recycles Day includes events such as the one at GreenMouse Recycling in San Jose, which is hosting a three-day recycle-athon between Nov. 15 and 17 with a goal of recycling 25,000 pounds of electronic waste. Recycle something with a screen, such as a laptop, monitor, television or cell phone, and receive a Starbucks gift card.
Then take that Starbucks gift card and a friend to the nearest store and take advantage of the 2-for-1 offer going on. It's a win-win!
How can we create more effective education systems that provide access and high-quality education for all students, that capitalize on emerging technologies for learning anywhere and anytime, and that are responsive to the 21st-century context of an interconnected global economy and rapidly emerging global society? That was an organizing question for more than a thousand people who gathered in Doha, Qatar, for the inaugural World Innovation Summit for Education (WISE) last November. Organized by the Qatar Foundation for Education, Science, and Community Development, the summit brought together key individuals from more than 120 countries on every continent. The summit’s goal was to examine the key challenges facing education around the globe and to promote visionary thinking about new education models. The summit was opened by Her Highness Sheikha Mozah Bint Nasser al-Missned, chair of the foundation, who said, “Today, we place great faith in the power of education to prepare world citizens for a peaceful and cooperative future and to prepare citizens of our individual nations for the cultural transformations that result from globalization.” The WISE Summit was designed to be a multidisciplinary platform to promote international dialogue about best practices, to recognize exceptional impact through an annual WISE awards program, and to promote innovation and educational engagement on a global scale. The summit builds on and extends the Qatar Foundation’s domestic efforts to modernize Qatar’s own education system, including its flagship Education City, where more than eight top-flight American universities offer degree courses to students from a host of countries. This article cannot deal with every issue on the summit agenda. What follows are a few examples of the challenges and innovations under discussion.
Access to Education
At the most fundamental level, the world has still not achieved the commitment made by the world’s governments to provide basic education for all children. While there has been remarkable progress worldwide in giving millions of children, including girls, access to primary education, 75 million children never receive any schooling, primarily girls and children in conflict zones. Governments usually provide basic education, but since many of these children are in “weak states,” much of the discussion focused on the role of nongovernmental schools and organizations in reaching the hardest to reach and on the need for schools to be declared “protected zones” in times of conflict.
Quality of Education
Although the size of education systems has increased enormously in recent decades, the quality has not. In higher education, therefore, summit participants addressed the role of accreditation systems in improving quality, making education more relevant in the dynamic global knowledge economy, and moving from measuring inputs to outputs. In elementary and secondary education, the growth of international assessments has increasingly led countries to compare themselves to emerging global standards of excellence and to examine how their education systems can emulate the effectiveness of the highest-performing education systems. In keeping with the summit’s emphasis on innovation, central themes included using technology to increase access, improve performance, and promote a new conceptualization of education. For example, in Brazil, a distance learning initiative transmits live classes via videoconference to 25,000 students in 700 secondary school classrooms throughout the remote reaches of the Amazon forest. In sub-Saharan Africa, a consortium of 15 universities in nine countries delivers open educational resources to almost a quarter of a million teachers, many of whom are not formally trained.
And in India, the provocative “hole in the wall” experiments, in which computers are installed in walls in slum areas and left for children to use unsupervised, show just how much children can learn on their own, without teachers, in “self-organized learning environments.” There was also lively debate about the potential educational uses of Twitter, videogames, and, especially, mobile phones, which now have four billion customers around the world (compared with 1.5 billion e-mail accounts). Given this huge penetration of mobile phones, could strong educational applications be developed so that poorer countries can leapfrog the lack of Internet access? Finally, there was widespread recognition that the content of education itself needs to change--that a global world requires a global education. Echoing Confucius’ sentiment that “It is better to take a journey of 10,000 li than to read 10,000 books,” there was agreement that more students need to study outside their own countries. Currently, only about 2% of higher education students do so. Migration and increasing diversity within countries also require that schools, from kindergarten on, teach tolerance and respect for other cultures and faiths. And the rapid pace of globalization is causing countries everywhere to recognize that their young people need to be more globally minded and to ask: What should all education systems teach their young people about other cultures, languages, and global challenges? How can education bring the world to students and students to the world? Overall, the summit made a convincing case that significant rethinking of education is needed. All nations share an interest in creating more effective education systems that provide access and high-quality education for all students, that capitalize on emerging technologies for learning anywhere and anytime, and that are responsive to the 21st-century context of an interconnected global economy and rapidly emerging global society. 
The WISE Initiative and annual summit were attempts to provoke some answers to the questions that concern all nations.
Author: Vivien Stewart. Originally published on pdkintl.org.
OLMOS CREEK (UVALDE COUNTY) OLMOS CREEK (Uvalde County). Olmos Creek rises (at 29°08' N, 100°04' W) in far southwestern Uvalde County two miles southeast of Dabney and runs south for twelve miles to its mouth (at 29°01' N, 100°04' W) on Muela Creek, fourteen miles northwest of La Pryor in northwestern Zavala County. The creek name is Spanish for "elm trees." The stream rises in flat to gently sloping terrain with local depressions or sinkholes; the land here is surfaced by very shallow to shallow and stony clays and loams that support scrub brush and grasses. The creek continues through low-rolling hills and prairie with deeper clayey and loamy soils that support chaparral, some mesquite, and grasses. Toward the mouth of Olmos Creek the terrain becomes low relief, surfaced by deep loams and clays that support pecans and other hardwoods and various grasses. The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article."OLMOS CREEK (UVALDE COUNTY)," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/rbo19), accessed June 19, 2013. Published by the Texas State Historical Association.
Philosophy East and West 46 (2):143-164 (1996)
Abstract: The origin, content, argumentative basis, practical implication, and influence of Mencius' views of mind-heart and human nature are discussed. While the differences between Confucius and Mencius are acknowledged, it is argued that Mencius' view that human nature is good is consistent with and is a further development of basic ideas in Confucius' thinking. The basis of Mencius' view is not empirical generalization but inner reflection and personal experience, which reveal a shared natural endowment in human beings with a transcendental source. In addition to a discussion of Mencius' views, the development of his ideas in the Sung and Ming and by contemporary Neo-Confucians is also considered.
Similar books and articles:
- James Behuniak Jr (2011). Naturalizing Mencius. Philosophy East and West 61 (3):492-515.
- David E. Soles (1999). The Nature and Grounds of Xunzi's Disagreement with Mencius. Asian Philosophy 9 (2):123-133.
- Qingping Liu (2001). Is Mencius' Doctrine of 'Commiseration' Tenable? Asian Philosophy 11 (2):73-84.
- James Behuniak (2002). Mencius on Becoming Human. Dissertation, University of Hawaii at Manoa.
- Liang Tao & Andrew Lambert (2009). Mencius and the Tradition of Articulating Human Nature in Terms of Growth. Frontiers of Philosophy in China 4 (2):180-197.
- Kwong-loi Shun (1997). Mencius and Early Chinese Thought. Stanford University Press.
- Zhang Pengwei, Guo Qiyong & Wang Bei (2008). New Insight Into Mencius' Theory of the Original Goodness in Human Nature. Frontiers of Philosophy in China 3 (1):27-38.
- Katrin Froese (2008). Organic Virtue: Reading Mencius with Rousseau. Asian Philosophy 18 (1):83-104.
- Tao Liang (2009). Mencius and the Tradition of Articulating Human Nature in Terms of Growth. Frontiers of Philosophy in China 4 (2):180-197.
GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments. It may also be used as a batch-oriented language. Octave has extensive tools for solving common numerical linear algebra problems and finding the roots of nonlinear equations, and it is extensible via user-defined functions written in Octave's own language, or by using dynamically loaded modules written in C, C++, Fortran, or other languages. Donations to support the software can be made at https://my.fsf.org/donate/working-together/octave.

Documentation: User manual included; User FAQ included; printed and online user manual available from http://www.network-theory.co.uk/octave/manual/

Related: PARI GP; Octave-data smoothing; Octave-information theory; Octave-linear algebra; GNU Oflox

This is a GNU package: octave, released on 31 May 2012.

License: GPLv2 (verified by Kelly Hopkins, 6 December 2011); GPLv3orlater (verified by Kelly Hopkins, 29 January 2010).

Leaders and contributors: John W. Eaton (Maintainer).

Resources and communication: Developer VCS repository webview: http://www.octave.org/hg/octave. Required to build: GNU make, a recent version of g++, libstdc++, fortran.

This entry (in part or in whole) was last reviewed on 4 February 2012. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the page “GNU Free Documentation License”. The copyright and license notices on this page only apply to the text on this page. Any software described in this text has its own copyright notice and license, which can usually be found in the distribution itself.
People vary in their reactions to mosquito bites. Most people develop itchy, raised bumps on the skin that last several days. No treatment is necessary, but calamine lotion or over-the-counter hydrocortisone cream can reduce itching. A few people have a significant allergy to mosquito bites. The bites can result in what’s called a large local reaction: swelling, blistering, itching, and pain affecting a wide area of the body (such as an entire arm or leg). Oral antihistamines like cetirizine (Zyrtec), diphenhydramine (Benadryl), or hydroxyzine (Atarax, Vistaril) can help ease itching. Topical hydrocortisone may also help. Rarely, people with a severe allergy to mosquito bites develop anaphylaxis, a whole-body life-threatening allergic reaction. Symptoms of anaphylaxis include: • Itching or rash, especially hives, in areas of skin away from the bite. • Hoarseness or shortness of breath. Anaphylaxis requires emergency medical attention. People who have had anaphylaxis-like symptoms previously should always have injectable epinephrine (an Epi-Pen) nearby. This answer should not be considered medical advice and should not take the place of a doctor’s visit. Please see the bottom of the page for more information or visit our Terms and Conditions.
Some people love them and some people hate them, but whatever your tastes you should probably know a little about the vignette effect in photography. A vignette photo has edges that fade to either white or black, typically gradually, although it can be used dramatically as well. This effect can be created in camera with certain lenses that are known to produce it, sometimes undesirably. Vignettes can also be created in the darkroom during the printing process. However, in this day and age the most common way to create a vignette is during the post-processing phase in a photo editing program such as Photoshop. In spite of our very wide field of view, our eyes don’t see with perfect focus at all points in a scene. We see the center of a scene with ideal light sensitivity and perfect focus, and then focus and brightness fall off increasingly toward the periphery. Therefore, a vignette more or less mimics the way our eyes actually see a scene. Although a photograph with precise focus and exposure throughout may be desirable in some scenarios, it can also seem boring, artificial, and even contrived in others. Vignetting can act as a way to frame the intended subject in a photo in either a subtle or dramatic way and really make it stand out to great creative effect. By causing the periphery to gradually fade away, not only will the photo be more like actual vision, it can immediately draw the eye to the main subject and away from unimportant elements in the background or periphery. It can also be used as a finishing touch on a photo once the exposure and composition are perfect. Typically a vignette, as previously mentioned, is used in a gradual or subtle way, since our eyes don’t actually make the borders of our vision very dark and fuzzy. When used in this way, a vignette will generally not be consciously detectable by the untrained eye.
However, there are times when a more dramatic vignette is a useful effect, such as in a black and white landscape to add a touch of drama or in an intense portrait to create a darker mood. Older, cheaper cameras such as Holgas often had poor optics, so many old photos have vignetting that was created in-camera unintentionally. This vintage look, which was previously undesirable, is now often intentionally recreated during post-processing with software like Photoshop.

How to Create a Vignette in Photoshop

There are a few ways to create a vignette in Photoshop, but there are a few bits of information that are important to know first. A vignette should always be the very last step in the editing process. Deciding to crop the image after adding the vignette, for example, can spoil the end result, since a vignette is used to highlight the subject. If you are not completely sure that you are absolutely done editing your photo in other ways, you can duplicate the original image layer so that you can go back to it if needed. The easiest way to create a vignette in Photoshop is to first create a new layer and select the elliptical marquee tool. Create a circle or oval shape in the desired area on your photo. Next, feather your selection anywhere from 50 to 80 pixels by going to Select>Modify>Feather. You can also choose the rectangular marquee or even use the lasso tool to create a custom and creative shape. At this point, go to Select>Inverse and fill the inverted selection with black or white, depending on the look you ultimately want to achieve. If it is too dark at this point, you can easily lighten it by lowering the layer opacity. Another easy and quick way to add vignetting to your photos in Photoshop involves the use of the Lens Correction filter. First you need to open your photo and, as previously mentioned, be sure to finish all other photo edits prior to adding the vignetting.
Open the Lens Correction filter via the following path: Filter>Distort>Lens Correction. You will see various options within the Lens Correction menu, but we are going to focus on the Vignette sliders. The Amount slider makes the vignette darker or lighter, and the Midpoint slider adjusts the mid-point of the vignette. The first method of vignetting in Photoshop is definitely preferable if you want to place the vignetting in a custom place on the photo for creative effect, meaning not oval and dead center. However, the second method is great for a very precise vignette, much like what would be created naturally with a lens.

A Quick Note About Opinions on Vignettes

One thing to be aware of is that there are photographers who “object” to the use of vignetting. Many see it as a form of cheating and as a way to make a bad photograph better. They argue that the composition itself should be what creates all of the emphasis on the subject, and not a post-processing technique. However, the viewer of the photo, unless they possess a critical and trained eye, really doesn’t care about why they like a photo. They just do. Ultimately, photography is indeed an art form, and artists are as varied as art itself. The most important thing is to produce the images that you want to produce. Rachael Towne is a photographer, digital artist and creator of photoluminary.com
Born: 15 Jun 1880. Died: 5 Jan 1947. Contributor: C. Peter Chen

Osami Nagano was born in 1880. As a naval officer, he established a strong record in administration. He studied in the United States during the 1910s and was naval attaché to the US between 1920 and 1923. In Dec 1935, he represented Japan as the chief delegate at the Second London Naval Conference. Between May 1936 and Jun 1937 he was the Navy Minister under Prime Minister Koki Hirota. In Dec 1937, he was named commander in chief of the First Fleet and the Combined Fleet. From Apr 1941 to Feb 1944, he was chief of the Navy General Staff. Although he was a proponent of a southward expansion, he was against a war with the United States; he deduced that even if Japan were to take over British and Dutch colonies in Asia, the isolationist United States would still not enter the war against Japan, leaving Japan to establish her empire without interference from the industrial giant. However, as he had done a number of times before, he entrusted too much of the strategic planning to Isoroku Yamamoto and his staff officers, essentially giving away control of the entire navy to the Combined Fleet. Yamamoto, although not a supporter of a war against the United States either, dutifully accepted his orders and led the Japanese Navy into the great attack at Pearl Harbor that brought the US into the war. Nagano essentially lost real control of the navy after Pearl Harbor; Yamamoto had, in effect, gone as far as to tell Nagano "not to interfere too much and thus set a bad precedent in the navy." From Feb 1944 to the end of the war, he was Emperor Showa's personal naval advisor. After the war, Nagano was among the highest-ranked officers interrogated by United States naval officers. He was described as "thoroughly cooperative", "keenly alert", "intelligent", and "anxious to develop American friendship". He was subsequently tried as a war criminal, but died in 1947 before the trial ended.
Sources: Interrogation of Japanese Officials, Nihon Kaigun, Shattered Sword.

Osami Nagano Timeline
2 Feb 1937: Osami Nagano was named the commander-in-chief of the Japanese Navy Combined Fleet.

» Third London Naval Conference
» Tokyo Trial and Other Trials Against Japan
» Interrogation Nav 80, Fleet Admiral Osami Nagano
The eye of a hurricane passes over Grand Bahama Island in a direction 60.0° north of west with a speed of 41.0 km/h. Three hours later, the course of the hurricane suddenly shifts due north, and its speed slows to 25.0 km/h. How far from Grand Bahama is the hurricane 4.50 h after it passes over the island?
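The problem above is plain vector addition: resolve the first leg into west and north components, add the second (due-north) leg, and take the magnitude. A short Python check of the arithmetic (east is +x, north is +y):

```python
import math

# Leg 1: first 3.00 h at 41.0 km/h, heading 60.0 degrees north of west.
d1 = 41.0 * 3.00
x1 = -d1 * math.cos(math.radians(60.0))  # west component (negative x)
y1 = d1 * math.sin(math.radians(60.0))   # north component

# Leg 2: remaining 1.50 h of the 4.50 h, due north at 25.0 km/h.
y2 = 25.0 * 1.50

x, y = x1, y1 + y2
distance = math.hypot(x, y)              # magnitude of total displacement
print(f"{distance:.0f} km")              # prints "157 km"
```

The first leg contributes (-61.5, 106.5) km, the second adds 37.5 km of northing, and the magnitude of (-61.5, 144.0) km is about 157 km.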
This patient support community is for discussions relating to general health issues, adolescents, babies, child health, eating disorders, fitness, immunizations and vaccines, infectious diseases, and senior health. In October 2010 I was told by my doctor that I had a vitamin D deficiency. My level was 13. My hair was falling out, I was having a lot of back pain, trouble sleeping at night, and I was experiencing memory and concentration problems, among other things. Since then I have been taking prescription-strength vitamin D at 100,000 IU's per week. I've been on this high dosage for nine months. At one point my vitamin D level went up to 32. However, they just tested me again (July 26, 2011) and my level has dropped to 26. My doctor now wants me to double my dose to 200,000 IU's for the next three months. Can you tell me why my levels are not going up, and if I should be asking for more blood work? If I need more blood work, what should I request? Any information that you can provide me with will be much appreciated. Thank you in advance. Did your doctor administer a 25-hydroxyvitamin D test to determine if you have optimal levels of vitamin D in your blood? In order to get the proper amount of vitamin D your body requires, you need to find a healthy balance of sunlight while still reducing your risk of skin cancer, particularly melanoma. Wear sunscreen every time you are in the sun and keep your sun exposure to 20 minutes at a time. Low vitamin D levels occur because of low intake of vitamin D, too little exposure to sunlight, or as side effects of some diseases. Toxic substances, harmful chemicals, or side effects of certain medicines can also cause the level of vitamin D to drop below normal. It is a rare occurrence that vitamin D levels are low because of a hereditary disease. Following are some of the causes of low levels of vitamin D.
Lack of Exposure to Sunlight
The layer under the skin produces vitamin D using sunlight. People in certain geographical locations, like those in the northern hemisphere, have living conditions such that their exposure to sunlight is minimal. Aged people and small babies often do not get enough exposure to sunlight, and the aging skin of elderly people needs more time to produce vitamin D. People with lupus are sensitive to sunlight, so they are advised not to stay out under direct sunlight for a long time. Under all these conditions, the factor responsible for low vitamin D is absence of sunlight. There are very few food substances that contain naturally occurring vitamin D. Some of the food items that provide us vitamin D are beef liver, the fleshy part of fish, egg yolk, fish oils, and cheese. Therefore, vegetarians are more prone to low vitamin D levels. Another very important element that is usually overlooked by conventional doctors is that you need to take magnesium to get the vitamin D to work properly. Usually, just a normal amount of magnesium is all that is required to get the vitamin D to synthesize properly. I strongly recommend going to the Vitamin D Council's website to find more information about the magnesium and other supplement connection to vitamin D absorption. Yes, there were other minerals mentioned, but magnesium seems to be the most important one. I urge you to also sign up for their newsletters. Vitamin D deficiency is not funny, because it can kill. It almost killed me. Mine was only 8 when I started out. In addition to that last post: the amount you say you are taking, is it really that high, or did you mean 1,000 or 2,000 IU? That is an incredibly high amount and seems out of the ballpark. Our experience is that we were low, but not as low as you are; we were around 20 to 30. The recommended daily intake is about 2,000 I.U.,
but you have to test your blood a few times to get it right, and it takes some months for the level to go up. I tend to absorb it well and only need 1,000 I.U. for my vitamin D to be normal, but I was tested every 3 months until the level was at the right amount. My husband has to take 4,000 I.U. in order to be normal. Yes, sunlight does help, but you need to get out there (with sunscreen on), and it didn't go up for me after walking an hour a day in the sun. So, it depends on your diet and ability to absorb. As the last post said, you need magnesium to help. The Content on this Site is presented in a summary fashion, and is intended to be used for educational and entertainment purposes only. It is not intended to be and should not be interpreted as medical advice or a diagnosis of any health or fitness problem, condition or disease; or a recommendation for a specific test, doctor, care provider, procedure, treatment plan, product, or course of action. Med Help International, Inc. is not a medical or healthcare provider and your use of this Site does not create a doctor / patient relationship. We disclaim all responsibility for the professional qualifications and licensing of, and services provided by, any physician or other health providers posting on or otherwise referred to on this Site and/or any Third Party Site. Never disregard the medical advice of your physician or health professional, or delay in seeking such advice, because of something you read on this Site. We offer this Site AS IS and without any warranties. By using this Site you agree to the following Terms and Conditions. If you think you may have a medical emergency, call your physician or 911 immediately.
Our Fifth Grade students have just held their Invention Convention, a culminating experience for a library research unit. In addition to learning about famous inventors and the invention process using books and online resources, each student had the opportunity to design an invention. Working individually or with partners, the students developed original ideas for new products, some of which help to solve everyday problems. This creative group of students worked enthusiastically, brainstorming for ideas, developing models and prototypes, and creating ads to market their inventions. Some groups even wrote “commercials” to go along with their inventions. The students presented their inventions to an appreciative audience at school, and all were impressed with the level of creative enthusiasm. The video below highlights the students talking about their inventions and features some of their commercials. Enjoy watching and please feel free to leave a comment on this post!
CTComms sends on average 2 million emails monthly on behalf of over 125 different charities and not for profits. Take the complexity of technology and stir in the complexity of the legal system and what do you get? Software licenses! If you've ever attempted to read one you know how true this is, but you have to know a little about software licensing even if you can't parse all of the fine print. By: Chris Peters March 10, 2009 A software license is an agreement between you and the owner of a program which lets you perform certain activities which would otherwise constitute an infringement under copyright law. The software license usually answers questions such as: The price of the software and the licensing fees, if any, are sometimes discussed in the licensing agreement, but usually it's described elsewhere. If you read the definitions below and you're still scratching your head, check out Categories of Free and Non-Free Software which includes a helpful diagram. Free vs Proprietary: When you hear the phrase "free software" or "free software license," "free" is referring to your rights and permissions ("free as in freedom" or "free as in free speech"). In other words, a free software license gives you more rights than a proprietary license. You can usually copy, modify, and redistribute free software without paying a fee or obtaining permission from the developers and distributors. In most cases "free software" won't cost you anything, but that's not always the case – in this instance the word free is making no assertion whatsoever about the price of the software. Proprietary software puts more restrictions and limits on your legal permission to copy, modify, and distribute the program. Free, Open-Source or FOSS? In everyday conversation, there's not much difference between "free software," "open source software," and "FOSS (Free and Open-Source Software)." 
In other words, you'll hear these terms used interchangeably, and the proponents of free software and the supporters of open-source software agree with one another on most issues. However, the official definition of free software differs somewhat from the official definition of open-source software, and the philosophies underlying those definitions differ as well. For a short description of the difference, read Live and Let License. For a longer discussion from the "free software" side, read Why Open Source Misses the Point of Free Software. For the "open-source" perspective, read Why Free Software is Too Ambiguous. Public domain and copyleft. These terms refer to different categories of free, unrestricted licensing. A copyleft license allows you all the freedoms of a free software license, but adds one restriction. Under a copyleft license, you have to release any modifications under the same terms as the original software. In effect, this blocks companies and developers who want to alter free software and then make their altered version proprietary. In practice, almost all free and open-source software is also copylefted. However, technically you can release "free software" that isn't copylefted. For example, if you developed software and released it under a "public domain" license, it would qualify as free software, but it isn't copyleft. In effect, when you release something into the public domain, you give up all copyrights and rights of ownership. Shareware and freeware. These terms don't really refer to licensing, and they're confusing in light of the discussion of free software above. Freeware refers to software (usually small utilities at sites such as Tucows.com) that you can download and install without paying. However, you don't have the right to view the source code, and you may not have the right to copy and redistribute the software. In other words, freeware is proprietary software. Shareware is even more restrictive. 
In effect, shareware is trial software. You can use it for a limited amount of time (usually 30 or 60 days) and then you're expected to pay to continue using it. End User Licensing Agreement (EULA). When you acquire software yourself, directly from a vendor or retailer, or directly from the vendor's Web site, you usually have to indicate by clicking a box that you accept the licensing terms. This "click-through" agreement that no one ever reads is commonly known as a EULA. If you negotiate a large purchase of software with a company, and you sign a contract to seal the agreement, that contract usually replaces or supersedes the EULA. Most major vendors of proprietary software offer some type of bulk purchasing and volume licensing mechanism. The terms vary widely, but if you order enough software to qualify, the benefits in terms of cost and convenience are significant. Also, not-for-profits sometimes qualify for it with very small initial purchases. Some of the benefits of volume licensing include: Lower cost. As with most products, software costs less when you buy more of it. Ease of installation. Without volume licenses, you usually have to enter a separate activation code (also known as a product key or license key) for each installed copy of the program. On the other hand, volume licenses provide you with a single, organisation-wide activation code, which makes it much easier to find when you need to reinstall the software. Easier tracking of licenses. Keeping track of how many licenses you own, and how many copies you've actually installed, is a tedious, difficult task. Many volume licensing programs provide an online account which is automatically updated when you obtain or activate a copy of that company's software. These accounts can also coordinate licensing across multiple offices within your organisation. 
To learn more about volume licensing from a particular vendor, check out some of the resources below: Qualified not-for-profits and libraries can receive donated volume licenses for Microsoft products through TechSoup. For more information, check out our introduction to the Microsoft Software Donation Program, and the Microsoft Software Donation Program FAQ. For general information about the volume licensing of Microsoft software, see Volume Licensing Overview. If you get Microsoft software from TechSoup or other software distributors who work with not-for-profits, you may need to go to the eOpen Web site to locate your Volume license keys. For more information, check out the TechSoup Donation Recipient's Guide to the Microsoft eOpen Web Site. Always check TechSoup Stock first to see if there's a volume licensing donation program for the software you're interested in. If TechSoup doesn't offer that product or if you need more copies than you can find at TechSoup, search for "volume licensing not-for-profits software" or just "not-for-profits software." For example, when we have an inventory of Adobe products, qualifying and eligible not-for-profits can obtain four individual products or one copy of Creative Suite 4 through TechSoup. If we're out of stock, or you've used up your annual Adobe donation, you can also check TechSoup's special Adobe donation program and also Adobe Solutions for Nonprofits for other discounts available to not-for-profits. For more software-hunting tips, see A Quick Guide to Discounted Software Programs. Pay close attention to the options and licensing requirements when you acquire server-based software. You might need two different types of license – one for the server software itself, and a set of licenses for all the "clients" accessing the software. 
Depending on the vendor and the licensing scenario, "client" can refer either to the end users themselves (for example, employees, contractors, clients, and anyone else who uses the software in question) or their computing devices (for example, laptops, desktop computers, smartphones, PDAs, etc.). We'll focus on Microsoft server products, but similar issues can arise with other server applications. Over the years, Microsoft has released hundreds of server-based applications, and the licensing terms are slightly different for each one. Fortunately, there are common license types and licensing structures across different products. In other words, while a User CAL (Client Access License) for Windows Server is distinct from a User CAL for SharePoint Server, the underlying terms and rights are very similar. The TechSoup product pages for Microsoft software do a good job of describing the differences between products, so we'll focus on the common threads in this article. Moreover, Microsoft often lets you license a single server application in more than one way, depending on the needs of your organisation. This allows you the flexibility to choose the licenses that best reflect your organisation's usage patterns and thereby cost you the least amount of money. For example, for Windows Server and other products you can acquire licenses on a per-user basis (for example, User CALs) or per-device basis (for example, Device CALs). The license required to install and run most server applications usually comes bundled with the software itself. So you can install and run most applications "out of the box," as long as you have the right number of client licenses (see the section below for more on that). However, when you're running certain server products on a computer with multiple processors, you may need to get additional licenses. For example, if you run Windows Server 2008 DataCenter edition on a server with two processors, you need a separate license for each processor. 
SQL Server 2008 works the same way. This type of license is referred to as a processor license. Generally you don't need client licenses for any application that's licensed this way. Client Licenses for Internal Users Many Microsoft products, including Windows Server 2003 and Windows Server 2008, require client access licenses for all authenticated internal users (for example, employees, contractors, volunteers, etc.). On the other hand, SQL Server 2008 and other products don't require any client licenses. Read the product description at CTXchange if you're looking for the details about licensing a particular application. User CALs: User CALs allow each user access to all the instances of a particular server product in an organisation, no matter which device they use to gain access. In other words, if you run five copies of Windows Server 2008 on five separate servers, you only need one User CAL for each person in your organisation who access those servers (or any software installed on those servers), whether they access a single server, all five servers, or some number in between. Each user with a single CAL assigned to them can access the server software from as many devices as they want (for example, desktop computers, laptops, smartphones, etc.). User CALs are a popular licensing option. Device CALs: Device CALs allow access to all instances of a particular server application from a single device (for example, a desktop computer, a laptop, etc.) in your organisation. Device CALs only make sense when multiple employees use the same computer. For example, in 24-hour call centres different employees on different shifts often use the same machine, so Device CALs make sense in this situation. Choosing a licensing mode for your Windows Server CALs: With Windows Server 2003 and Windows Server 2008, you use a CAL (either a User CAL or a Device CAL) in one of two licensing modes: per seat or per server. 
You make this decision when you're installing your Windows Server products, not when you acquire the CALs. The CALs themselves don't have any mode designation, so you can use either a User CAL or a Device CAL in either mode. Per seat mode is the default mode, and the one used most frequently. The description of User CALs and Device CALs above describes the typical per seat mode. In "per server" mode, Windows treats each license as a "simultaneous connection." In other words, if you have 40 CALs, Windows will let 40 authenticated users have access. The 41st user will be denied access. However, in per server mode, each CAL is tied to a particular instance of Windows Server, and you have to acquire a new set of licenses for each new server you build that runs Windows. Therefore, per server mode works for some small organisations with one or two servers and limited access requirements. You don't "install" client licenses the way you install software. There are ways to automate the tracking of software licenses indirectly, but the server software can't refuse access to a user or device on licensing grounds. The licenses don't leave any "digital footprint" that the server software can read. An exception to this occurs when you license Windows Server in per server mode. In this case, if you have 50 licenses, the 51st authenticated user will be denied access (though anonymous users can still access services). Some key points to remember about client licensing: The licensing scenarios described in this section arise less frequently, and are too complex to cover completely in this article, so they're described briefly below along with more comprehensive resources. You don't need client licenses for anonymous, unauthenticated external users. In other words, if someone accesses your Web site, and that site runs on Internet Information Server (IIS), Microsoft's Web serving software, you don't need a client license for any of those anonymous users. 
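Per server mode, as described above, behaves like a concurrent-connection counter: N CALs admit N simultaneous authenticated users, and the N+1th is turned away. The sketch below models that counting logic only; the class and names are hypothetical, and this is not how Windows actually enforces licensing.

```python
# Hypothetical model of "per server" CAL counting: each CAL is a slot for
# one simultaneous authenticated connection on a single server.

class PerServerLicensePool:
    def __init__(self, cal_count):
        self.cal_count = cal_count
        self.active = set()          # users currently holding a slot

    def connect(self, user):
        """Admit `user` if a license slot is free; return True on success."""
        if user in self.active:
            return True              # already connected; no new slot needed
        if len(self.active) >= self.cal_count:
            return False             # all CALs in use: deny access
        self.active.add(user)
        return True

    def disconnect(self, user):
        self.active.discard(user)    # frees the slot for someone else

pool = PerServerLicensePool(cal_count=40)
results = [pool.connect(f"user{i}") for i in range(41)]
print(results[39], results[40])      # prints "True False": 41st user denied
```

This also illustrates why per server mode only suits small deployments: the pool is tied to one server, so a second server would need its own entirely separate set of licenses, exactly as the text describes.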
If you have any authenticated external users who access services on your Windows-based servers, you can obtain CALs to cover their licensing requirements. However, the External Connector License (ECL) is a second option in this scenario. The ECL covers all use by authenticated external users, but it's a lot more expensive than a CAL, so only get one if you'll have a lot of external users. For example, even if you get your licenses through the CTXchange donation program, an ECL for Windows Server 2008 has an £76 administrative fee, while a User CAL for Windows Server 2008 carries a £1 admin fee. If only a handful of external users access your Windows servers, you're better off acquiring User CALs. Also, an ECL only applies to external users and devices. In other words, if you have an ECL, you still have to get a CAL for all employees and contractors. Even though Terminal Services (TS) is built into Windows Server 2003 and 2008, you need to get a separate TS CAL for each client (i.e. each user or each device) that will access Terminal Services in your organisation. This TS license is in addition to your Windows Server CALs. Microsoft's System Centre products (a line of enterprise-level administrative software packages) use a special type of license known as a management license (ML). Applications that use this type of licensing include System Center Configuration Manager 2007 and System Center Operations Manager 2007. Any desktop or workstation managed by one of these applications needs a client management license. Any server managed by one of these applications requires a server management license, and there are two types of server management licenses – standard and enterprise. You need one or the other but not both. There are also special licensing requirements if you're managing virtual instances of Windows operating systems. For more information, see TechSoup's Guide to System Center Products and Licensing and Microsoft's white paper on Systems Center licensing. 
Some Microsoft server products have two client licensing modes, standard and enterprise. As you might imagine, an Enterprise CAL grants access to more advanced features of a product. Furthermore, with some products, such as Microsoft Exchange, the licenses are additive. In other words, a user needs both a Standard CAL and an Enterprise CAL in order to access the advanced features. See Exchange Server 2007 Editions and Client Access Licenses for more information.

With virtualisation technologies, multiple operating systems can run simultaneously on a single physical server. Every time you install a Microsoft application, whether on a physical hardware system or a virtual hardware system, you create an "instance" of that application. The number of instances of a particular application that you can run using a single license varies from product to product. For more information, see the Volume Licensing Briefs, Microsoft Licensing for Virtualization and the Windows Server Virtualization Calculator. For TechSoup Stock products, see the product description for more information.

There are a lot of nuances to Microsoft licensing, and also a lot of excellent resources to help you understand different scenarios.

About the Author: Chris is a former technology writer and technology analyst for TechSoup for Libraries, which aims to provide IT management guidance to libraries. His previous experience includes working at Washington State Library as a technology consultant and technology trainer, and at the Bill and Melinda Gates Foundation as a technology trainer and tech support analyst. He received his M.L.S. from the University of Michigan in 1997. Originally posted here. Copyright © 2009 CompuMentor. This work is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.
Swine flu, these days, may not be as rampant as last year, when over 787 cases were detected in the State and nearly 55 persons died because of complications arising out of it. However, sporadic cases continue to be reported from districts and there is always a possibility of the virus making a comeback next winter, physicians maintain. To be prepared for this eventuality, authorities on Thursday launched a swine flu vaccination drive among healthcare workers, who are, when it comes to fighting the H1N1 virus, the first line of defence. All government hospitals, including those in the districts, have been given sufficient doses of the vaccine for inoculation of healthcare workers. This is the first phase of the swine flu vaccination drive and, according to authorities, the second phase, covering high-risk groups among the general public, will be taken up after an indigenous vaccine is developed. The present vaccine being administered to health workers has been imported by the pharma giant Sanofi Pasteur. Healthcare workers from private hospitals who are willing to get inoculated can also be administered the vaccine at any of the government hospitals, authorities said. The Ministry of Health and Family Welfare has supplied close to 80,000 doses of swine flu vaccine for inoculation of health workers in the State. In all, the Centre has procured nearly 15 lakh doses of swine flu vaccine for distribution all over the country. The swine flu vaccine will provide immunity for only one year because of the possibility of the virus changing its ‘nature’ and turning into a more virulent form in the next few years. “In the second phase of vaccination, which could start in three to four months, pregnant women and children will be our prime targets. Because of their compromised immunity, they are susceptible to this virus and hence they will be given the indigenous vaccine,” said State swine flu coordinator Dr. K. Subhakar.
Sepia is a dark brown-grey color, named after the rich brown pigment derived from the ink sac of the common cuttlefish. The word sepia is Greek for "cuttlefish".

Sepia in human culture

In the last quarter of the 18th century, Professor Jacob Seydelmann of Dresden developed a process to extract and produce a more concentrated form of the pigment for use in watercolors and oil paints. It has been suggested that the actual skin color of most black people would be most accurately represented by the color sepia. There is a magazine for African-Americans called Sepia, which was started in 1947. Sepia ink was commonly used as writing ink in classical times. Sepia tones are used in photography; the hue resembles the effect of aging in old photographs and of photographs chemically treated for archival purposes, an effect sometimes created on purpose. Many digital cameras include a sepia tone effect as well.
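The sepia effect offered by digital cameras and photo editors is typically a fixed per-pixel colour transform. A minimal sketch, using one widely circulated weight matrix (an approximation chosen for illustration, not a standard defined anywhere):

```java
// Sketch of a sepia-tone transform applied per pixel. The 3x3 weight
// matrix below is a commonly used approximation; different software
// uses slightly different weights.
public class SepiaTone {
    // Returns {r, g, b} after applying the sepia weights, clamped to 0..255.
    public static int[] apply(int r, int g, int b) {
        int tr = (int) Math.min(255, 0.393 * r + 0.769 * g + 0.189 * b);
        int tg = (int) Math.min(255, 0.349 * r + 0.686 * g + 0.168 * b);
        int tb = (int) Math.min(255, 0.272 * r + 0.534 * g + 0.131 * b);
        return new int[] { tr, tg, tb };
    }

    public static void main(String[] args) {
        int[] p = apply(255, 255, 255); // pure white shifts toward a warm tone
        System.out.println(p[0] + "," + p[1] + "," + p[2]); // 255,255,238
    }
}
```

Blue is attenuated most, which is what pushes an image toward the warm brown tones of an aged print.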
Mercury findings raise new questions

New data from NASA's Messenger spacecraft has surprised scientists, showing the planet Mercury has some of the most unusual internal dynamics ever seen. The findings, which will appear in the journal Science, mean the planet closest to the Sun has evolved differently from the other terrestrial planets in the solar system. After its first full year in orbit around Mercury, Messenger has returned a detailed picture of the planet's northern hemisphere suggesting a deep reservoir of high-density material exists below a thin crust surface. Using the Messenger data, Dr Maria Zuber, of the Massachusetts Institute of Technology, and colleagues created a detailed elevation model showing the planet's northern hemisphere is far flatter than that of Mars or the Moon. They also found extensive lowlands and a vast northern volcanic plain. The researchers were also surprised to discover that the floors of several craters are tilted. For example, the 1,500-kilometre-wide Caloris impact crater is tilted to the extent that parts of its floor are now higher than the rim.

Unlike the Moon

"Prior to Messenger's observations, many scientists believed Mercury was much like the Moon," says Zuber. "We thought it cooled off very early in solar system history, and has been a dead planet throughout most of its evolution." Now Zuber and colleagues say there's compelling evidence that Mercury must have sustained intense geophysical activity for most of its history. Messenger has also provided scientists with the first measurements of Mercury's gravity field, showing the planet's crust is thicker at low latitudes and thinner toward the northern polar region. They believe Mercury's outer shell is denser than previously thought, indicating a deep layer of iron sulfide below the surface. The data also suggests Mercury has a huge iron-rich liquid outer core and perhaps a solid inner core, together comprising about 85 per cent of the planet's radius.
By comparison, Earth's core is about half our planet's radius. The data means Mercury's mantle and crust occupy only the outer 15 per cent or so of the planet's radius, giving it a different internal structure from the other terrestrial planets.

Strange and interesting world

Planetary scientist Dr Craig O'Neill from Sydney's Macquarie University says the new data shows Mercury is a far more interesting planet than previously thought. "It seems the internal dynamics of Mercury is doing a lot more to the planet's crust than we gave it credit for," says O'Neill. "The tilted crater floors are interesting and we don't really know what would have caused this." According to O'Neill, the discovery of what may be a liquid core could also explain the magnetic field detected around Mercury. "We used to think it was just residual traces of magnetism in the rocks," says O'Neill. "But thanks to Messenger we now know there's something going on in Mercury's core."
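The radius figures translate into volume with a cube law, which is what makes the comparison so stark. A quick check of the arithmetic:

```java
// Worked arithmetic behind the radius comparison: because volume scales
// as the cube of radius, a core spanning 85% of the radius fills far
// more of the planet than a core spanning 50% of the radius.
public class CoreVolume {
    // Volume fraction of a sphere occupied by a concentric core whose
    // radius is `fraction` of the planet's radius.
    public static double coreVolumeFraction(double fraction) {
        return Math.pow(fraction, 3);
    }

    public static void main(String[] args) {
        System.out.printf("Mercury core: %.1f%% of volume%n",
                100 * coreVolumeFraction(0.85)); // about 61.4%
        System.out.printf("Earth core:   %.1f%% of volume%n",
                100 * coreVolumeFraction(0.50)); // 12.5%
    }
}
```

A core spanning 85 per cent of the radius fills about 61 per cent of the planet's volume, against roughly 12.5 per cent for Earth's half-radius core, leaving Mercury's mantle and crust a comparatively thin shell.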
The aim of the project - which is in the style of a Wikipedia - is to make the large volume of information that exists on children's rights more accessible, to highlight persistent violations and inspire collective action. Much of the information in the new Wiki is already available on the CRIN website but might not be easily retrievable. See the Wiki here: http://wiki.crin.org/mediawiki/index.php?title=Main_Page We are launching the Wiki with an initial batch of 41 country pages, with more to follow. They are: Afghanistan, Angola, Argentina, Bahrain, Belarus, Belgium, Burkina Faso, Burundi, Cambodia, Cameroon, Colombia, Costa Rica, Czech Republic, Denmark, Ecuador, Egypt, El Salvador, Finland, Grenada, Guatemala, Japan, Lao PDR, Macedonia (former Yugoslav Republic of), Mongolia, Montenegro, New Zealand, Nicaragua, Nigeria, Norway, Pakistan, Paraguay, Serbia, Singapore, Spain, Sri Lanka, Sudan, Tajikistan, Tunisia, Turkey, Ukraine and Yemen. The Wiki is a web-based, multi-lingual and interactive project - this means we need your input to ensure the pages are kept up to date. Find out how to contribute below. Please note that this project is still being tested; if you have comments or suggestions, please email them to email@example.com. Why are we doing this? The purpose of making all information about children's rights available in one place is to build a clearer picture of some of the repeated violations of children's rights in a given country. The eventual goal will be to try to match these violations with possible avenues of redress. We have found that few children's rights advocates are aware of or make use of the full range of opportunities within the UN and regional human rights mechanisms to pursue children's rights advocacy. Yet many of the bodies which are not specifically child rights-focused, such as the UN Special Procedures and the Universal Periodic Review, also issue recommendations on children's rights.
The Wiki therefore brings together all recommendations and decisions on children's rights made by these bodies, including courts, to make it easy for advocates to make the most of all available options. What is in the Wiki? Each country has its own homepage with the following sections: In addition, the Wiki contains state-by-state information on ratifications of international human rights treaties, communications/complaints mechanisms and inquiry procedures. On the Wiki, read: Our eventual aim is to make all information about children's rights available in English as well as the official language of the country concerned. Parts of the Wiki which exist in other languages are currently linked to from the English version. If you are planning to translate any sections of the Wiki into your language or are available to volunteer your time to help with this, please contact CRIN at firstname.lastname@example.org
A jet owned by the German Air Research Center, which is equipped with devices and sensors to analyze the ash emitted by the volcano under Eyjafjallajökull glacier in Iceland, arrived in the country yesterday. “The jet can tell us about the distribution of the ash and it can, to a certain degree, fly into the ash cloud, although it cannot be exposed to too much ash,” Haraldur Ólafsson, professor in meteorology at the University of Iceland, told Morgunbladid. On their way to the country, the crew of the test jet didn’t notice much ash above Iceland’s southwestern corner. The jet took a few dives above Eyjafjallajökull to collect samples, which will be analyzed today. Ólafsson said it is clear that the ash level will be low. The jet will go on another expedition today and is expected to return to Germany on Sunday. The cost of the expeditions, approximately ISK 30 million (USD 233,000, EUR 175,000), will only be covered by Iceland to a small extent—the largest part will be paid for by the German Ministry of Transport and the British Meteorological Office. Ólafsson explained that significant interests are at stake. “In the past days the forecasts that have come from the British Met Office have been rather bleak and more pessimistic than what is considered reasonable compared to the current situation of the eruption.” “The reason [for the testing] is that it is costly to close the airspace and it is very important to have confirmation of whether there is a real danger,” Ólafsson said, adding that the risk factor of ash in the atmosphere has never been analyzed directly before. “It has been evaluated visually. People have measured the height of the ash plume on radars and visually from airplanes,” Ólafsson explained. “Significant uncertainty lies in these measurements, which explains why the forecasts have been questionable.”
I am a newbie taking my first Java class. I have to create an Applet of a working Calculator. So far, all I know how to do is create buttons and a JTextArea. I don't know how to set up any functionality among the components. I don't even know which is better to use: JTextField or JTextArea. If anyone out there can help, I will be happy to wash your car and maybe even do some free yard work if you could give me some pointers. Thanks. Joined: Jan 29, 2003 Hi, welcome to the ranch! This is a big subject area, but fortunately Sun has pretty good tutorials on almost everything Java. See if the Swing Tutorial covers the right kinds of things. For adding functionality to the widgets, what you're looking for is probably in the EventListener and ActionListener family. You add these objects to your UI components and Swing calls them when interesting things like mouse clicks and keystrokes happen. If this seems like the right direction, you might wander over to the Swing forum with further questions. [ July 05, 2007: Message edited by: Stan James ] A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi Joined: Jun 04, 2007 Thanks for the help. I know how to set up the event handlers (sorry I didn't mention that). I guess I need to figure out which events to use. I have managed to get numbers to show up in the JTextArea, but I don't know how to get any results from the numbers.
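To make the thread concrete, here is a minimal sketch of the wiring Stan describes: two text fields, an "=" button, and an ActionListener that reads the fields and shows a result. All names and the layout are invented for illustration, the arithmetic sits in a plain compute() method so it can be exercised without any GUI, and it is shown in a JFrame for simplicity (the same wiring works inside an applet's init()).

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;

public class MiniCalc {
    // Keep the arithmetic out of the GUI so it is easy to test.
    public static double compute(double a, double b, char op) {
        switch (op) {
            case '+': return a + b;
            case '-': return a - b;
            case '*': return a * b;
            case '/': return a / b;
            default: throw new IllegalArgumentException("unknown op: " + op);
        }
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Calculator");
        final JTextField left = new JTextField(6);
        final JTextField right = new JTextField(6);
        final JTextField result = new JTextField(8);
        result.setEditable(false);
        JButton equalsButton = new JButton("=");

        // Swing calls actionPerformed each time "=" is clicked:
        // read both fields, compute, and display the answer.
        equalsButton.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                double a = Double.parseDouble(left.getText());
                double b = Double.parseDouble(right.getText());
                result.setText(String.valueOf(compute(a, b, '+')));
            }
        });

        JPanel panel = new JPanel();
        panel.add(left);
        panel.add(right);
        panel.add(equalsButton);
        panel.add(result);
        frame.add(panel);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```

The listener is where "functionality among the components" lives; a real calculator would remember which operator button was pressed instead of hard-coding '+'.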
Cortex Off, Consciousness Off This dramatic reduction in brain activity after loss of consciousness is scarcely surprising. The link between consciousness and this organ is tight, as expressed in the adage “No brain: never mind!” Yet neuroscientists are trying to track the footprints of consciousness to its actual lair. Which region in the cortex, the thalamus or elsewhere is essential to be conscious at all? Consider the following two experiments. Twenty-five patients with Parkinson's disease were anesthetized with propofol or sevoflurane while the electrical activity of both the cortex and thalamus was monitored by a group under François Gouin of the Timone University Hospital Center at the University of the Mediterranean in Marseille, France. Their neocortex was monitored by a conventional electroencephalographic (EEG) electrode placed on the scalp on top of the head, whereas thalamic activity was recorded by an electrode implanted deep inside the brain in the subthalamic nucleus. This electrode stimulates the brain to alleviate the shaking that is the hallmark of Parkinson's. Experimenters assessed consciousness by tapping patients on the shoulder and asking them every 20 seconds to open their eyes. When consciousness was lost after anesthesia was initiated—that is, when the patients no longer opened their eyes following the command—the cortical EEG changed dramatically, switching from low amplitude and irregular activity into readings dominated by large and slow brain waves that occur about once every second. Such so-called delta band activity is characteristic of deep sleep. Furthermore, the complexity of the cortical EEG signal decreased significantly when patients stopped responding. None of these changes occurs in the thalamic electrode at the time that consciousness is lost. Indeed, it is only several minutes later that the thalamic voltage signal matches that of the cortex. 
The data—consistent for two quite different anesthetic agents, one injected and the other one inhaled—argue that the drivers for the loss of consciousness are parts (or all) of the neocortex and that the thalamus follows.
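The "delta band activity" described above is simply spectral power at the low end of the EEG, roughly 0.5 to 4 Hz. As a rough illustration of how such dominance can be quantified (synthetic sine waves stand in for real EEG here, and this is not the analysis pipeline the Marseille group used), one can compare band power with total power via a discrete Fourier transform:

```java
// Sketch: fraction of spectral power falling in a frequency band,
// computed with a direct (O(n^2)) discrete Fourier transform.
public class DeltaPower {
    public static double bandPowerFraction(double[] x, double sampleRateHz,
                                           double lowHz, double highHz) {
        int n = x.length;
        double band = 0, total = 0;
        for (int k = 1; k <= n / 2; k++) {          // skip the DC term
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            double power = re * re + im * im;
            double freq = k * sampleRateHz / n;
            total += power;
            if (freq >= lowHz && freq < highHz) band += power;
        }
        return band / total;
    }

    public static void main(String[] args) {
        int n = 400;
        double fs = 100;                            // 100 samples per second
        double[] slow = new double[n], fast = new double[n];
        for (int t = 0; t < n; t++) {
            slow[t] = Math.sin(2 * Math.PI * 1.0 * t / fs);   // ~1 Hz, delta-like
            fast[t] = Math.sin(2 * Math.PI * 20.0 * t / fs);  // 20 Hz, awake-like
        }
        System.out.printf("slow: %.2f fast: %.2f%n",
                bandPowerFraction(slow, fs, 0.5, 4),
                bandPowerFraction(fast, fs, 0.5, 4));
    }
}
```

The ~1 Hz signal puts essentially all of its power in the delta band, while the 20 Hz signal puts essentially none; real recordings fall somewhere in between.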
Taking Play Seriously
By ROBIN MARANTZ HENIG
Published: February 17, 2008

On a drizzly Tuesday night in late January, 200 people came out to hear a psychiatrist talk rhapsodically about play -- not just the intense, joyous play of children, but play for all people, at all ages, at all times. (All species too; the lecture featured touching photos of a polar bear and a husky engaging playfully at a snowy outpost in northern Canada.) Stuart Brown, president of the National Institute for Play, was speaking at the New York Public Library's main branch on 42nd Street. He created the institute in 1996, after more than 20 years of psychiatric practice and research persuaded him of the dangerous long-term consequences of play deprivation. In a sold-out talk at the library, he and Krista Tippett, host of the public-radio program "Speaking of Faith," discussed the biological and spiritual underpinnings of play. Brown called play part of the "developmental sequencing of becoming a human primate. If you look at what produces learning and memory and well-being, play is as fundamental as any other aspect of life, including sleep and dreams." The message seemed to resonate with audience members, who asked anxious questions about what seemed to be the loss of play in their children's lives. Their concern came, no doubt, from the recent deluge of eulogies to play. Educators fret that school officials are hacking away at recess to make room for an increasingly crammed curriculum. Psychologists complain that overscheduled kids have no time left for the real business of childhood: idle, creative, unstructured free play. Public health officials link insufficient playtime to a rise in childhood obesity. Parents bemoan the fact that kids don't play the way they themselves did -- or think they did.
And everyone seems to worry that without the chance to play stickball or hopscotch out on the street, to play with dolls on the kitchen floor or climb trees in the woods, today's children are missing out on something essential. The success of "The Dangerous Book for Boys" -- which has been on the best-seller list for the last nine months -- and its step-by-step instructions for activities like folding paper airplanes is testament to the generalized longing for play's good old days. So were the questions after Stuart Brown's library talk; one woman asked how her children will learn trust, empathy and social skills when their most frequent playing is done online. Brown told her that while video games do have some play value, a true sense of "interpersonal nuance" can be achieved only by a child who is engaging all five senses by playing in the three-dimensional world. This is part of a larger conversation Americans are having about play. Parents bobble between a nostalgia-infused yearning for their children to play and fear that time spent playing is time lost to more practical pursuits. Alarming headlines about U.S. students falling behind other countries in science and math, combined with the ever-more-intense competition to get kids into college, make parents rush to sign up their children for piano lessons and test-prep courses instead of just leaving them to improvise on their own; playtime versus résumé-building. Discussions about play force us to reckon with our underlying ideas about childhood, sex differences, creativity and success. Do boys play differently than girls? Are children being damaged by staring at computer screens and video games? Are they missing something when fantasy play is populated with characters from Hollywood's imagination and not their own? Most of these issues are too vast to be addressed by a single field of study (let alone a magazine article). But the growing science of play does have much to add to the conversation.
Armed with research grounded in evolutionary biology and experimental neuroscience, some scientists have shown themselves eager -- at times perhaps a little too eager -- to promote a scientific argument for play. They have spent the past few decades learning how and why play evolved in animals, generating insights that can inform our understanding of its evolution in humans too. They are studying, from an evolutionary perspective, to what extent play is a luxury that can be dispensed with when there are too many other competing claims on the growing brain, and to what extent it is central to how that brain grows in the first place. Scientists who study play, in animals and humans alike, are developing a consensus view that play is something more than a way for restless kids to work off steam; more than a way for chubby kids to burn off calories; more than a frivolous luxury. Play, in their view, is a central part of neurological growth and development -- one important way that children build complex, skilled, responsive, socially adept and cognitively flexible brains. Their work still leaves some questions unanswered, including questions about play's darker, more ambiguous side: is there really an evolutionary or developmental need for dangerous games, say, or for the meanness and hurt feelings that seem to attend so much child's play? Answering these and other questions could help us understand what might be lost if children play less.
Heel bursitis is another type of heel pain. The sufferer of this kind of heel pain experiences pain at the back of the heel when moving the ankle joint. In heel bursitis there is swelling on the sides of the Achilles tendon, and the sufferer may experience pain in the heel when the feet hit the ground. Heel bruises, also referred to as heel bumps, are usually caused by improper shoes and the constant rubbing of the shoes against the heel.

What is bursitis?
Bursitis is the inflammation of a bursa. Normally, the bursa provides a slippery surface that has almost no friction. A problem arises when a bursa becomes inflamed. The bursa loses its gliding capabilities, and becomes more and more irritated when it is moved. When bursitis occurs, the normally slippery bursa becomes swollen and inflamed. The added bulk of the swollen bursa causes more friction within an already confined space. Also, the smooth gliding bursa becomes gritty and rough. Movement of an inflamed bursa is painful and irritating. "Itis" usually refers to inflammation of a part of the body; bursitis therefore refers to the constant irritation of the natural cushion that supports the heel of the foot (the bursa). Bursitis is often associated with plantar fasciitis, which affects the arch and heel of the foot.

What causes bursitis?
- Bursitis and plantar fasciitis can occur when a person increases their level of physical activity or when the heel's fat pad becomes thinner, providing less protection to the foot.
- Ill-fitting shoes.
- Biomechanical problems (e.g. mal-alignment of the foot, including over-pronation).
- Rheumatoid arthritis.
Bursitis usually results from a repetitive movement or from prolonged and excessive pressure.
Patients who rest on their elbows for long periods or who bend their elbows frequently and repetitively (for example, a custodian using a vacuum for hours at a time) can develop elbow bursitis, also called olecranon bursitis. Similarly, in other parts of the body, repetitive use or frequent pressure can irritate a bursa and cause inflammation. Another cause of bursitis is a traumatic injury. Following trauma, such as a car accident or fall, a patient may develop bursitis. Usually a contusion causes swelling within the bursa. The bursa, which had functioned normally up until that point, now begins to develop inflammation, and bursitis results. Once the bursa is inflamed, normal movements and activities can become painful. Systemic inflammatory conditions, such as rheumatoid arthritis, may also lead to bursitis; these conditions can make patients susceptible to developing it.

Common treatments include:
- Cold presses or ice packs.
- Anti-inflammatory tablets.
- Cushioning products.
- Massaging the foot / muscle stimulation.
- Stretching exercises.
- Insoles or orthotics.
Communities in Kenya have adopted one another's food preferences and recipes. With the exception of the Coast and Indian communities, for whom food preparation is an elaborate process, nearly all other communities garnish their food by frying it. They also boil, bake in hot ashes and roast. Common foods include ugali: a sticky mixture of flour and water, accompanied by vegetables, meat, milk or milk mixed with blood, especially for pastoralist communities. Traditionally, it is made from millet or cassava. But maize flour has increasingly taken pride of place. It is used to prepare sauces and stews. Communities have different ways of preparing meat, but roast meat is widely eaten as a delicacy, especially in urban areas. Animals slaughtered for meat depend on the location, with beef and mutton the most common. Camel and game are the preserve of communities with access to the animals. In the last quarter of the 20th century, maize replaced sorghum as the most important cereal in Kenya. It is roasted and eaten as a snack and sold on the streets and in markets. Green boiled maize, often garnished with salt, is also common. Among the Somali, fresh maize (galeey) is fried in oil and eaten as a snack. Dry maize is also fried to make popcorn, popular with children. A mixture of green or dry maize and beans, cowpeas, pigeon peas or even groundnuts is also popular. It is eaten as a main course or snack. Sometimes, it is pounded with Irish or sweet potatoes, bananas and green vegetables. A wide range of traditional and exotic fruits are consumed in Kenya, usually as snacks. Mango, citrus fruits, banana, jackfruit, papaya, melons, guava, passion fruit, custard apple and avocado pear are common. Many traditional fruits such as baobab, wild custard apple, carissa, dialium, flacourtia (Indian plum), marula, vangueria, tamarind, vitex and ‘jujube’ are picked in the wild. They are used as accompaniments for starchy food such as ugali.
Common traditional vegetables include baobab, cowpeas, amaranth, vine spinach, Ethiopian kale, pumpkin leaves, spider plant and hibiscus. Kale (sukuma wiki) is now the most common vegetable in Kenya.
Ismaili Community: History The Shia Imami Ismaili Muslims, generally known as the Ismailis, belong to the Shia branch of Islam. The Shia form one of the two major branches of Islam, the Sunni being the other. The Ismailis live in over 25 different countries, mainly in Central and South Asia, Africa and the Middle East, as well as in Europe, North America and Australia. As Muslims, the Ismailis affirm the fundamental Islamic testimony of truth, the Shahada, that there is no God but Allah and that Muhammad (peace be upon him and his family) is His Messenger. They believe that Muhammad was the last and final Prophet of Allah, and that the Holy Quran, Allah's final message to mankind, was revealed through him. Muslims hold this revelation to be the culmination of the message that had been revealed through other Prophets of the Abrahamic tradition before Muhammad, including Abraham, Moses and Jesus, all of whom Muslims revere as Prophets of Allah. In common with other Shia Muslims, the Ismailis affirm that after the Prophet's death, Hazrat Ali, the Prophet's cousin and son-in-law, became the first Imam - the spiritual leader - of the Muslim community and that this spiritual leadership (known as Imamat) continues thereafter by hereditary succession through Ali and his wife Fatima, the Prophet's daughter. Succession to Imamat, according to Shia doctrine and tradition, is by way of Nass (Designation), it being the absolute prerogative of the Imam of the Time to appoint his successor from amongst any of his male descendants. His Highness Prince Karim Aga Khan is the 49th hereditary Imam of the Shia Imami Ismaili Muslims. He was born on 13 December 1936 in Geneva, son of Prince Aly Khan and Princess Tajuddawlah Aly Khan and spent his early childhood in Nairobi, Kenya. He attended Le Rosey School in Switzerland for nine years and graduated from Harvard in 1959 with a BA (Honours) in Islamic History. 
He succeeded his grandfather Sir Sultan Mahomed Shah Aga Khan on 11 July 1957 at the age of 20. Spiritual allegiance to the Imam and adherence to the Shia Imami Ismaili tariqah (persuasion) of Islam according to the guidance of the Imam of the Time, have engendered in the Ismaili Community an ethos of self-reliance, unity, and a common identity. In a number of the countries where they live, the Ismailis have evolved a well-defined institutional framework through which they have, under the leadership and guidance of the Imam, established schools, hospitals, health centres, housing societies and a variety of social and economic development institutions for the common good of all citizens regardless of their race or religion. During the course of history, the Ismailis have, under the guidance of their Imams, made significant contributions to Islamic civilisations, the cultural, intellectual and religious life of Muslims. The University of al-Azhar and the Academy of Science, Dar al-Ilm, in Egypt and indeed the city of Cairo itself, are testimony to this contribution. Among the renowned philosophers, jurists, physicians, mathematicians, astronomers and scientists of the past who flourished under the patronage of Ismaili Imams are Qadi al-Numan, al-Kirmani, Ibn al-Haytham (al-Hazen), Nasir e-Khusraw and Nasir al-Din Tusi.
Is it a scientific fact that acid stabilizes meringue, or is this a fallacy? If so, does anyone know why, and are there any other substances that do this well?

Acids allow more air to be beaten into a meringue. In order to make meringue, the proteins in egg white must be denatured. In their natural state, the proteins are curled up into tightly packed balls. When the egg is beaten, they uncoil into long strands. These strands then begin to coagulate, or join together, with the help of the sugar you add. The air you whisk in gets trapped between these joining strands, giving the meringue its characteristic light texture. Acid delays coagulation, which means that there is more time for air to get trapped in amongst the proteins, resulting in a lighter meringue. The acids usually added to meringue are white wine vinegar, lemon juice, or cream of tartar. Fresh eggs are more acidic than old ones, so these help too. Some cooks use copper bowls to make meringue, because copper ions from the bowl bind to a particular protein (conalbumin) and strengthen it.

I have made two separate meringue mixtures side by side: one with vinegar and one without. In my experience it makes no difference to the final outcome provided that you add the sugar really slowly (a tablespoon at a time) and not too early. If this is done correctly then there is no need to add an acid.
Objective: To determine the prevalence of infant exposure to environmental tobacco smoke (ETS) among infants attending child health clinics in regional NSW; the association between such exposure and household smoking behaviours; and the factors associated with smoking restrictions in households with infants. Methods: Parents completed a computer-based questionnaire and infant urine samples were collected. Information was obtained regarding the smoking behaviours of household members and samples were analysed for cotinine. Results: Twenty-seven per cent of infants had detectable levels of cotinine. Infant ETS exposure was significantly associated with the smoking status of household members, absence of complete smoking bans in smoking households and having more than one smoker in the home. Smoking households were significantly less likely to have a complete smoking ban in place. Conclusions: This study suggests that a significant proportion of the population group most vulnerable to ETS were exposed. Implications: Future efforts to reduce children's exposure to ETS need to target cessation by smoking parents, and smoking bans in households of infants where parents are smokers, if desired reductions in childhood ETS-related illness are to be realised. Australian and New Zealand Journal of Public Health Vol. 34, Issue 3, pp. 269-273
Sedimentary rock covers about 70% of Earth's land surface. Erosion is constantly changing the face of the Earth. Weathering agents (wind, water, and ice) break rock into smaller pieces that flow down waterways until they settle to the bottom permanently. These sediments (pebbles, sand, clay, and gravel) pile up and form new layers. After hundreds or thousands of years, these layers become pressed together to form sedimentary rock. Sedimentary rock can form in two different ways. When layer after layer of sediment builds up, the weight puts pressure on the lower layers, which then compact into a solid piece of rock. The other way is called cementing: certain minerals in the water interact to form a bond between rock particles, a process similar to making modern cement. Any animal carcasses or organisms that are caught in the layers of sediment will eventually turn into fossils, which is why sedimentary rock is the source of quite a few of our dinosaur findings. There are four common types of sedimentary rock: sandstone, limestone, shale, and conglomerate. Each is formed in a different way from different materials. Sandstone is formed when grains of sand are pressed together; it may be the most common type of rock on the planet. Limestone is formed from tiny pieces of shell that have been cemented together over the years. Conglomerate consists of sand and pebbles that have been cemented together. Shale forms under still waters like those found in bogs or swamps, where the mud and clay at the bottom is pressed together.
Sedimentary rock has the following general characteristics:
- it is classified by texture and composition
- it often contains fossils
- it occasionally reacts with acid
- it has layers that can be flat or curved
- it is usually composed of material that is cemented or pressed together
- it shows a great variety of color
- particle size varies
- there are pores between particles
- it can show cross bedding, worm holes, mud cracks, and raindrop impressions

This is only meant to be a brief introduction to sedimentary rock. There are many more in-depth articles, and entire books have been written on the subject. Here is a link to a very interesting introduction to rocks. Here on Universe Today there is a great article on how sedimentary rock shows very old signs of life. Astronomy Cast has a good episode on the Earth's formation.
Silica fume is a highly pozzolanic material that is used to enhance mechanical and durability properties of concrete. It may be added directly to concrete as an individual ingredient or in a blend of portland cement and silica fume. Interest in the use of silica fume resulted from the strict enforcement of air-pollution measures designed to stop release of the material into the atmosphere. Initial use of silica fume in concrete was mostly for cement replacement, along with water-reducing admixtures. Eventually, the availability of high-range water-reducing admixtures (superplasticizers) allowed new possibilities for the use of silica fume to produce high levels of performance.
* Reduces concrete permeability
* Increases concrete strength
* Improves resistance to corrosion
Daily Planet's Ingram discusses prion disease Discovery Channel's Daily Planet co-host Jay Ingram visits Grande Prairie today to offer behind-the-scenes details of a mysterious and contagious family of diseases. The lecture takes place at the Grande Prairie Regional College at 7 p.m., where Ingram discusses fatal prion diseases. "Here at this very microscopic level, strange things are happening and just now we are beginning to figure out what they are," he said. The most well-known form of a prion disease is bovine spongiform encephalopathy (BSE), widely known as mad cow disease. Prion diseases spread when malformed proteins attach themselves to healthy tissue. Unlike other infectious ailments, they are incurable. "It is a protein that has gone wrong," said Stefanie Czub, a scientist with the University of Calgary and the Canadian Food Inspection Agency who will be present at the lecture. "For other infectious diseases we have a cure or the body heals itself quite efficiently. Prion diseases, once infected, are invariably fatal." A prion disease of current concern to Western Canadians is chronic wasting disease (CWD), affecting deer and elk in southern Saskatchewan and Alberta. Unlike mad cow disease, which infects the animal's brain, spinal cord and central nervous system, chronic wasting disease spreads to several parts of the animal, and is even present in urine and saliva. Infected animals become very thin, and can carry the disease for two years before these signs become evident. Prion diseases can only be formally diagnosed by sampling infected tissue. "With all these diseases, it takes quite a long time for the symptoms to show," Ingram said. "You could hunt and kill a deer and eat it, and it might have the chronic wasting disease prions in it." "The ultimate diagnosis can only be done on a piece of tissue, not in blood," Czub said. Twenty deer have been identified in Alberta with CWD since monitoring of the disease began in 2005.
While the cause of mad cow disease is generally believed to be the use of recycled beef and bone meal material in livestock feed, which became a common practice in the 1980s, CWD's cause remains unknown. "Nobody really knows," Ingram said. "It could be that an infected deer goes to a salt lick, licks it, and the prions are in the saliva." Ingram said that the disease is currently not an issue for the Peace Country, but infected deer and elk are bringing it into southern Alberta as they travel along the valleys of the South Saskatchewan and Red Deer Rivers. "If chronic wasting disease spreads far enough north, especially in Saskatchewan, that it intersects with the caribou migration routes, and if caribou are susceptible, then you've got a huge problem on your hands," Ingram said. The medical community is taking a close look at CWD due to the similarities prion diseases have with the degenerative Alzheimer's, Parkinson's and Lou Gehrig's diseases. "The way that they spread is somewhat similar," Ingram said. "In Alzheimer's and Parkinson's and Lou Gehrig's disease, you get an accumulation in the brain of junk basically; they're called plaques, these sort of dark deposits if you look at brain tissue after autopsy." "They are all part of the so-called protein-misfolding diseases," Czub said. "One might be a very good model for the other, so we need to keep that in mind. Especially with this enormous increase in Alzheimer's in the future to be expected. One in three over the age of 65 is going to develop Alzheimer's disease in the next 10 years." Hosted by the Alberta Prion Research Institute, Ingram's lecture is open to the public free of charge in the Collins Recital Hall, room L106 today at 7 p.m.
CentOS uses both font systems, and they use different folders: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-x-fonts.html Red Hat Enterprise Linux uses two subsystems to manage and display fonts under X: Fontconfig and xfs. The newer Fontconfig font subsystem simplifies font management and provides advanced display features, such as anti-aliasing. This system is used automatically for applications programmed using the Qt 3 or GTK+ 2 graphical toolkit. For compatibility, Red Hat Enterprise Linux includes the original font subsystem, called the core X font subsystem. This system, which is over 15 years old, is based around the X Font Server (xfs). On CentOS 5/Red Hat it seems that xfs gets its fonts from the X config file /etc/X11/fs/config, which points to /usr/share/X11/fonts, and fontconfig gets its config from /etc/fonts/fonts.conf, which points to /usr/share/fonts. By default neither font system sees the fonts from the other system. It seems that Red Hat wants to move to fontconfig but still has some things that use xfs. Why they didn't just put all the fonts in one folder and point everything there, so that both font systems had all the same fonts, is a mystery.
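One way to bridge the gap yourself (a hypothetical sketch, not an official Red Hat recommendation) is a small fontconfig drop-in that adds the xfs font tree to fontconfig's search path. The paths assume the stock CentOS 5 layout described above; the file is written to the current directory here for illustration, but on a real system it would go in /etc/fonts/local.conf, followed by `fc-cache -fv` as root.

```shell
# Generate a fontconfig drop-in that adds the core X (xfs) font tree
# to fontconfig's search path. Paths assume a stock CentOS 5 layout.
cat > local.conf <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- let fontconfig index the directories that xfs serves -->
  <dir>/usr/share/X11/fonts</dir>
</fontconfig>
EOF

# Sanity check: the drop-in contains exactly one extra <dir> entry
grep -c '<dir>' local.conf
```

With the drop-in installed system-wide and the caches rebuilt, Qt/GTK+ applications should see the core X fonts too. Going the other way, adding /usr/share/fonts to the `catalogue` line in /etc/X11/fs/config and restarting xfs, would cover the legacy clients.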
Lier Psychiatric Hospital (Lier Psykiatriske Sykehus, or Lier Asyl, in Norwegian) in Norway has a long history as an institution. The sickest people in society were stowed away here, and they went from being people to being test subjects in the pharmaceutical industry's search for new and better drugs. The massive buildings house the memory of a grim chapter in Norwegian psychiatric history that the authorities would rather forget. UPDATE: When you have read this post you might be interested in reading my report one year later! The buildings welcome you. Many of the patients never came out again alive, and many died as a result of the reprehensible treatment. It was said that the treatment was carried out voluntarily, but in reality the patients had no self-determination and no opportunity to make their own decisions. Must be creepy at night. There is little available information about the former activities at Lier Hospital. On this page (in Norwegian) you can read more about the experiments that were carried out at this Norwegian mental hospital in the postwar period from 1945 to 1975. It is about the use of LSD, electroshock, brain research funded by the U.S. Department of Defense, and drug research sponsored by major pharmaceutical companies. It is perhaps not surprising that they try to forget this place and the events that took place here. Chair in a room. One of many rooms. Things left behind, including a bath tub. Lobotomies were also performed here: a procedure that involves driving a needle-like instrument through the eye socket and into the patient's head to cut the connection between the anterior brain lobes and the rest of the brain. Lobotomy was primarily used to treat schizophrenia, but also as a soothing treatment for other disorders. The patients who survived were often quiet, but generally this surgery made the patients worse. Today lobotomy is considered barbaric and it is not practiced in Norway.
From a window. Lier Psychiatric Hospital, or Lier Asylum as it was called originally, was built in 1926 and had room for nearly 700 patients at its peak. In 1986, many of the buildings were closed and abandoned, and they still stand empty to this day. Some of the buildings are still in operation today for psychiatric patients. Exterior of the A building. Disinfection bath tub. These photos are from my visit there as a curious photographer. The place was clearly ravaged by the youths, the homeless, and the drug addicts who have infiltrated the buildings during its 23 years of abandonment. On net forums people have written at length about ghost stories and the creepy atmosphere. I was curious how I would experience the place myself, but I found it pretty quiet and peaceful. I went there during the day, so I understand that at night one would have to look far for a more sinister place. The floor was covered with broken glass and other debris. View through a window. A pile of electrical boxes or something. These days, money has been provided to demolish the buildings; 15 million NOK is the price. Neighbors cheer, but history buffs, photographers, and ghost-hunting kids think it's sad. This is the most visited, and just about the only and largest, urban exploration site in Norway. I have read and recommend Ingvar Ambjørnsen's first novel, "23-Salen", which is about the year he worked as a nurse at Lier Psychiatric Hospital. The book provides insight into how life for patients and nurses turned out in one of the worst wards. The famous motorized wheelchair. Doorways and peeling paint. Top floor, view to the roof and empty windows. Disused stairs outside.