Nestled in a quiet residential neighborhood in Long Beach, Calif., are 4.7 acres of Spanish, Mexican and American history, where families helped transform southern California from its ranching beginnings to a modern, urban society. Rancho Los Cerritos includes an 1844 adobe building, formal gardens and landscaped grounds. It is a national, state and local landmark, owned and operated by the city as a public museum and historic site since 1955. As the country’s economic crisis works its way to the regional and local municipal forefront, the Rancho has found itself on the list for possible budget cuts. This raises the question, “Can one put a price on historic culture?” While the answer seems obvious, the reality from a fiscal viewpoint is that this rare community treasure does not sustain itself financially.

From The Start

The original Rancho site was 27,000 acres of pastures for cattle and sheep that, starting in the late 19th century, gave way to development of the cities of Long Beach, Lakewood, Signal Hill, Bellflower and Paramount. The site boasts a two-story adobe building with 22 ground-floor rooms and a serene central courtyard. It possesses outstanding potential for an interpretation of historical themes, beginning with the period of Native American occupation and continuing through the romanticized Spanish Colonial Revival period of the 1920s and 1930s. The rich artisan materials of the area are present throughout the building. The 2- to 3-foot-thick walls tell the story of the Native Americans who made adobe blocks using mud from the site. The beams were hand-hewn redwood from the Monterey area, and a majority of the rooms in the east wing contained redwood-planked floors. Then-owner John Temple began construction of the ranch house in 1844. The building was one of the largest and most impressive domestic adobes of its time in colonial Southern California, and its extensive gardens were unique for the period.
Temple built the ranch house as a country home and headquarters for his cattle-ranching operation. Its second owners, Flint, Bixby & Co., stocked the land with sheep. During the 19th century, horses, carriages and buggies were saddled and prepared in the courtyard. There were troughs and hitching posts, and blacksmithing and cooking facilities. The “milk room” opened onto a second courtyard north of the house, and contained rows of shining pans filled with cream both for churning and for the table: “clotted cream, thick enough to spread with a knife upon hot baking-powder biscuits, or a steaming baked potato.” Other original buildings on the site were a barn, corn crib, adobe brick oven, wool barn, a dip for the sheep after they had been sheared, a sheep barn and a granary. The large, 2-acre garden planted by Temple appeared to be a mixture of New England and subtropical influences.

In 1930, the well-known landscape architect Ralph Cornell created a garden plan for the new owners of Rancho Los Cerritos. He incorporated native plants, preserved many of the trees and plants surviving from previous owners, and reintroduced some plant species from the 19th-century garden. Surviving landscape elements included an osage orange, three pomegranate, two citrus, two olive and three Italian cypress trees. Several black locust trees also survived, along with a large Moreton Bay fig. Cornell’s overall plan called for a sweeping driveway for the southern entrance and western boundary, bordered by curving layers of trees and shrubs. Outside the south wing, he planted a grid of both familiar and exotic fruit trees. A central lawn surrounded the Moreton Bay fig, bordered by trees and shrubs along a walkway. Through his use of rich materials and plantings, Cornell created a pleasant garden atmosphere to surround the remodeled living quarters. Even today, over 80 years after he created the gardens, many of the historic plantings are still alive and well.
A Red-Carpet Reputation

In March 1934, a Historic American Building Survey was made of Rancho Los Cerritos. Three years later, it was designated by the Department of the Interior as possessing “Exceptional historic or architectural interest,” being “most worthy of careful preservation for the benefit of future generations.” In 1943, the city began working to acquire the Rancho as a historic site. The city opened a museum, and completed the purchase in 1956 for use as a historical monument, park and library. Rancho Los Cerritos was designated as a National Historic Landmark in 1970. Presently, the city administers the Rancho through the Department of Parks, Recreation and Marine. The city holds title to the site and provides the entire funding for its operations. The Friends of Rancho Los Cerritos is a volunteer support group, which assists the site with its programming, and supports the educational and preservation mission. The Rancho Los Cerritos Foundation, a non-profit arm of the organization, was established in 1994, and is tasked with fundraising and development for restoration, capital projects and educational enhancements.

As the city faces a looming $52-million budget deficit next year, policymakers will be faced with difficult decisions. What public programs and services can be eliminated with the least impact? With public health and safety as a priority, fire, police and health services should take precedence. Libraries and parks are further down the list. How important are the cultural and historic resources to overall city operations and function? Many say these resources are “optional” and not core services. In 2007 and 2008, Rancho Los Cerritos raised $37,700 in revenue from programs, donations and gift-shop sales. The operational cost has a $467,000 price tag, subsidized by the city. Can the city afford to keep the site open during a financial crisis? These are extraordinary times.
Numbers on a spreadsheet tell a very different story than the one that is tangible at the Rancho. Translation is lost between profit-and-loss statements and the general atmosphere of the site. Gazing at the magnificence of the 130-year-old Moreton Bay fig that sits on the back lawn of the Rancho, one wonders about the Native Americans, the families and the visitors who have enjoyed its grandeur and elegance over the centuries. This place represents a lifestyle reminiscent of the earlier, tranquil days of ranch living. This is where the owners entertained guests and played bridge after Sunday dinner, lit the large, two-story Christmas tree with real candles, and held the annual Easter-egg hunt. These were real experiences, of real people. It is the story of many generations from all walks of life: entrepreneurial businessmen, close-knit families, people who helped shape the community and region. Theirs are stories about land and economic development, cultural diversity and the growth of the city. And if there remains no choice but to close down this historic landmark, what is in store for this tree and these buildings that have survived and been a part of so much? Despite the engaging history of Rancho Los Cerritos, these modern times force us to deal with funding realities. Rancho Los Cerritos will soon turn a new page in its history book. One can only hope that the vivid memories can be kept alive through continued education about this neighborhood jewel, and that a bookmark will steadfastly remain in its place until we are again able to forge ahead in better times.

Sandra Gonzalez, FASLA, is the manager of the Planning & Development Bureau for the city of Long Beach’s Parks, Recreation and Marine Department. She can be reached via e-mail at email@example.com
A Box of Universe Watch the cosmos evolve in a cube one billion light-years wide Isaac Newton’s universe was a cozy, tidy place. Gathered around the sun were six planets, a handful of moons and the occasional comet, all moving against a backdrop of stationary stars. Newton provided us with the mathematical tools needed to compute the motions of these bodies. Given initial positions and velocities, we can calculate the forces acting on each object, using Newton’s law of universal gravitation. From the forces we can determine accelerations, and then update the positions and velocities for the next round of calculations. This scheme of computation is known as the n-body method. Perhaps Newton himself could have put it to work if he had had suitable computing machinery. Today we have the computers. On the other hand, our universe is far larger and more intricate than Newton’s. Now the solar system is merely a speck in a spiral galaxy of several hundred billion stars. Our galaxy drifts among billions of others, which form clusters and superclusters and a whole hierarchy of structures extending as far as the eye (and the telescope) can see. Those objects are getting farther away all the time because the universe is expanding, and moreover the expansion is accelerating. Strangest of all, the luminous matter of the galaxies—everything we see shining in the night sky—makes up less than one-half of 1 percent of what’s out there. Most of the universe is unseen and unidentified stuff known only as “dark matter” and “dark energy.” Given this profound change in the nature and the scale of the known universe, I find it remarkable that computer simulations of cosmic evolution can still rely on n-body algorithms rooted in the principles of Newtonian mechanics. The same techniques that predict planetary motions here at home in the solar system also describe the gravitational process that assembles thousands of galaxies into filaments a hundred million light-years long. 
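The update cycle described above (compute forces from Newton's law of universal gravitation, derive accelerations, then advance positions and velocities) can be sketched directly. The following is a minimal illustration only, not the code any real cosmological simulation uses: a direct-summation O(n²) step with semi-implicit Euler integration, where the class name, masses, separations and time step are all invented for demonstration.

```java
// Minimal n-body sketch under Newtonian gravity (illustrative values only).
public class NBody {
    static final double G = 6.674e-11; // gravitational constant, m^3 kg^-1 s^-2

    // Advance all bodies by one time step dt (semi-implicit Euler):
    // accumulate pairwise accelerations, update velocities, then positions.
    static void step(double[] m, double[][] pos, double[][] vel, double dt) {
        int n = m.length;
        double[][] acc = new double[n][3];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double dx = pos[j][0] - pos[i][0];
                double dy = pos[j][1] - pos[i][1];
                double dz = pos[j][2] - pos[i][2];
                double r2 = dx * dx + dy * dy + dz * dz;
                double r = Math.sqrt(r2);
                double a = G * m[j] / r2; // magnitude of acceleration on i due to j
                acc[i][0] += a * dx / r;
                acc[i][1] += a * dy / r;
                acc[i][2] += a * dz / r;
            }
        }
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < 3; k++) {
                vel[i][k] += acc[i][k] * dt; // velocity first...
                pos[i][k] += vel[i][k] * dt; // ...then position, using the new velocity
            }
        }
    }

    public static void main(String[] args) {
        // Two equal masses starting at rest, 1000 m apart: they should fall
        // toward each other symmetrically, keeping total momentum at zero.
        double[] m = {1e12, 1e12};
        double[][] pos = {{0, 0, 0}, {1000, 0, 0}};
        double[][] vel = {{0, 0, 0}, {0, 0, 0}};
        for (int t = 0; t < 100; t++) step(m, pos, vel, 1.0);
        System.out.printf("x0=%.4f x1=%.4f%n", pos[0][0], pos[1][0]);
    }
}
```

Real codes like the ones discussed here replace the O(n²) double loop with tree or particle-mesh methods, but the physics of each step is the same.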
A major new series of cosmological simulations is now beginning to release its findings. The project, known as Bolshoi, is led by Anatoly Klypin of New Mexico State University and Joel Primack of the University of California, Santa Cruz. “Bolshoi” is Russian for “big” or “grand,” and the name is apt: This is a large-scale computational project, consuming six million CPU hours and producing a hundred terabytes of data. And yet, when you ponder the vast sweep of space and time being modeled, it seems a marvel that so much universe can be squeezed into such a small box.
A diplomatic crisis is engulfing part of Borneo, after Filipino rebels seized control of a remote section of Malaysia’s Sabah state as part of an unresolved territorial dispute that stretches back centuries. Malaysian security forces have surrounded 100 to 200 members of the Royal Army of Sulu, who have holed up in the village of Lahad Datu for the past two weeks in order to press their historic claim to the land. The Philippine and Malaysian governments are now engaged in tense negotiations in order to resolve the dispute without the use of force. The rebels, who hail from the autonomous island province of Sulu in the southwestern Philippines, had been given until midnight on Tuesday to voluntarily leave the area, but Manila has been desperately trying to negotiate an extension to this deadline to avoid bloodshed, and a tense standoff remains in place. The leader of the rebel unit is the brother of Jamalul Kiram III, one of the two main claimants to the title of Sultan of Sulu. Back in the 17th century, before the Philippines existed in its present form, the two principal sultanates in the region were Sulu and Brunei. In 1658, for reasons that remain unclear, the Sultan of Brunei gave Sabah to the Sultanate of Sulu, which today is considered part of the Philippines. However, the picture is further complicated by an 1878 deal between the Sultanate of Sulu and the British North Borneo Company, in which Sabah was leased to the Europeans on a rolling contract. To this day, the Malaysian government pays a token sum, equivalent to around $1,500, to the Philippines every year in recognition of this continuing arrangement. The Royal Army of Sulu interprets this deal as a lease that can be canceled, while Malaysia believes that it represents the permanent transfer of the territory. It does not appear that the Malaysian authorities are willing to give up the land, which boasts valuable petroleum reserves and palm-oil plantations and also serves as an agricultural and manufacturing hub.
Regional commentators have accused the Sulu rebels of trying to exploit past claims as a gateway toward ensuring future prosperity. “The governments of Malaysia and the Philippines are trying to manage this incident carefully,” Jonah Blank, senior political scientist specializing in Southeast Asia for RAND Corp., a global policy think-tank, tells TIME. “We’ve seen many Muslim rebel groups arise or take refuge in the southern part of the Philippines, and Malaysia has brokered a fragile cease-fire: neither Kuala Lumpur nor Manila is eager to see that fall apart.” Philippine President Benigno Aquino III on Tuesday appealed to Kiram to instruct his brother to end the occupation. “If you are truly the leader of your people, you should be one with us in ordering your followers to return home peacefully,” he said during a statement aired on national TV. On Sunday, Manila sent the Philippine navy ship BRP Tagbanua to Borneo carrying Filipino-Muslim leaders, social workers and medical personnel for a “humanitarian mission” to bring their compatriots home. However, Royal Army of Sulu sources indicate that the rebels are not willing to entertain such a retreat. Some observers believe that the timing of the occupation is designed to disrupt the Malaysian national elections that are due before the end of June, and the issue has now become a political hot potato domestically. The Center for Media Freedom and Responsibility, a Philippine NGO, on Tuesday released a joint statement condemning the arbitrary detention of three al-Jazeera journalists who were in Sabah to report on the standoff. The group was eventually released after being held and interrogated for at least six hours. Liew Chin Tong, a Democratic Action Party MP and shadow Defense Minister for the Pakatan Rakyat opposition coalition of Malaysia, tells TIME that the country is now suffering the consequences of decades of poorly enforced border controls.
“Sabah is a key state which was previously seen as a safe zone for the government but now keenly contested by the opposition,” he says.
HISTORICAL HIGHLIGHTS OF BAYLOR UNIVERSITY

Baylor University was founded under the leadership of Judge R.E.B. Baylor, Reverend James Huckins, and Reverend William Milton Tryon, three farsighted pioneer missionaries working through the Texas Baptist Education Society. They, along with other associations, sent representatives in 1848 to create the Baptist State Association, which later became the Baptist State Convention.

1845 - Baylor chartered on February 1 by the Republic of Texas.
1849 - Instruction in law begun.
1857 - School of Law organized.
1883 - School of Law closed.
1920 - School of Law reorganized.
1886 - Baylor merged with Waco University and moved to Waco.
1903 - College of Medicine organized in Dallas by assuming responsibility for operating the University of Dallas Medical Department.
1943 - Moved to Houston.
1969 - Given independent status.
1903 - College of Pharmacy organized in Dallas.
1930 - College of Pharmacy terminated.
1905 - Theological Seminary organized in Waco.
1907 - Separated from Baylor University.
1910 - Moved to Fort Worth.
1918 - College of Dentistry organized in Dallas by taking over the State Dental College, which had been founded in 1905.
1971 - The College separately incorporated, although graduate programs continued to be offered through Baylor University.
1996 - The College became a part of the Texas A&M System on September 1.
1919 - Baylor Hospital organized in Dallas – now Baylor University Medical Center.
1919 - College of Arts and Sciences organized.
1919 - College of Fine Arts organized, which consisted of offerings in music and in expression.
1921 - Terminated in favor of the present School of Music.
1919 - School of Education organized.
1920 - School of Nursing organized as a diploma-granting program.
1921 - School of Music organized.
1923 - School of Business organized.
1959 - Renamed Hankamer School of Business in honor of Mr. and Mrs. Earl Hankamer of Houston.
1947 - Graduate School organized. Graduate study and degrees had been offered since 1894.
1950 - The School of Nursing reorganized as an academic unit of Baylor University offering a Bachelor of Science in Nursing degree.
2000 - Renamed Louise Herrington School of Nursing in honor of Louise Herrington Ornelas.
1951 - Graduate program in hospital administration established in conjunction with the Army Medical Field Service School, Fort Sam Houston.
1971 - Graduate program in physical therapy added.
1971 - Program in physician’s assistant added in collaboration with the Army Medical Field Service School, Fort Sam Houston. Terminated in 1977.
1972 - Name of Army Medical Field Service School changed to Academy of Health Sciences of the U.S. Army.
1973 - Baylor University Memorandum of Agreement with the U.S. Army Academy of Health Sciences affiliated more than 20 programs of instruction with 150 course offerings for academic credit at Baylor University. Terminated in 1977.
1987 - University School organized. Responsibilities were reassigned to other academic units in 1992.
1993 - George W. Truett Theological Seminary organized in Waco.
1994 - Seminary classes begin.
1995 - School of Engineering and Computer Science organized.
2002 - Honors College organized.
2005 - School of Social Work granted independent status from the College of Arts and Sciences.
Evaluating Tagging Methods and Movement Patterns of Round Gobies

The round goby (Neogobius melanostomus) is an invasive fish species introduced in the St. Clair River in 1990 and is now found throughout the Great Lakes basin. Information on round goby movement and behavior is needed to understand their potential impact on the Great Lakes. Research on potential tagging methods and movement of round gobies is scant. We explored the use of marking round gobies with passive integrated transponder (PIT) tags in order to determine the effects of tagging on growth and mortality. In general, we found that the presence of a tag in the fish had no strong effect on growth or mortality. We also conducted a study on the movement patterns of round gobies in Muskegon Lake. Using PIT-tagged fish, we followed 48 round gobies enclosed in a 20x20-m block net for 22 days. Our goal was to determine the movement patterns of the gobies within the block net and examine the effects of various factors on these patterns. However, during the course of the study, we found that the equipment used was not optimal for use with round gobies. Due to a high escape rate and low detection rate of fish, no conclusions could be drawn about round goby movement. Overall, we determined that although PIT tags do not strongly affect the growth or mortality of round gobies, the equipment currently available seems inadequate for tracking round gobies in shallow-water lake habitats.

Faculty Mentor: Carl Ruetz

Page last modified July 14, 2009
When astronauts return to the moon for long-duration missions, they will need reliable sources of power. Solar energy will be plentiful during the 14-Earth-day-long lunar daytime, but what about the equally long lunar night? NASA engineers are exploring the possibility of nuclear fission to provide the necessary power. If you’re having visions of a Three Mile Island nuclear reactor on the moon, put your fears to rest. A nuclear reactor used in space is much different than Earth-based systems, says Lee Mason of the NASA Glenn Research Center, who is the principal investigator for testing a fission-powered system for the moon. There are no large concrete cooling towers, and the reactor is about the size of an office trash can. Of course, it won’t produce as much energy as the big reactors on Earth, but it should be more than adequate for the projected power needs of a lunar outpost. “Our goal is to build a technology demonstration unit with all the major components of a fission surface power system and conduct non-nuclear, integrated system testing in a ground-based space simulation facility,” said Mason. “Our long-term goal is to demonstrate technical readiness early in the next decade, when NASA is expected to decide on the type of power system to be used on the lunar surface.” A fission surface power system on the moon has the potential to generate a steady 40 kilowatts of electric power, enough for about eight houses on Earth. It works by splitting uranium atoms in a reactor to generate heat that then is converted into electric power. The fission surface power system can produce large amounts of power in harsh environments, like those on the surface of the moon or Mars, because it does not rely on sunlight. The primary components of fission surface power systems are a heat source, power conversion, heat rejection, and power conditioning and distribution.
Glenn recently contracted for the design and analysis of two different types of advanced power conversion units as an early step in the development of a full system-level technology demonstration. These power conversion units are necessary to process the heat produced by the nuclear reactor and efficiently convert it to electrical power. Two different companies have designed concepts that can produce a total of 12 kilowatts of power. One uses piston engines and the other a high speed turbine coupled with a rotary alternator. “Development and testing of the power conversion unit will be a key factor in demonstrating the readiness of fission surface power technology and provide NASA with viable and cost-effective options for nuclear power on the moon and Mars,” said Don Palac, manager of Glenn’s Fission Surface Power Project. A contractor will be selected after a year of design and analysis. Testing of the non-nuclear system is expected to take place in 2012 or 2013 to verify the performance and safety of the systems and determine if these systems can easily be used on the moon, or even on Mars.
WebMD Medical News
Daniel J. DeNoon
Louise Chang, MD

Oct. 9, 2012 – About 13,000 people in 23 states got the fungus-contaminated steroid pain shots in the ongoing outbreak of fungal meningitis. So far, 119 people who got the shots have come down with fungal infections of the fluid surrounding their spinal cords and brains. Eleven of those people have died. The case count rises daily, as symptoms of fungal infection can take up to a month to appear, and there's often a delay in case reporting. Most of the 13,000 who got the tainted shots will not get the infection, suggests John Jernigan, MD, director of the CDC's office of health care infection prevention, research, and evaluation. "The attack rate is still to be determined, but so far it appears that the vast majority of patients who received the injection have not developed evidence of meningitis," Jernigan says in an email. "But the investigation is ongoing, and exposed patients and their physicians should be vigilant for signs of illness." All of the fungal infections to date have been in patients who received spinal injections. Some of the patients who received the contaminated steroids got shots in painful joints such as the knee or elbow. To date, none of these patients has reported a fungal infection. The source of the outbreak is a compounding pharmacy called New England Compounding Center (NECC). The company makes more than 2,400 different medicines sold throughout the U.S. All have been recalled. All cases so far have been in patients who received one of 17,676 single-shot vials of a steroid, methylprednisolone, since May 21, 2012. An FDA investigation of the NECC facility, including studies of whether other medications might be contaminated, is ongoing. There haven't yet been any reports of fungal infection in patients who got joint injections of the NECC steroids.
Even so, the CDC is warning such patients to be on the lookout for symptoms of fungal infection. Patients who have such symptoms should see a doctor for further tests. Doctors likely will collect joint fluid to test it for fungal infection. If you suspect you may have received a dose of contaminated medicine, contact the health provider who gave it to you. Ask if the medication came from NECC. All NECC products carry the NECC logo. Clinics that gave the suspect shots are contacting all patients to warn them to look out for symptoms. Symptoms may appear one to four weeks after getting a pain injection. For patients who got spinal injections, early symptoms may be very mild. At first, most patients only feel a little worse than usual. For example, patients with back pain may feel slightly worse pain, or slightly more weakness. The CDC warns patients who have had a spinal steroid shot since May 21, 2012, to call a doctor immediately if they have any of these symptoms. Treatment of fungal meningitis is complicated. Antifungal drugs must be given intravenously, every day. At least at first, patients should be treated in a hospital. Treatment often lasts for several months, and can have serious side effects. As of Oct. 9, the CDC reports deaths and cases in these states:

SOURCES: Curtis Allen, public information officer, CDC, email interview. CDC web site. FDA web site. David Kibbe, public information officer, Massachusetts Department of Health.
Chatrooms, social networking sites “behind generation that can’t spell”
November 23rd, 2010 - 6:33 pm ICT by ANI

London, Nov 23 (ANI): A study has suggested that Internet chatrooms and social networking sites are to blame for children spelling words incorrectly. The study says that as people type at speed online, there is now a “general attitude” that there is no need to correct mistakes or conform to regular spelling rules. And this means that children who have been brought up with the Internet do not question wrongly spelt words. “The increasing use of variant spellings on the internet has been brought about by people typing at speed in chatrooms and on social networking sites where the general attitude is that there isn’t a need . . . to conform to spelling rules,” the study stated. “We are now witnessing the effect these linguistic variations are having on children born into the computer age with such a high level of access in and out of schools,” the Scotsman quoted report author Lucy Jones, a former student at Manchester University, as saying. “They do not question their existence,” she stated. The paper, which surveyed a group of 18-to-24-year-olds as part of the research, found that the majority believe that unconventional spellings are used on the Internet because it is faster and has become the norm. More than one in five (22 percent) said they would not be confident in writing an important e-mail without referring to a dictionary or spell checker. Despite the widespread use of so-called “variant” spelling, almost a third of those questioned said that alternative non-standard spellings are “completely unacceptable”. Two thirds believe that dictionaries should contain variant spellings. “From this most recent survey we can conclude that the unprecedented reach and scale of the internet has given rise to new social practices and it is now an agent in spelling change,” Jack Bovill, chair of the English Spelling Society, added.
(ANI)
XML and Java Servlets

With the recent introduction of new APIs and tools for the JavaServer Pages (JSP) environment, marrying XML data to Java-based services and applications is easier than ever before. The September 17 release of the JavaServer Pages 1.2 specification brings several improvements:
- The new specification includes better mechanisms for accessing and describing XML data than earlier versions.
- Standard JSP tag libraries support XML-based data more efficiently and effectively.
- Lots of third-party JSP tag libraries are popping up that know about all kinds of XML applications and services.
- Better message-handling facilities are integrated into the JSP environment, not coincidentally based on XML and the Simple Object Access Protocol, aka SOAP.

Tag libraries permit collections of custom, XML-like tags to be defined and used in Java servlets constructed to invoke such libraries. In other words, a tag library defines a set of Java classes for the custom JSP actions that the tag library supports. Invoking the tag library imports custom processing and actions into the Java environment, so a Java runtime environment like JRun 3.0 will know what to do with such tags as they're recognized. Tag libraries are usually packaged as Java archives in JAR files to make custom tag functions available to developers as they create content, and to Java runtime environments and tools as they encounter and react to such custom tags.

The great thing about JSP and Java servlets is that they run on the server side, and can therefore control their environments to a large extent. This makes it possible to grab and interpret XML-based data on the server, and to transform it into plain-vanilla HTML, XHTML, or other formats (PDF, plain text, rich text files, and so forth) for delivery to Web clients.
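As a minimal sketch of that server-side pattern, the JDK's built-in XSLT engine (javax.xml.transform, part of the standard Java library) can turn an XML document into an HTML fragment. The XML document, the stylesheet, and the XmlToHtml class name below are all invented for illustration; in a real servlet or JSP, the same Transformer would write into the response's output stream instead of a StringWriter.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

// Transform XML to HTML on the server side with the JDK's XSLT support.
public class XmlToHtml {
    public static String transform(String xml, String xslt) throws TransformerException {
        // Compile the stylesheet, then apply it to the XML input.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws TransformerException {
        // Invented sample data: a tiny book list rendered as an HTML list.
        String xml = "<books><book>XML in a Nutshell</book></books>";
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
            "<xsl:output method='html' omit-xml-declaration='yes'/>" +
            "<xsl:template match='/books'><ul><xsl:apply-templates/></ul></xsl:template>" +
            "<xsl:template match='book'><li><xsl:value-of select='.'/></li></xsl:template>" +
            "</xsl:stylesheet>";
        System.out.println(transform(xml, xslt));
    }
}
```

Because the transformation happens entirely on the server, the client receives only ordinary HTML, which is exactly the compatibility benefit discussed next.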
This helps sidestep the kinds of horrible compatibility issues that dynamic content can cause on the user-agent side of a Web or other Internet service connection by keeping custom activities entirely on the server side, where they can be carefully implemented, tested, monitored, and controlled without necessarily being exposed to users. I find it very interesting that what works for effective delivery of XML data in general works equally well as a technique for combining Java and XML to support all kinds of powerful, dynamic applications and services.

For more information on this great topic, please consult:
- Sun's Java Server Pages home (http://java.sun.com/products/jsp/)
- Westy Rockwell's article on "JSP Tag Libraries" at www.informit.com (search on JSP or the entire title for this and lots of other useful references)
- Visit jakarta.apache.org to examine a substantial collection of custom JSP tag libraries.
- www.orionserver.com offers a nice JSP tutorial and another good collection of JSP tag libraries as well.
- Bill Brogden's excellent book Java Developer's Guide to Servlets and JSP does a nice job of introducing servlets and JSP, and covering tag libraries as well.

Have questions, comments, or feedback about this or other XML-related topics? Please e-mail me at email@example.com; I'm always glad to hear from my readers!

Ed Tittel is a principal at LANWrights, Inc., a wholly owned subsidiary of LeapIt.com. LANWrights offers training, writing, and consulting services on Internet, networking, and Web topics (including XML and XHTML), plus various IT certifications (Microsoft, Sun/Java, and Prosoft/CIW).

This was first published in December 2001
As an extended service of _____________________________________________, you will receive fact sheets to help you learn how to investigate, select and use waste minimization opportunities for your industry.

Reducing or eliminating hazardous substances is an important business decision. When you use hazardous chemicals in your processes, you are not just making a one-time purchase of the material; you are also paying for:
- Proper storage and disposal
- Permits
- Protecting worker health
- Submitting required reports
- Training employees

These costs add up, and finding alternatives to these substances should be a priority. Alternatives are available and often out-perform old process equipment and chemicals.

How can auto body repair shops' pollution prevention efforts help industrial laundries?

Small amounts of hazardous materials are often left in industrial shop towels and wipes after use. The wipes are usually not treated as hazardous waste and are often cleaned by industrial laundries for re-use. Wipes and towels from auto body repair shops often contain cleaners, oils, solvents and paints composed of hazardous materials. These chemicals are transferred to the industrial laundry's wastewater during washing. Auto body repair shops' efforts to reduce the amount of hazardous substances they use and discard in wipes will reduce the pollutant load in the laundries' wastewater. Pollution prevention efforts to substitute less hazardous cleaners, oils, solvents and paints will benefit both auto body shops and industrial laundries.

The pollution prevention ideas in this fact sheet may help you lessen the overall impact of your facility on the environment as well as potentially reduce operating costs.

Identify and Use Less Hazardous Paints and Coatings
Less toxic paints and coatings mean less hazardous material left on shop towels and wipes.
- Consider using paints and coatings with less hazardous ingredients.
Identify and Use Less Hazardous Solvents and Cleaners
Less hazardous cleaners mean less hazardous material left on shop towels and wipes.
- Consider using cleaners and solvents with a lower volatile organic compound (VOC) content.
- Consider using water-based or citrus-based (d-limonene) cleaners in place of solvents.

Reduce the Amount of Solvents and Cleaners Used
Employees often have ideas on how to use less solvent. Check with them.
- Use a spray bottle or plunger can to deliver solvents where they're needed.
- Don't dip shop towels or wipes into open solvent containers.
- Reduce the size of the shop towel or wipe. You'll reduce the amount of solvent used at the same time.
- Consider reusing shop towels or wipes for repetitive tasks.
- Keep used wipes and towels in closed containers between uses.
- Limit the amount of solvent available for use each day.

Spray Gun Cleaning Tips
- Consider developing a multiple-stage cleaning process with a soak stage that uses partially spent solvent.
- Keep spray guns in proper working condition. This minimizes cleaning effort and solvent use.
- Immerse only the spray gun tip when cleaning.

Recover Solvents and Cleaners from Towels and Wipes Before They Are Sent to the Laundry
Recovery methods include:
- Gravity draining
- Hand wringing
- Automatic wringing
- Centrifuging (explosion-proof)
- VOC stripping using steam

Handling tips:
- Use a liner (mesh-type bag) in the used towel/wipe collection container to keep the wipes above any free liquid in the bottom of the container.
- Don't put towels or wipes with free liquid into the collection container; wring them out first.
- Recovered solvents may be reused.
- Share Material Safety Data Sheets for materials used in your shop with your launderer.
- Collect, store and transport used shop towels and wipes to the laundry facility in closed containers.
- Don't use shop towels or wipes to clean up spills of hazardous materials or to dispose of excess materials.
- Sort shop towels or wipes according to the types of materials they may contain.

For Free, Non-Regulatory Assistance and Referrals, contact PPRC.

Produced by the Northwest Partnership for Environmental Technology Education for the Pacific Northwest Pollution Prevention Resource Center, 513 First Ave. West, Seattle, WA 98119. Phone: 206-352-2050; fax: 206-352-2049; e-mail: firstname.lastname@example.org; WWW: http://www.pprc.org
We should remember that...

of the 6,000 stars [that] the average human eye could see in the entire sky, probably not more than thirty – or one-half of one percent – are less luminous than the Sun; that probably, of the 700-odd stars nearer than ten parsecs, at least 96% are less luminous than the Sun. There is not even ONE real yellow giant – such as Capella, Pollux, or Arcturus – nearer than ten parsecs and only about four main sequence A stars.
– Dutch astronomer Willem Jacob Luyten (1899 - 1994)

The closest star to us is, of course, our own Sun. It's unusual because it's a solitary yellow dwarf, while most of the stars nearby are in binary or even multiple systems. What makes our star really special, though, is that it provides the energy for the only life in the Universe that we know of.

Less Than Ten Light Years

The nearest star to the Sun is Proxima Centauri (also known as alpha Centauri C), which is a red dwarf 4.2 light years distant. It has two stellar companions, the yellow dwarf Rigil Kentaurus (alpha Centauri A) and an orange dwarf, alpha Centauri B. They take up joint second place in our list, at 4.35 light years. Barnard's Star, a red dwarf, is just under six light years away. Next comes Wolf 359, another red dwarf. Yet another red dwarf, Lalande 21185, was thought to be the fourth-closest star when its co-ordinates were published by Joseph-Jérôme Lefrançais de Lalande (1732 - 1807) in 1801. This was before Barnard's Star and Wolf 359 were discovered. Lalande 21185 cannot be seen by the naked eye because at 7th magnitude it is too dim; however, it counts as sixth-closest to the Sun at 8.3 light years. The seventh-closest is a star most denizens of Earth would recognise: Sirius (alpha Canis Majoris), with eighth place taken by its companion Sirius B, sometimes referred to as 'the Pup'. Sirius B is classified as a white dwarf, but it is one of the biggest known: in fact, its mass is comparable to that of our own Sun.
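Luyten's quotation counts stars by the parsec, while this Entry quotes distances in light years; one parsec is about 3.26 light years, so converting between the two scales is simple division. A minimal sketch (the class and method names are illustrative only):

```java
public class StarDistances {
    // One parsec is approximately 3.2616 light years.
    static final double LY_PER_PARSEC = 3.2616;

    // Convert a distance in light years to parsecs.
    static double lightYearsToParsecs(double lightYears) {
        return lightYears / LY_PER_PARSEC;
    }

    public static void main(String[] args) {
        // Proxima Centauri at 4.2 ly is well inside Luyten's ten-parsec sample.
        System.out.printf("Proxima Centauri: %.2f pc%n", lightYearsToParsecs(4.2)); // ≈ 1.29 pc
        System.out.printf("10 pc = %.1f ly%n", 10 * LY_PER_PARSEC);                 // 32.6 ly
    }
}
```

Ten parsecs is therefore roughly 32.6 light years, so every star in the lists below sits comfortably within the region Luyten described.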
Completing the top ten stellar neighbours are BL Ceti, a red dwarf flare star, and its binary partner UV Ceti, which are 8.7 light years away from our Sun. Flare stars unleash bright flashes of light as well as streams of charged particles. Some of the stars studied have flares of such enormous intensity that they can increase the brightness of the star by up to 10%. The flares are only brief, like a camera flash, but would be detrimental to any nearby planets. Next in line is Ross 154, one of many stars discovered in 1925 by American astronomer Frank Elmore Ross (1874 - 1960). Ross 154 is a UV Ceti-type flare star 9.7 light years distant, and is the last of the stars within ten light years of our Solar System.

| # | Star | Other Name or Designation | Type | Constellation | Distance (light years) |
| #1 | Proxima Centauri | alpha Centauri C | Red dwarf | Centaurus | 4.2 |
| #2 | Rigil Kentaurus | alpha Centauri A | Yellow dwarf | Centaurus | 4.35 |
| #2 | alpha Centauri B | HD 128621 | Orange dwarf | Centaurus | 4.35 |
| #4 | Barnard's Star | Proxima Ophiuchi | Red dwarf | Ophiuchus | 5.98 |
| #5 | Wolf 359 | CN Leonis | Red dwarf | Leo | 7.7 |
| #6 | Lalande 21185 | HD 95735 | Red dwarf | Ursa Major | 8.3 |
| #7 | Sirius | alpha Canis Majoris | Blue-white main sequence | Canis Major | 8.5 |
| #7 | Sirius B | alpha Canis Majoris B | White dwarf | Canis Major | 8.5 |
| #9 | BL Ceti | Luyten 726-8 A | Red dwarf | Cetus | 8.7 |
| #9 | UV Ceti | Luyten 726-8 B | Red dwarf | Cetus | 8.7 |
| #11 | Ross 154 | V1216 Sgr | Red dwarf | Sagittarius | 9.7 |

Between Ten and Twelve Light Years

At 10.3 light years is another star from the Ross catalogue: Ross 248, a red dwarf flare star. Due to the wide variety of periods at which this star flares (4.2 years, 120 days, and five other catalogued outbursts between 60 and 291 days apart), astronomers suspect that Ross 248 has an undetected companion which is causing the erratic flaring. Next is Epsilon Eridani, which has a dust disc and a suspected extrasolar planet system, the closest detected up to the time of writing (2012).
The two candidate planets are not thought to be hospitable to life (as we know it) because their proposed orbits are so far from the star. If the planets do exist, they are likely to be frigid worlds like our outermost planet, Neptune.

The French astronomer Nicolas Louis de Lacaille (1713 - 62) went on a 1751-4 expedition to the Cape of Good Hope, effectively a blank canvas sky for him to map. Using the planet Mars as a point of reference, his observations were the foundations for working out the lunar and solar parallax. Finding himself something of a celebrity upon his return to Paris, de Lacaille hid from public attention in Mazarin College, writing up his findings. Barely taking care of himself, de Lacaille suffered from gout and was prone to over-working to the point of exhaustion. Unfortunately his catalogue, Coelum Australe Stelliferum, which described 14 new constellations and 42 nebulous objects among almost 10,000 southern stars, wasn't published until after he died at the age of just 49. One of those stars, Lacaille 9352, ranks as the 14th-closest to our Sun at 10.7 light years distance.

EZ Aquarii is a triple star system situated 11.3 light years away. EZ Aquarii A, B and C are all red dwarfs, and they may all be flare stars; however, not much is known about the smallest component (B). They are so dim (magnitude +13) that specialist equipment is required to view them. The system was labelled Luyten 789-6 by Dutch astronomer Willem Jacob Luyten, whose interest in astronomy had been sparked by viewing the predicted return of Halley's Comet in 1910, as an 11-year-old schoolboy. In 1925 Luyten lost an eye in an accident, but this tragedy did not wreck his chosen career. He was already working at the Harvard College Observatory, having been offered a post by the new director Harlow Shapley, whose own profile had been raised due to his participation in the Shapley-Curtis Debate of 1920.
Luyten 'observed and measured more stellar images than anyone else', according to his biography at the National Academy of Sciences. He took up teaching at the University of Minnesota in 1931, and when he retired in 1967 he was given the title of Astronomer Emeritus, which he held until his death at the ripe old age of 95.

Procyon is a binary system which registers at +0.3 magnitude. The system consists of a yellow-white main sequence subgiant star, Procyon A, and a white dwarf companion, Procyon B, which was detected by Arthur von Auwers in 1862.

| # | Star | Other Name or Designation | Type | Constellation | Distance (light years) |
| #12 | Ross 248 | HH Andromedae | Red dwarf | Andromeda | 10.3 |
| #13 | Epsilon Eridani | Sadira | Orange dwarf | Eridanus | 10.5 |
| #14 | Lacaille 9352 | HD 217987 | Red dwarf | Piscis Austrinus | 10.7 |
| #15 | Ross 128 | FI Virginis | Red dwarf | Virgo | 10.9 |
| #16 | EZ Aquarii A | Luyten 789-6 A | Red dwarf | Aquarius | 11.3 |
| #16 | EZ Aquarii B | Luyten 789-6 B | Red dwarf | Aquarius | 11.3 |
| #16 | EZ Aquarii C | Luyten 789-6 C | Red dwarf | Aquarius | 11.3 |
| #19 | Procyon A | alpha Canis Minoris | Yellow-white subgiant | Canis Minor | 11.4 |
| #19 | Procyon B | alpha2 Canis Minoris | White dwarf | Canis Minor | 11.4 |

The binary system 61 Cygni has two orange dwarf components of 6th magnitude at 11.41 light years away. Its distance was the first to be measured for any star. These two stars claim joint 21st place in our list of close stellar neighbours. Another pair of red dwarf stars, Struve 2398 A and B, positioned at just 11.5 light years distant, are the next nearest. They were studied by Russian-German astronomer Prof Friedrich von Struve (1793 - 1864), director of the Dorpat Observatory (now the Tartu Observatory) in Estonia, who listed them in his Catalogus novus stellarum duplicium (Double Star Catalogue) of 1827. Groombridge 34 is a pair of twin variable red dwarfs. Newly discovered variable stars are given capital-letter designations, so Groombridge 34 A and B are also known as GX Andromedae and GQ Andromedae respectively. The Epsilon Indi system is fascinating because it contains the closest-known brown dwarfs.
Brown dwarfs are approximately the same size as Jupiter, but their mass is at least ten times greater, possibly up to 50 times. These bodies are neither star nor planet, but 'failed' stars. Other titles have been proposed, as it's hardly encouraging to keep referring to them as 'failed stars'. Suggestions so far include planetar (which sounds like something from the science fiction genre) and substar (that 'sub' prefix isn't much of an improvement). Since 2004, planets have been discovered orbiting brown dwarfs (although not, as yet, in the Epsilon Indi system), so their profile has been raised. Hopefully they are in line for a better class in the future.

DX Cancri is a solo red dwarf flare star which expands to five times its usual brightness during flare activity. It is thought by some astronomers that DX Cancri is a member of the Castor Moving Group, which was suggested in 1990 by JP Anosova and VV Orlov at the Astronomical Observatory of Leningrad State University, Russia. A moving group is the term for a collection of stars which share the same origin. Although they are not gravitationally bound to each other, they are on the same path on their journey through the galaxy, like an unravelled, stretched-out cluster. The Castor Moving Group is named after the luminary of Gemini, and includes the stars Alderamin (alpha Cephei), Fomalhaut, Vega, psi Velorum and Zubenelgenubi (alpha Librae).

Tau Ceti is one of the few nearby stars which are visible to the naked eye, albeit in the dim constellation Cetus, the Whale. Tau Ceti shot to fame in 1960, when Frank Drake launched Project Ozma, aiming to detect non-natural signals from space. Drake chose two stars which were similar to our Sun, Tau Ceti and Epsilon Eridani, for his project, which evolved to become SETI, the Search for Extra-terrestrial Intelligence. In December 2012, it was announced that five planets had been discovered orbiting Tau Ceti, with one of them possibly residing in the system's habitable zone.
Red dwarf GJ 1061 in the southern constellation Horologium is the last of the stars within 12 light years. GJ 1061 is really small, even on the dwarf star scale: it registers just over ten percent of the Sun's mass. It is so dim (+13 mag) that you'd need a decent-sized telescope to view it, but just in case you ever get the opportunity, its co-ordinates are 03h 36m RA, −44° 30' 46" Dec.

| # | Star | Other Name or Designation | Type | Constellation | Distance (light years) |
| #21 | 61 Cygni A | V1803 Cyg A | Orange dwarf | Cygnus | 11.41 |
| #21 | 61 Cygni B | V1803 Cyg B | Orange dwarf | Cygnus | 11.41 |
| #23 | Struve 2398 A | NSV 11288 | Red dwarf | Draco | 11.5 |
| #23 | Struve 2398 B | Gliese 725 B | Red dwarf | Draco | 11.5 |
| #25 | Groombridge 34 A | GX Andromedae | Red dwarf | Andromeda | 11.6 |
| #25 | Groombridge 34 B | GQ Andromedae | Red dwarf | Andromeda | 11.6 |
| #27 | Epsilon Indi | HD 209100 | Orange dwarf + two brown dwarfs | Indus | 11.8 |
| #28 | DX Cancri | LHS 248 | Red dwarf | Cancer | 11.8 |
| #29 | Tau Ceti | HD 10700 | Yellow dwarf | Cetus | 11.88 |
| #30 | GJ 1061 | LHS 1565 | Red dwarf | Horologium | 11.99 |

Nearby Stars in Fantasy and Science Fiction

Stars which are close to our own Solar System have inspired imaginative writers going back hundreds of years. Here is just a sample:

Proxima Centauri: the 1990s TV series Babylon 5 featured the planet Proxima III, which hosts an Earth Alliance colony.

Alpha Centauri A: prolific author Isaac Asimov wrote about the water world Alpha of the Alpha Centauri A system in Foundation and Earth, a book in his Foundation series.

Alpha Centauri B: Witburg is a rocky planet orbiting Alpha Centauri B in the 2002 online role-playing game Earth & Beyond.

Barnard's Star: Timemaster, a 1992 novel by Robert L Forward, bases its plot in the Barnard's Star system.

Wolf 359: Star Trek fans will recognise Wolf 359 as the system where Starfleet's armada was practically wiped out by the hive-minded Borg.

Lalande 21185: in the 1951 novel Rogue Queen penned by L Sprague de Camp, the planet Ormazd which orbits Lalande 21185 is investigated by Earth's space authority.
Sirius: Micromégas is one of the earliest known science fiction stories; it was written in 1752 by François-Marie Arouet (better known by his pen name, Voltaire). The Micromégas of the story was an extremely tall alien visitor to Earth who hailed from one of the planets in the Sirius system.

Sirius B: in Seed of Light, a 1959 novel by Edmund Cooper, the Sirius A star is barren but Sirius B has a hospitable planet, Sirius B III, out of its five attendant worlds. The plot revolves around the people sent there to save the human race after the Earth has been devastated.

BL Ceti: Larry Niven wrote A Gift From Earth in 1968 as part of his Known Space collection of multiple works. The plot involves the twin red dwarf stars BL Ceti and UV Ceti, which are important signposts for the eventual destination.

UV Ceti: a space station called Eldorado, part of the 'Great Circle' route, is based at UV Ceti in the 1981 novel Downbelow Station by CJ Cherryh.

Ross 154: the planet Tei Tenga in the Ross 154 system is where the United Aerospace Armed Forces (UAFF) had a couple of military research bases in the video game Doom.

Ross 248: Diadem is an icy world in orbit around Ross 248 in Alastair Reynolds's story Glacial. Following a failed attempt at human colonisation, an investigation a century later reveals that the planet is sentient and that it uses cold-blooded annelids, burrowing through its ice-mantle, to 'think'.

Epsilon Eridani: Les Grognards d'Éridan (The Napoleons of Eridanus), written by French author Claude Avice in 1970, features a detachment of soldiers from the Napoleonic era who are abducted by aliens and transported to the Epsilon Eridani system to fight their battles for them. Epsilon Eridani was also the parent star of the planet Reach in the extremely successful Xbox game Halo: Reach.

Lacaille 9352: in the fictional universe of the Hyperion Cantos dreamed up by Dan Simmons, the inhospitable planet Sibiatu's Bitterness orbits the star Lacaille 9352.
Ross 128: Across the Sea of Suns, written in 1984 by Gregory Benford, features a race of alien aquatic creatures which live under the ice-mantle of the frozen world Pocks, a member of the Ross 128 system. EZ Aquarii A, B and C: the character Sheldon in The Big Bang Theory regularly lists 'the closest stars to me' when ascending and descending stairs. Once, in disguise, he spoke the words 'EZ Aquarii B, EZ Aquarii C,' while passing Amy on the stairs, and was dismayed that she recognised him. Procyon A/B: His Master's Voice was written by Polish author Stanislaw Lem in 1968. This book focuses on the attempts by highly intelligent Earthlings to understand a message from the Procyon system. 61 Cygni: The region surrounding 61 Cygni is known as the 'Darkling Zone' in the popular TV series Blake's 7. Groombridge 34: The Groombridge 34 system features in the Halo series of Xbox games. Epsilon Indi: New New York on Epsilon Indi III has a portal to the Earth, via a created wormhole, in the 1996 Starplex book by Robert J Sawyer. Tau Ceti: Time for the Stars, written in 1956 by Robert A Heinlein, explores the telepathic bond between twins over the vastness of space. Tau Ceti III, in the Star Trek universe, is a hospitable M-class planet. One of the bountiful fruits which grows there is the Kaferian apple. While they can be eaten raw, they are much more tasty stewed with Talaxian spices and served in a pie, as recommended in the vegetarian options at Quark's Bar on the space station Deep Space Nine.
General Chemistry/Periodicity and Electron Configurations

Blocks of the Periodic Table

The Periodic Table does more than just list the elements. The word periodic means that in each row, or period, there is a pattern of characteristics in the elements. This is because the elements are listed in part by their electron configuration.

The alkali metals and alkaline earth metals have one and two valence electrons (electrons in the outer shell) respectively. These elements lose electrons to form bonds easily, and are thus very reactive. They make up the s-block of the periodic table. The p-block, on the right, contains common non-metals such as chlorine and helium. The noble gases, in the right-hand column, almost never react, since their eight valence electrons make them very stable. The halogens, directly to the left of the noble gases, readily gain electrons and react with metals. The s and p blocks make up the main-group elements, also known as representative elements. The d-block, which is the largest, consists of transition metals such as copper, iron, and gold. The f-block, on the bottom, contains rarer metals including uranium. Elements in the same group or family have the same configuration of valence electrons, making them behave in chemically similar ways.

Causes for Trends

There are certain phenomena that cause the periodic trends to occur. You must understand them before learning the trends.

Effective Nuclear Charge

The effective nuclear charge is the amount of positive charge acting on an electron. It is the number of protons in the nucleus minus the number of electrons in between the nucleus and the electron in question. Basically, the nucleus attracts an electron, but other electrons in lower shells repel it (opposites attract, likes repel).

Shielding Effect

The shielding (or screening) effect is similar to effective nuclear charge. The core electrons repel the valence electrons to some degree.
The more electron shells there are (a new shell for each row in the periodic table), the greater the shielding effect is. Essentially, the core electrons shield the valence electrons from the positive charge of the nucleus.

Electron-Electron Repulsions

When two electrons are in the same shell, they will repel each other slightly. This effect is mostly canceled out due to the strong attraction to the nucleus, but it does cause electrons in the same shell to spread out a little bit. Lower shells experience this effect more because they are smaller and allow the electrons to interact more.

Coulomb's Law

Coulomb's law is an equation that determines the amount of force with which two charged particles attract or repel each other. It is F = k·q1·q2 / r^2, where q1 and q2 are the amounts of charge (+1e for protons, -1e for electrons), r is the distance between them, and k is a constant. You can see that doubling the distance would quarter the force. Also, a large number of protons would attract an electron with much more force than just a few protons would.

Trends in the Periodic Table

Most of the elements occur naturally on Earth. However, all elements beyond uranium (number 92) are called trans-uranium elements and never occur outside of a laboratory. Most of the elements occur as solids or gases at STP. STP is standard temperature and pressure, which is 0°C and 1 atmosphere of pressure. There are only two elements that occur as liquids at STP: mercury (Hg) and bromine (Br). Bismuth (Bi) is the last stable element on the chart. All elements after bismuth are radioactive and decay into more stable elements. Some elements before bismuth are radioactive, however.

Atomic Radius

Leaving out the noble gases, atomic radii are larger on the left side of the periodic chart and are progressively smaller as you move to the right across the period. Conversely, as you move down a group, radii increase. Atomic radii decrease along a period due to greater effective nuclear charge.
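To make Coulomb's law and the effective-nuclear-charge subtraction concrete, here is a small numeric sketch. The class and method names are illustrative, and the sodium example is mine; the physical constants are standard values:

```java
public class CoulombDemo {
    static final double K = 8.9875e9;   // Coulomb constant, N·m²/C²
    static final double E = 1.602e-19;  // elementary charge, C

    // Coulomb's law: F = k * q1 * q2 / r^2
    static double force(double q1, double q2, double r) {
        return K * q1 * q2 / (r * r);
    }

    // Effective nuclear charge: protons minus shielding core electrons.
    static int zEff(int protons, int coreElectrons) {
        return protons - coreElectrons;
    }

    public static void main(String[] args) {
        double bohr = 5.29e-11; // Bohr radius, m (typical proton-electron distance in hydrogen)
        double f = force(E, E, bohr);
        System.out.printf("Force at the Bohr radius: %.2e N%n", f); // ≈ 8.24e-08 N

        // Doubling the distance quarters the force, as the text says.
        System.out.printf("Force at twice that distance: %.2e N%n", force(E, E, 2 * bohr));

        // Sodium (Z = 11) has 10 core electrons shielding its lone valence electron.
        System.out.println("Z_eff for sodium's valence electron: " + zEff(11, 10)); // prints 1
    }
}
```

The sodium figure explains why the alkali metals shed their outer electron so readily: that electron feels an effective pull of only about +1.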
Atomic radii increase down a group due to the shielding effect of the additional core electrons, and the presence of another electron shell.

Ionic Radius

For nonmetals, ions are bigger than atoms, as the ions have extra electrons. For metals, it is the opposite. Extra electrons (in negative ions, called anions) cause additional electron-electron repulsions, making the electrons spread out farther. Fewer electrons (in positive ions, called cations) cause fewer repulsions, allowing the electrons to sit closer to the nucleus.

Ionization Energy

Ionization energy is the energy required to strip an electron from the atom (when in the gas state). Ionization energy is also a periodic trend within the periodic table organization. Moving left to right within a period or upward within a group, the first ionization energy generally increases. As the atomic radius decreases, it becomes harder to remove an electron that is closer to a more positively charged nucleus. Conversely, ionization energy decreases going from right to left across a period because there is a lower effective nuclear charge keeping the electrons attracted to the nucleus, so less energy is needed to pull one out. It decreases going down a group due to the shielding effect. Remember Coulomb's law: as the distance between the nucleus and electrons increases, the force decreases at a quadratic rate.

Ionization energy is considered a measure of the tendency of an atom or ion to surrender an electron, or of the strength of the electron binding; the greater the ionization energy, the more difficult it is to remove an electron. It may be an indicator of the reactivity of an element. Elements with a low ionization energy tend to be reducing agents and form cations, which in turn combine with anions to form salts.

Electron Affinity

Electron affinity is the opposite of ionization energy. It is the energy released when an electron is added to an atom. Electron affinity is highest in the upper right of the periodic table and lowest in the lower left. However, electron affinity is actually negative for the noble gases.
They already have a complete valence shell, so there is no room in their orbitals for another electron. Adding an electron would require creating a whole new shell, which takes energy instead of releasing it. Several other elements have extremely low electron affinities because they are already in a stable configuration, and adding an electron would decrease stability. Electron affinity follows from the same underlying causes as ionization energy.

Electronegativity

Electronegativity is how much an atom attracts electrons within a bond. It is measured on a scale with fluorine at 4.0 and francium at 0.7. Electronegativity decreases from upper right to lower left because of atomic radius, the shielding effect, and effective nuclear charge, in the same manner that ionization energy decreases.

Metallic Character

Metallic elements are shiny, usually gray or silver colored, and good conductors of heat and electricity. They are malleable (can be hammered into thin sheets) and ductile (can be stretched into wires). Some metals, like sodium, are soft and can be cut with a knife. Others, like iron, are very hard. Non-metallic atoms are dull, usually colorful or colorless, and poor conductors. They are brittle when solid, and many are gases at STP. Metals give away their valence electrons when bonding, whereas non-metals take electrons.

The metals are towards the left and center of the periodic table, in the s-block, d-block, and f-block. Poor metals and metalloids (somewhat metal, somewhat non-metal) are in the lower left of the p-block. Non-metals are on the right of the table. Metallic character increases from right to left and top to bottom; non-metallic character is just the opposite. This is because of the other trends: ionization energy, electron affinity, and electronegativity.
Modern laparoscopic surgeries may be minimally invasive, but they still require multiple incisions. To make laparoscopies even less intrusive, scientists and surgeons at Columbia University and Vanderbilt University have built a robot that can enter the body through a single 15-millimeter incision or through a natural opening like the mouth. Once inside the body the robot, which has not yet been tested in humans, unfolds like a NASA spaceship, communicates its position through a wire connected to an external computer, and follows instructions to advance, stop, tie sutures and perform other actions. It comes with a camera that tracks the movements of surgical instruments and projects them onto a computer console. Developers say it could perform appendectomies, hysterectomies, some types of kidney surgery, and possibly ear and throat surgery. The Insertable Robotic Effector Platform (IREP) is entering animal testing this fall and could be available within five years. Until now, no study has offered conclusive proof that robotic surgery trumps traditional laparoscopic techniques, but IREP's developers say it is lighter and cheaper than da Vinci, the leading surgical system. “There is definitely a potential here,” says William Lowrance, a robotic surgery expert at the University of Utah, adding that it might offer more dexterity and precision than traditional laparoscopic tools. This article was originally published with the title The Robot Will See You Now.
Learning how to read coherently and write effectively teaches you to think critically, increases your vocabulary, and improves your language and research skills. No matter what field you enter, communication and writing skills are important and highly sought after by employers. Maranatha’s English major will expose you to enduring literary works and teach you to write more effectively, whether you write sermons, correspondence, books, magazine articles, or other forms of communication. Because of its emphasis on critical thinking and writing, English is also one of the best ways to prepare for graduate school. Unlike secular English programs that promote relativistic theory about language, Maranatha’s English major teaches students to “approve things that are excellent” (Phil. 1:10) and to evaluate literature from a moral and biblical perspective. The English major will not only immerse you in the wide world of literature and awaken your cultural awareness but also equip you to discern objective truth and beauty according to God’s standard. An English minor is also available for those majoring in another field who want to improve their ability to think, read, and write effectively through an acquaintance with excellent literary works. This is the suggested class list for the English major. You may also look over the details for our English Education major. 
| Course | Credits | Course | Credits |
| English Composition 1 | 3 | English Composition 2 | 3 |
| Computer Information System Elective | 1 | Computer Information System Elective | 2 |
| Christian Life 1 | 1 | New Testament Survey | 2 |
| Old Testament Survey | 2 | Minor | 3 |
| Fundamentals of Public Speaking | 2 | Music Elective | 2 |
| The Modern World | 3 | Science Elective | 3 |
| British Literature Survey to 1789 | 3 | British Literature Survey: 1789 to Present | 3 |
| Composition and Literature | 3 | American Masterpieces | 3 |
| Elementary Spanish 1 | 3 | Elementary Spanish 2 | 3 |
| Computer Information System Elective | 1 | Baptist Heritage | 3 |
| Christian Life 2 | 1 | Bible Elective | 2 |
| Principles of Bible Study | 2 | Minor | 3 |
| Writing Elective | 3 | Literary Criticism | 3 |
| Period Literature Elective | 3 | Computer Information System Elective | 1 |
| Intermediate Spanish 1 | 3 | Intermediate Spanish 2 | 3 |
| Introduction to Philosophy | 2 | Bible Doctrine 1 | 3 |
| Christian Life 3 | 1 | Minor | 3 |
| Minor | 3 | American Studies Elective | 3 |
| Period Literature Elective | 3 | Period Literature Elective | 3 |
| Writing Elective | 3 | Writing or Period Literature Elective | 3 |
| Bible Doctrine 2 | 3 | English Elective | 3 |
| Humanities Elective | 3 | Humanities Elective | 3 |

Possible careers:
- Copy editor
- Creative writer
- English as a Second Language teacher
- Linguistic specialist
- Literacy tutor
- Literature specialist

Chelsie (Czichray) Messenger (’08) is enrolled in the Professional Communications program at Clemson University and hopes to someday work in business communications or public relations. She previously taught English at Easley Christian School in Easley, SC.

“My academic advisor, Nathan Huffstutler, helped me sort out my future career goals and helped me develop my love of technical writing,” Messenger said.
50 years ago, President John F. Kennedy told the United States that man would go to the moon. Soon, another American president may announce that the same celestial body will serve as a waypoint for manned space exploration. The Verge has learned that NASA intends to deploy a robotic lunar rover on the Moon in 2017 to search for water and other resources necessary for space travel, and that NASA may have secured support from the White House for an actual manned outpost — a space station — floating above the far side of the moon.

We spoke to space policy expert John Logsdon as well, and he said that the administration is planning a shift in policy that could make the Moon and its surrounding space a more important part of the equation. And our source tells us that part of that plan is deploying RESOLVE on the lunar surface in 2017. RESOLVE is a payload designed to be mounted on a robotic rover and driven across the moon to find water and other materials useful for space travel, so that spaceships won't have the tremendous expense of lifting them from Earth in order to bring them along. They could theoretically travel from Earth to the lunar waypoint and find resources waiting for them before undertaking a journey further abroad. 2017 also happens to be the target date for the first unmanned mission of NASA's new Space Launch System and Orion capsule, which will make a loop around the moon, but our source wasn't sure whether that craft would be the one to drop the lunar rover.
In fact, they suggested that instead, the rover would be deployed as part of a commercial partnership, and that the mission would “lay the groundwork for commercial lunar transport.” Private firms have been working on commercial space travel for a while, with the SpaceX Dragon capsule successfully completing its first resupply mission to the International Space Station just last month, but recently there’s been some interest in space mining as well: Planetary Resources, a company backed by James Cameron, Larry Page and Eric Schmidt, plans to launch a spaceship within two years and begin mining asteroids by 2022. When we asked NASA about the possibilities of a moon base, a representative wouldn’t confirm or deny the plans. “We are pursuing a range of possible destinations en route to an eventual trip to Mars,” they explained, but admitted that a lunar waypoint would be “in the range of possibilities that have been discussed” and could be “a potential stepping stone to Mars.” At present, NASA could only confirm that the Space Launch System’s first unmanned mission (Exploration Mission 1) is still slated for 2017, that a second, manned mission (Exploration Mission 2) with a crew of four would likely occur in 2021, that astronauts would attempt to land on an asteroid by 2025, and that they would arrive at Mars sometime in the 2030s. NASA did caution, however, that there’s no current plan to land people on the Moon itself. “Neither EM1 nor EM2 would put boots on the surface of the Moon,” NASA told us. Rumors of such a deep-space outpost surfaced as early as February of this year, when a leaked memo from a NASA administrator detailed an idea to build a “human-tended waypoint” at Earth-Moon Lagrange Point 2 (EML-2): a point in space where balanced gravitational forces allow an object to remain in stationary orbit relative to both the Earth and the Moon. From there, NASA could launch missions deeper into space — say, to Mars, or a near-Earth asteroid — using the base as a stepping stone.
In September, the Orlando Sentinel revealed that the “gateway spacecraft” wasn’t just a crazy idea. The publication reported that the White House had been pitched on a plan to begin construction as early as 2019, possibly defraying the to-be-determined expense by using parts left over from the International Space Station and components from international partners, including Russia — which has committed to a moon base of its own — and Italy. At the time, the Sentinel reported that it was unclear whether the Obama Administration would support the move. Now, space policy expert John Logsdon told Space.com that the White House is indeed interested in the idea, and had merely been “holding off announcing that until after the election.” In 2010, President Obama told the nation that we would send men to an asteroid for the first time, and then on to Mars by the mid-2030s, but suggested that the Moon itself wasn’t part of the plan:
Hundreds of years ago, settlers who came to Fort Worth were greeted by rolling prairies as far as the eye could see. Today, those welcoming fields have been replaced by growing cities and towns, but the Botanical Research Institute of Texas (BRIT) remembers this heritage and culture with its annual celebration of Prairie Day. BRIT's Prairie Day offers family-friendly education about the beauty and importance of the North Texas landscape through hands-on events and activities that kids will love. Kids will be encouraged to get a little dirty as they help to make seed balls, a Prairie Day tradition. Using a Native American technique, visitors will combine seeds, humus, and red clay into tiny packets of life that will be tossed into the fields surrounding BRIT to populate the area with native plants. Imagine the fun of returning to BRIT next spring to see the plants you helped to grow! Other activities throughout the day will combine fun family games with opportunities to learn about nature, including face painting, balloon twisting, and a variety of games provided by the Log Cabin Village. In addition, live music and cowboy poets will be on hand to entertain the crowd. Don't miss these other great activities:
- Beekeeping demonstrations
- Soap and candle making
- Children’s coloring contest
- Basket weaving
- Solar cooking demonstrations
- Meet two special guest prairie dogs
The 3rd annual Prairie Day event will be held on Saturday, May 18, from 10 am to 2 pm at BRIT headquarters, 1700 University Drive.
Crab, King – US (kani) In the U.S., there are three commercial King Crab species – Red, Blue and Golden King Crab – all caught in Alaskan waters. King Crabs typically mature around 5-7 years of age and can have a leg span of 6 feet. Most King Crab populations are at healthy levels of abundance and not considered overfished. Some populations are closed for fishing, however, to rebuild numbers. King Crabs are caught using crab pots, which are typically 700 lb steel pots covered with nylon webbing. Although large, they do relatively little damage to the muddy sea floor. New management policies and regional closures have helped mold this ‘derby’ fishery into a well-regulated, efficient and economically stable program. Bycatch in King Crab fisheries is low, typically consisting of female and undersized male crabs.
Smartphones have been around for at least several years now, but they still have certain limitations. Despite having a plethora of wireless technologies built in--Wi-Fi, Bluetooth, 3G, etc.--there's no simple way to transfer "clippings" of data from one device to another. But a new research project at MIT called Sparsh is aiming to fix that oversight. Sparsh (the Hindi word for "touch") isn't an app, at least not in the way we generally use the word. It's a tool that's supposed to be part of a mobile operating system, like "undo" or "select all," running within apps at all times. It creates a virtual cloud-based clipboard where any data, like a phone number or photograph, can temporarily live until it's "pasted" to another device. For it to work, at least two devices need to be Sparsh-enabled. In concept, the user's touch becomes the vehicle for a copy-and-paste-like function: the person touches data on a device, such as a photo or text, and Sparsh sends it to the cloud. The same person then touches another device, and presto! The relevant information is pasted in as if it had been copied on the same machine. Sparsh isn't the only tool for transferring small amounts of device-to-device data on the scene. Indeed, a popular iPhone app called Bump allows people to trade photos, apps, contact info, and even music from one phone to another simply by bumping the devices together. Bump is very cool, but it requires both the sender and recipient to be running the app. It's also limited in what it can send and where it can send it--it only works from phone to phone, and while there are many options for things it can send, there are more things it simply can't. Sparsh aims to live in the devices we use at the operating-system level, meaning it would be intuitive to use and available within any app for almost any type of data.
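The cloud-clipboard idea described above can be sketched in a few lines. This is a minimal illustration of the concept only: the class and method names below are invented for this example and are not Sparsh's actual API.

```python
import time

# Hypothetical sketch of a Sparsh-style cloud clipboard: "copying" on one
# device pushes data to a shared per-user store, and "pasting" on any other
# device belonging to the same user pulls it back down.

class CloudClipboard:
    """Shared per-user clipboard, standing in for a cloud service."""

    def __init__(self):
        self._store = {}  # user_id -> (timestamp, payload)

    def copy(self, user_id, payload):
        # Touching data on a Sparsh-enabled device uploads it to the cloud.
        self._store[user_id] = (time.time(), payload)

    def paste(self, user_id):
        # Touching a second device retrieves whatever the user last copied.
        entry = self._store.get(user_id)
        return entry[1] if entry else None

cloud = CloudClipboard()
# Phone: the user touches a phone number on screen...
cloud.copy("alice", {"type": "contact", "number": "+1-555-0100"})
# Laptop: the same user touches a text field, and the data appears.
print(cloud.paste("alice"))  # {'type': 'contact', 'number': '+1-555-0100'}
```

The key design point the article makes is that this store lives at the operating-system level, so any app could call `copy` and `paste` on any kind of data, unlike an app-to-app tool such as Bump.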
The Chesapeake Bay TMDL, Maryland's Watershed Implementation Plan and Maryland's 2012-2013 Milestone Goals The Chesapeake Bay TMDL: A Pollution Diet for the Chesapeake Watershed The Chesapeake Bay is a national treasure constituting the largest estuary in the United States and one of the largest and most biologically productive estuaries in the world. Despite significant efforts by federal, state, and local governments and other interested parties, pollution in the Chesapeake Bay prevents the attainment of existing water quality standards. The pollutants that are largely responsible for impairment of the Bay are nutrients, in the form of nitrogen and phosphorus, and sediment. The United States Environmental Protection Agency (EPA), in coordination with the Bay watershed jurisdictions of Maryland, Virginia, Pennsylvania, Delaware, West Virginia, New York, and the District of Columbia (DC), developed and, on December 29, 2010, established a nutrient and sediment pollution diet for the Bay, consistent with Clean Water Act requirements, to guide and assist Chesapeake Bay restoration efforts. This pollution diet is known as the Chesapeake Bay Total Maximum Daily Load (TMDL), or Bay TMDL. MDE took part in an ongoing, high-level decision-making process to create the essential framework for this complex, multi-jurisdictional TMDL that will address nutrient and sediment impairments throughout the entire 64,000 square mile Chesapeake Bay watershed. MDE participated in numerous inter-jurisdictional and inter-agency workgroups and committees over the last three years to provide technical expertise and guidance for developing the Bay TMDL in a manner consistent with the State’s water quality goals and responsibilities. In particular, MDE worked to ensure that the Bay TMDL addressed the nutrient and sediment impairments in all of Maryland’s tidal waters listed as impaired by those pollutants on the State’s Integrated Report of Surface Water Quality. 
MDE took the lead on developing an allocation process that will enable the State to meet a key requirement for the Bay TMDL and Maryland’s Watershed Implementation Plan: the sub-allocation of major basin loading caps of nutrient and sediment to each of 58 “segment-sheds” in Maryland – the land areas that drain to each impaired Bay water quality segment – and to each pollutant source sector in those areas. Maryland’s Watershed Implementation Plan for the Bay TMDL Concurrent with the development of the Bay TMDL, EPA charged the Bay watershed states and DC with developing watershed implementation plans in order to provide adequate “reasonable assurance” that the jurisdictions can and will achieve the nutrient and sediment reductions necessary to implement the TMDL within their respective boundaries. Maryland’s Phase I Plan provides a series of proposed strategies that will collectively meet the 2017 target (70% of the total nutrient and sediment reductions needed to meet final 2020 goals). After more than a year of cooperative work, MDE and the Departments of Natural Resources, Agriculture, and Planning released a Draft Phase I Plan for public review in October 2010 and, following extensive consideration of hundreds of public comments, submitted Maryland’s Final Phase I Watershed Implementation Plan to EPA on December 3, 2010. Maryland’s Phase II Plan provides a series of proposed strategies that will collectively meet the 2017 target (60% of the total nutrient and sediment reductions needed to meet final 2025 goals). This was changed from Phase I due to concerns that implementation was not achievable within that timeframe. Maryland worked with many partners in local jurisdictions to develop Phase II Watershed Implementation Plans with more detailed reduction targets and specific strategies to further ensure that the water quality goals of the Bay TMDL will be met. See Maryland's Development Support for the Chesapeake Bay Phase II WIP webpage.
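As a purely illustrative sketch of what sub-allocating a basin loading cap might look like, the toy example below splits a cap across segment-sheds in proportion to their baseline loads. The segment names and numbers are invented, and Maryland's actual allocation process weighs far more factors than baseline load alone.

```python
# Toy illustration: pro-rata sub-allocation of a basin-wide loading cap.
# All names and figures are hypothetical, not Maryland's actual allocations.

def suballocate(basin_cap, baseline_loads):
    """Split a basin loading cap across segment-sheds pro rata to baseline load."""
    total = sum(baseline_loads.values())
    return {seg: basin_cap * load / total
            for seg, load in baseline_loads.items()}

baseline = {"Segment-shed A": 4.0,   # million lbs/yr, illustrative
            "Segment-shed B": 2.5,
            "Segment-shed C": 1.5}
caps = suballocate(6.0, baseline)    # hypothetical basin cap of 6.0 million lbs/yr
for seg, cap in caps.items():
    print(f"{seg}: {cap:.3f} million lbs/yr")
```

A pro-rata rule like this guarantees the segment-shed caps sum exactly to the basin cap; real allocation then further divides each segment-shed cap among source sectors.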
MDE is continuing to work with its partners in local jurisdictions to implement the Phase II WIP. See the Implementing Maryland’s WIP: Making Progress toward Bay Restoration Goals webpage. Please direct questions or comments concerning this project to Tom Thornton with Maryland's TMDL Program at (410) 537-3656 or email at Thomas.Thornton@maryland.gov.
(CNN) — For parents in Somalia, giving their children immunizations is not a choice. In a country enduring more than 20 years of conflict, Somalia is home to one of the highest child mortality rates in the world, with one in five Somali children dying before their 5th birthday, aid agencies say — in many instances, from diseases that could be prevented by vaccines. Yet for some equally loving parents in the developed world, the messages surrounding childhood vaccination have become muddied. Some communities in areas previously considered disease-free are now falling below the levels of “herd immunity” required to protect against diseases such as measles, whooping cough and mumps. This week, in Swansea, Wales, the local public health agency announced that 886 people have been diagnosed with measles in an epidemic that started in November. The outbreak has been attributed to low measles, mumps and rubella immunization rates. One man’s death has been linked to the measles virus, while 80 people have been hospitalized. In 2011, six people in France died as a result of a measles epidemic that neared 15,000 confirmed cases, according to the World Health Organization. In 2010, a whooping cough outbreak, resulting from pockets of under-vaccinated people in California, resulted in 10 deaths, according to the California Department of Public Health. Nine of these were infants too young to be vaccinated. “We are extremely concerned about what’s happening in some parts of the developed world,” said Jos Vandelaer, chief of immunization at UNICEF, one of the groups helping vaccination efforts in Somalia. “In the developing world, many people don’t even get the chance to be immunized.
Health systems are not strong enough to take the vaccine to every child despite the fact that their parents want it.” Parents with real fears Measles, whooping cough and Hib (haemophilus influenzae type B), along with many other childhood diseases, can be deadly, but they are vaccine-preventable. Measles alone killed more than 150,000 people globally in 2011, according to WHO. Measles is also highly infectious, with one carrier likely to pass on the virus to between 14 and 18 other susceptible people, said Dr. Matthew Snape of the Oxford Vaccine Group in the pediatrics department at the University of Oxford, England. Despite the severity of these diseases, some parents in the developed world choose not to immunize their children and accept the risks. “Studies show that it is the upper middle class, well-educated Caucasian parents who are shunning vaccines,” said Dr. Paul Offit, director of the Vaccine Education Center at the Children’s Hospital of Philadelphia. “They have generally gone to graduate school, are in positions of management and are used to being in control.” A study released this month by the National Health Performance Authority in Australia reflected this trend. A number of affluent Sydney suburbs were identified as regions where low levels of immunity have put entire communities at risk from these diseases. The reasons behind parents’ decisions are complex. Part of the problem is lingering doubts around vaccine safety that were compounded by a retracted 1998 study linking the measles, mumps and rubella vaccine with autism. Although declared an “elaborate fraud” by the British Medical Journal, it raised questions about the safety of immunization in the minds of many parents. These doubts then were spread worldwide on the Internet and in the media by anti-vaccination groups and some celebrities. “If you want to scare yourself about vaccines, it’s not that hard,” Offit said. 
“Just turn on your computer.” For such parents, the perceived risks of vaccination outweigh the risks they associate with disease. In Australia, where vaccination is not mandatory, the anti-vaccination Australian Vaccination Network website says parents need to make an informed choice. The site offers links to articles and parental accounts of the potential side effects of many vaccines. A UNICEF working paper released this week to coincide with World Vaccination Week has tracked the rise of anti-vaccination sentiment in Eastern and Central Europe and concludes that poorly managed immunization campaigns in some countries have also contributed to the problem. Concerned parents in the affected countries are taking to blogs and Facebook, discussing their mistrust of vaccines and government programs, questioning the involvement of pharmaceutical companies and often recommending alternative medicine. A March survey conducted by the U.S. organization Public Policy Polling showed that 20% of Americans believe there is a link between childhood vaccines and autism, and a further 34% were not sure. Diseases long forgotten in the developed world While there are some risks associated with vaccines, they are mostly minor, such as pain at the vaccine site or low-grade fever, according to the U.S. Centers for Disease Control and Prevention. A serious allergic reaction is rare and usually reported in less than one out of 1 million doses, the agency reported. “Hundreds of millions of children every year are vaccinated, and the number of side effects we see is minimal,” UNICEF’s Vandelaer said. “The anti-vaccine groups focus on the potential side effects, not on the real side effects.” On the question of autism, numerous studies conducted over the past decade have all demonstrated there is no scientific link between vaccines and autism. With so much conflicting information readily available to parents, Dr. 
Dina Pfeifer, program manager for vaccine-preventable diseases and immunization for WHO’s Europe office, said she believes the decision of whether to immunize children has become so fraught that many parents choose to do nothing at all. “They have a difficulty dealing with the amount of information for and against (vaccination) on the Internet, and out of this confusion they are failing to recognize the risks of the disease,” she said. Another factor driving parents’ decision not to vaccinate is the security that comes with herd vaccination, as rates of immunization for many diseases remain above 92% for the population. But Europe’s recent battle with measles demonstrates the problems under-vaccinated populations can pose, especially with older children. “Europe had 100,000 cases of measles from 2009 to 2012, and that shows how prevalent the pockets of un-immunized populations are in that area,” Pfeifer said. “Almost 50% of those cases were older than 10 years of age, and the older you are when you contract measles, the more severe the course of the disease.” These pockets also tend to be affluent; their parents are the ones able to afford overseas travel. In 2008, a 7-year-old U.S. boy whose parents chose not to immunize him against measles traveled with his family to Switzerland. He caught the virus and returned to San Diego, unknowingly exposing 839 people to the disease and infecting 11 unvaccinated children, according to the journal Pediatrics. In Europe and the United States, parents and most people under 45 have never seen the effects of diseases such as measles, diphtheria or polio. “The fear factor (among parents in the developed world) is missing now — the knowledge of what’s on the other side if you don’t have vaccinations,” said Dr. Seth Berkley, CEO of the Global Alliance for Vaccines and Immunization, known as GAVI Alliance.
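The herd-immunity percentages cited in this article are consistent with a standard epidemiological approximation: if one case infects R0 others in a fully susceptible population, sustained transmission stops once more than 1 - 1/R0 of the population is immune. A small sketch, using the 14-18 secondary cases per measles carrier quoted earlier:

```python
# Standard herd-immunity threshold approximation: immunity above 1 - 1/R0
# stops sustained transmission. R0 values are from the article's quoted
# range of 14-18 secondary measles cases per carrier.

def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to halt spread."""
    return 1.0 - 1.0 / r0

for r0 in (14, 18):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.1%}")
```

For R0 between 14 and 18 the threshold works out to roughly 93-94%, which is why communities dipping below the low-90s coverage mentioned above become vulnerable to measles outbreaks.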
The lack of knowledge of these diseases is also a problem among younger doctors and pediatricians, who may not be able to identify the signs, resulting in misdiagnoses. “There is a lot of value in case-based learning, but it is difficult to learn how to recognize these diseases if you haven’t seen them before,” said WHO’s Pfeifer. In contrast, most parents in the developing world, in places such as Somalia, have seen family members suffer, be maimed or die from such diseases, health advocates said. Education and motivation To address the problem, Berkley prescribes localized programs in countries to supplement the already high overall levels of immunization. Other physicians are supporters of parental education and want to ensure parents feel free to ask as many questions of their doctors and health care workers. Dr. Steve Hambleton of the Australian Medical Association said further motivation may be necessary. “When you incentivize the parents in a meaningful way, whether it be financial or with other incentives, you can make an enormous difference in vaccination uptake,” he said. Berkley, a doctor who specializes in epidemiology and global health, said he has seen the devastating effects of vaccine-preventable diseases in war-torn countries and refugee camps. Berkley said he wished he could take some parents in the developed world “on a tour, show them how horrible it is. Show them the illness that occurs out of these viruses.” “We’ve brought down child mortality dramatically with these vaccination campaigns and we are making dramatic progress, but the challenge is getting people to understand what the world was like before this.”
January 18, 1813 Joseph Farwell Glidden: The Father of Barbed Wire Joseph Farwell Glidden was born on January 18, 1813, and became known as the "Father of Barbed Wire." In 1874, Glidden was issued patent #157,124 for the first wire technology that was capable of restraining cattle. While his barbed wire was neither the first nor the last, it was ruled the best after a three-year legal battle.
OurWorld 2.0 (http://ourworld.unu.edu/en): What does Cancún offer the climate generation? Posted by Huw Oliphant on December 8, 2010, in General. Across the globe, young people are paying attention to what is happening at the COP16 climate negotiations in Cancun, Mexico. They are the “climate generation” — the ones who are going to have to live with our climate legacy, our tendency to prevaricate and our collective global failure to act sooner. The hope is that we will see substantive progress in Cancun towards an internationally binding agreement. This hope was echoed by a group of 30 young climate champions (from Japan, Korea, Thailand, Indonesia, Australia and Vietnam) who met in Vietnam on 22-26 November 2010 and who called on the leaders of the world to stop talking and take action on climate change. They were brought together by the British Council Climate Generation Project in a workshop that focused on green business and entrepreneurship and aimed to help participants develop their green projects through project management and leadership skills training, as well as interaction with experts in the field. What can the climate generation do? To begin with, business as usual is not an option. These young people are green entrepreneurs and they are all developing “cool” projects aimed at addressing climate and sustainability issues in their community. For instance, Syuichi Ishibashi from Japan has developed an impressive energy literacy project to help monitor on-line energy use at home. Other projects include a factory making furniture from recycled materials in Thailand; a community based conservation project (link in Indonesian) near Lake Buret in Tulungagung, East Java; a green fashion show in Korea; and a project aimed at developing smart grid systems in Vietnam.
Young leaders can make a difference Siti Nur Alliah, a British Council Climate Champion from Indonesia, is tackling a burning issue in the heart of Indonesia’s countryside in a quest to reverse climate change and lift a community out of poverty. As a community organiser, Alliah was keenly aware that farmers in Sekonyer village, central Borneo, remained poor no matter how much they burned surrounding forest to expand their farmland. “Tragically, no matter how many resources are exploited, they are still considered as a community living under the poverty line,” she said. So Alliah decided to see if there were more environmentally friendly agriculture methods that would have the added benefits of raising the quality and quantity of farm yields. The result was the Forest Farming project, which works with villagers in Sekonyer to expand their knowledge of agriculture to find better ways of managing their land. Alliah says the project depended on cultivating local knowledge, encouraging the community to take action and fostering commitment in the village to sustained involvement. She says the great willingness of the indigenous people to work with her and her team to change their farming practices is a real sign that they are making a difference. Climate Champion Hiroki Fukushima is part of a network of Japanese students who have issued the Climate Campus Challenge to educational institutions throughout the country. To help cut the education sector’s emissions of greenhouse gas, the students encourage universities to use renewable energy sources, retrofit campus buildings and buy green products. They then rank the colleges according to their success. “We made an environmental survey of 334 universities and assessed them according to criteria, such as energy consumption per student, reduction in energy consumption, climate change policies, climate education for students and unique initiatives,” Hiroki said. 
“We published an Eco University Ranking and awarded certificates to universities with good environmental policies and activities, and organised a seminar to promote universities with good practice.” Hiroki says the number of Japanese students participating in environmental activities has been on the rise since the 1990s but few of the activities were aimed at tackling climate change on campus. To remedy the situation, the core group studied projects overseas and thought how the activities could be best adapted to implement the campus challenge. They then enlisted students at various universities and established a network to realise the project. Thai Climate Champion Panita Topathomwong is driving home her environmental message through her Cool Bus Cool Smile project. Panita is encouraging residents to cut their greenhouse gas emissions by leaving the car in the garage and hopping on public transport. She says that transport is one of the big sources of carbon dioxide emissions in urban centres and cutting back on car use would help lower the sector’s environmental impact and have the added benefit of easing gridlock. So, with the support of the Bangkok Mass Transit Authority (BMTA) and the Ministry of Transport, Panita and her team organised a design competition to decorate three city buses and take the message to the road. The decorated buses toured Bangkok streets for three months after a high-profile launch in April 2009 attended by the Vice Minister of Transport, the Director of the BMTA, a representative from the British Embassy Bangkok and various media. “We believe that by redecorating the buses with great images we conveyed the message about climate change to wider audiences in Bangkok.
The buses were like mobile billboards which convinced everybody to be concerned about climate change,” Panita said. It is cool to be a Climate Champion Since 2008, the British Council Climate Generation has worked with over 120,000 young people from across the world interested in tackling climate change. Through the Climate Generation project, young people have the chance to come up with grassroots projects to combat and offset the effects of climate change. The participants are given the training and resources they need to realise their proposals and spread the word about the issue in their communities. Climate Generation encourages young people interested in tackling climate change to connect with each other, come up with local solutions and reach out to local, national and international decision-makers. As Climate Champions, programme participants have access to the training and information they need to ignite discussion in their communities and devise projects that will help people adapt to and mitigate climate change. The result is a global network of enthusiastic young people with the knowledge, contacts and on-the-ground resources to take action on climate change and make positive contributions to people’s lives. Climate Champions have come from a wide variety of backgrounds including government, business, entrepreneurship, NGOs, education and media. Through training in communication and negotiation, they can learn how to put their plans into practice and give voice to the concerns of their generation. Clearly, the Climate Champions are acting in advance of the outcomes at COP16 and “doing it themselves” but hope that real progress can be made at Cancun. For more information on the programme, please contact Huw Oliphant.
Article printed from OurWorld 2.0: http://ourworld.unu.edu/en
URL to article: http://ourworld.unu.edu/en/what-does-cancun-offer-for-the-climate-generation/
URLs in this post:
- stop talking and take action: http://www.youtube.com/watch?v=b8APNC8R57w
- Climate Generation Project: http://climatecoolnetwork.ning.com/
- energy literacy project: http://e-idea2010.climate-change.jp/en/ishibashi.php
- factory: http://www.kokoboard.com
- community based conservation project: http://www.pplhmangkubumi.or.id/
- Huw Oliphant: mailto:email@example.com
At the heart of the celebration of Thanksgiving is the idea of giving thanks, but the main attraction of this festival is the tradition of families gathering to share the Thanksgiving dinner and eat turkey. The turkey has a delicious history and is considered the favorite bird of Americans. But there are even more interesting and fun facts about this bird that may surprise you. Here is our collection of turkey trivia.
- Turkeys originated in North and Central America. They are usually found in hardwood forests with grassy areas but are capable of adapting to different habitats.
- Turkeys spend the night in trees. You can easily see a turkey on a warm clear day or during light rain.
- Turkeys fly to the ground at first light and feed until mid-morning; feeding resumes in mid-afternoon.
- Turkeys start gobbling before sunrise and generally continue through most of the morning.
- The wild turkey's field of vision is so good that it covers about 270 degrees.
- The wild turkey has excellent hearing.
- A spooked turkey can run at speeds of up to 20 miles per hour, and a wild turkey at up to 25 miles per hour.
- A wild turkey can fly for short distances at up to 55 miles per hour. Domesticated (farm-raised) turkeys cannot fly.
- Turkeys were among the first birds to be domesticated in the Americas.
- Male turkeys are called "toms," female turkeys are called "hens," and baby turkeys are called "poults."
- Male turkeys gobble, whereas female turkeys make a clicking noise. Males gobble to attract females for mating; the gobble is a seasonal call made during the spring and fall.
- A mature turkey generally has around 3,500 feathers.
- The Apache Indians considered the turkey timid and wouldn't eat it or use its feathers on their arrows.
- According to an estimate, during the Thanksgiving holiday more than 45 million turkeys are cooked and around 525 million pounds of turkey are eaten.
- About ninety-five percent of American families eat turkey on Thanksgiving Day, whereas fifty percent eat turkey on the Christmas holiday.
- Almost fifty percent of Americans eat turkey at least once every two weeks.
- According to the National Turkey Federation, about twenty-four percent of Americans purchase fresh turkeys for Thanksgiving and seventy percent purchase frozen turkeys.
- North Carolina is the number one producer of turkeys, raising around 61 million per year. Minnesota and Arkansas are the second- and third-largest producers.
- The part of the turkey that is used in a good-luck ritual is known as the "wishbone."
- The red fleshy growth from the base of the beak that hangs down over the beak is called the "snood." It is very long on male turkeys.
The Bayley Scales of Infant Development (BSID-III is the current version) are a standard series of measurements used primarily to assess the motor (fine and gross), language (receptive and expressive), and cognitive development of infants and toddlers, ages 0-3. The measure consists of a series of developmental play tasks and takes between 45 and 60 minutes to administer. Raw scores of successfully completed items are converted to scale scores and to composite scores. These scores are used to determine the child's performance compared with norms taken from typically developing children of their age (in months). The assessment is often used in conjunction with the Social-Emotional Adaptive Behavior Questionnaire. Completed by the parent or caregiver, this questionnaire establishes the range of adaptive behaviors that the child can currently achieve and enables comparison with age norms.
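The raw-to-scaled-score step described above is essentially an age-banded table lookup. The sketch below is purely illustrative: the norm table, its raw-score ranges, and the scaled-score values are invented for the example, since real BSID-III norms come from the published manual and differ by age band.

```python
# Illustrative raw-to-scaled-score conversion for one hypothetical subtest.
# The norm table is invented; real BSID-III norms are age-banded tables
# published in the test manual.

NORM_TABLE_24_MONTHS = {
    # hypothetical raw-score ranges -> scaled score (scale mean is 10)
    range(0, 40): 4,
    range(40, 55): 7,
    range(55, 70): 10,
    range(70, 100): 13,
}

def scaled_score(raw, norm_table):
    """Look up the scaled score for a raw score in an age-band table."""
    for raw_range, scaled in norm_table.items():
        if raw in raw_range:
            return scaled
    raise ValueError(f"raw score {raw} outside the table's range")

print(scaled_score(60, NORM_TABLE_24_MONTHS))  # prints 10, the scale mean
```

In practice the scaled scores from several subtests are then combined into composite scores and compared against the age norms, but that step is just further table lookups of the same kind.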
Pricing in the sky

Many policy problems (from road congestion to water and electricity shortages) can be greatly improved through the proper application of a simple concept: pricing. In this USA Today article, Reason's Bob Poole explains how to apply pricing to air traffic control:

By charging planes to use the most congested airports and airways, the system would give its customers economic incentives to reschedule flight times or choose less-congested airports without such charges. That easing of demand would provide breathing room to address the looming shortfalls in air-traffic-control capacity. All sorts of new technology can increase this capacity, both cross-country and on the approaches to airport runways. "Synthetic vision" systems can permit pilots to land at socked-in airports at nearly the same rate as in clear weather. So the air-traffic system need not break down just because Chicago has a bad-weather day. Other advanced technologies can reduce the size of the protective bubble needed around planes en route to keep them safely separated.
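The incentive Poole describes can be made concrete with a toy fee schedule. Everything here is assumed for illustration (the peak hours, base fee, and surcharge are not from the article): a congested-hour surcharge raises the cost of a peak landing slot, nudging airlines toward off-peak times or less-congested airports.

```python
# Toy congestion-pricing schedule for airport landing slots (illustrative only).

PEAK_HOURS = set(range(7, 10)) | set(range(16, 19))  # assumed morning/evening peaks

def landing_fee(base_fee, hour, congestion_surcharge):
    """Fee for landing at a given hour: peak hours carry a surcharge."""
    if hour in PEAK_HOURS:
        return base_fee * (1.0 + congestion_surcharge)
    return base_fee

# An airline weighing an 08:00 arrival against a 13:00 arrival:
print(landing_fee(1000.0, 8, 1.5))   # peak slot: 2500.0
print(landing_fee(1000.0, 13, 1.5))  # off-peak slot: 1000.0
```

The point is not the particular numbers but the mechanism: when the price of a congested slot reflects the scarcity of capacity, rescheduling becomes the cheaper option and demand spreads itself out.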
[Photo: A tamarin rock star (photographed by Ltshears at Wikimedia)]

Our moods change when we hear music, but not all music affects us the same way. Slow, soft, higher-pitched, melodic songs soothe us; upbeat classical music makes us more alert and active; and fast, harsh, lower-pitched, dissonant music can rev us up and stress us out. Why would certain sounds affect us in specific emotional ways? One possibility is an overlap between how we perceive music and how we perceive the human voice. Across human languages, people talk to their babies in slower, softer, higher-pitched voices than they use with adults. And when we're angry, we belt out low-pitched, growly tones. The specific vocal attributes that we use in different emotional contexts are specific to our species... So what makes us so egocentric as to think that other species might respond to our music in the same ways that we do?

[Photo: A serene tamarin ponders where he placed his smoking jacket (photographed by Michael Gäbler at Wikimedia)]

Cotton-top tamarins are squirrel-sized monkeys from northern Colombia that are highly social and vocal. As in humans (and pretty much every other vocalizing species studied), they tend to make higher-pitched, tonal sounds when in friendly states and lower-pitched, growly sounds when in aggressive states. But tamarin vocalizations have different tempos and pitch ranges than ours. Chuck and David musically analyzed recorded tamarin calls to determine the common attributes of the sounds they make when they are feeling friendly and when they are aggressive or fearful. Then they composed music based on these attributes, essentially creating tamarin happy-music and tamarin death metal. They also composed original music based on human vocal attributes. They played 30-second clips of these different music types to pairs of tamarins and measured their behavior while each song was being played and for the first 5 minutes after it had finished.
They compared these behavioral measures to the tamarins' behavior during baseline periods (time periods not associated with the music sessions). An example of happy tamarin music (copyright by David Teie and available through Biology Letters) can be found here. An example of aggressive tamarin music (copyright by David Teie and available through Biology Letters) can be found here. As the researchers had predicted, tamarins were much more affected by tamarin music than by human music. Happy tamarin music seemed to calm them, causing the tamarins to move less and eat and drink more in the 5 minutes after the music stopped. Compared to the happy tamarin music, the aggressive tamarin music seemed to stress them out, causing the tamarins to move more and show more anxious behaviors (like bristling their fur and peeing) after the music stopped. The tamarins showed weaker reactions to the human music: they showed less anxious behavior after the happy human music played and moved less after the aggressive human music played. So human voice-based music also affected the tamarins to some degree, but not as strongly. This may be because some aspects of how we communicate emotions with our voices are the same in tamarins. (How did the tamarin music make you feel?) Can you imagine what we could do with this idea of species-specific music? Well, David and Chuck did! They have since developed music for cats using similar techniques. Although they're still working on the paper, they have said that the cats preferred cat music and were more calmed by it than by human music. You can find samples and get your own copies here. We often think of vocal signals conveying messages in particular sounds, like words and sentences. But calls seem to do much more than that, making the emotions and behaviors of the listeners resemble the emotions of the callers. Want to know more? Check this out: Snowdon, C., & Teie, D. (2009).
Affective responses in tamarins elicited by species-specific music. Biology Letters, 6 (1), 30-32. DOI: 10.1098/rsbl.2009.0593
What Is Air Pollution?

Air pollution in its great magnitude has existed since the start of the 20th century, from the coal-burning industries of the early century to the fossil-fuel-burning technology of the new century. Air pollution is a major problem for highly developed nations, whose large industrial bases and highly developed infrastructures generate much of it. Every year, billions of tonnes of pollutants are released into the atmosphere; the sources range from power plants burning fossil fuels to the effects of sunlight on certain natural materials. But the air pollutants released from natural materials pose very little health threat; only the natural radioactive gas radon poses any danger. Most of the air pollutants released into the atmosphere are the result of man's activities.

In the United Kingdom, traffic is the major cause of air pollution in British cities; eighty-six percent of families own either one or two vehicles. Because of the high-density population of cities and towns, the number of people exposed to air pollutants is great. This has led to an increased number of people developing chronic diseases in recent years, as car ownership in the UK has nearly trebled. These diseases include asthma and respiratory complaints, which range across the population demographic from children to elderly people, who are most at risk. Certainly those who suffer from asthma will notice the effects more if they live in inner-city or industrial areas, or even near major roads. Asthma is already the fourth biggest killer in the UK, after heart diseases and cancers, and currently it affects more than three point four million people. In the past, severe pollution in London during 1952, combined with low winds and high-pressure air, took more than four thousand lives, and another seven hundred died in 1962, in what were called the 'Dark Years' because of the dense, dark polluted air.
Air pollution is also causing devastation for the environment; much of it is caused by man-made gases like sulphur dioxide, which results from electric plants burning fossil fuels. In the UK, industries and utilities that use tall smokestacks as a means of removing air pollutants only boost them higher into the atmosphere, reducing the concentration only at their own site. These pollutants are often transported over the North Sea and produce adverse effects in western Scandinavia, where sulphur dioxide and nitrogen oxide from the UK and central Europe generate acid rain, especially in Norway and Sweden. The pH level, or relative acidity, of many Scandinavian freshwater lakes has been altered so dramatically by acid rain that entire fish populations have been destroyed. In the UK, acid rain formed by sulphur dioxide emissions has led to acidic erosion of limestone in north-western Scotland and marble in northern England.

In 1998, the London Metropolitan Police launched the 'Emissions Controlled Reduction' scheme, whereby traffic police would monitor the amount of pollutants being released into the air by vehicle exhausts. The plan was for traffic police to stop vehicles at random on roads leading into the City of London; the officer would then measure the amount of air pollutants being released, using a CO2 reader fitted to the vehicle's exhaust. If the exhaust exceeded the legal amount (based on micrograms of pollutants), the driver would be fined around twenty-five pounds. The scheme proved unpopular with drivers, especially those driving to work, and did little to help improve the city's air quality. In Edinburgh, the main cause of bad air quality was the vast number of vehicles passing through the city centre from west to east. In 1990, the Edinburgh council developed the city by-pass at a cost of nearly seventy-five million pounds.
The by-pass was ringed around the outskirts of the city; its main aim was to limit the number of vehicles going through the city centre by diverting them onto the by-pass so that they could reach their destinations without crossing the centre. This relieved much of the congestion within the city but did very little to improve the city's overall air quality. To further decrease the number of vehicles on the roads, the government promoted public transport. Over two hundred million pounds was devoted to developing the country's public transport network, much of which went to new bus lanes in the city of London, which increased the pace of bus services. Gas- and electric-powered buses were introduced in Birmingham in order to decrease emissions of air pollutants around the city centre. Because children and the elderly are most at risk of chronic diseases such as asthma, major diversion roads were built to route vehicles away from residential areas, schools and elderly institutions. In some councils, trees were planted along the sides of the road to decrease carbon monoxide levels. Other ways of improving air quality included restrictions on the amounts of air pollutants released into the atmosphere by industries; tough regulations were put in place whereby, if the air quality dropped below a certain level around an industrial area, a heavy penalty would be levied against the operator.

© Copyright 2000, Andrew Wan.
Variation is a term used in genetic science, and concerns the emergence of different varieties. This genetic phenomenon causes individuals or groups within a given species to possess features different from one another. For example, all human beings on Earth possess essentially the same genetic information. But thanks to the variation potential permitted by that genetic information, some people have round eyes, or red hair, or a long nose, or are short and stocky in stature. Darwinists, however, seek to portray variation within a species as evidence for evolution. The fact is that variations constitute no such thing, because variation consists of the emergence of different combinations of genetic information that already exists, and cannot endow individuals with any new genetic information or characteristics. Variation is always restricted by existing genetic information. These boundaries are known in genetic science as the gene pool. (See The Gene Pool.) Darwin, however, thought that variation had no limits when he proposed his theory,267 and he depicted various examples of variation as the most important evidence for evolution in his book The Origin of Species. According to Darwin, for example, farmers mating different variations of cow in order to obtain breeds with better yields of milk would eventually turn cows into another species altogether. Darwin's idea of limitless change stemmed from the primitive level of science in his day. As a result of similar experiments on living things in the 20th century, however, science revealed a principle known as genetic homeostasis.
This principle revealed that all attempts to change a living species by means of interbreeding (forming different variations) were in vain, and that between species, there were unbreachable walls. In other words, it was absolutely impossible for cattle to evolve into another species as the result of farmers mating different breeds to produce different variations, as Darwin had claimed would happen. Luther Burbank, one of the world’s foremost authorities on the subject of genetic hybrids, expresses a similar truth: “there are limits to the development possible, and these limits follow a law.” 268 Thousands of years of collective experience have shown that the amount of biological change obtained using cross-breeding is always limited, and that there is a limit to the variations that any one species can undergo. Indeed, in the introduction to their book Natural Limits to Biological Change Professor of Biology Lane P. Lester and the molecular biologist Raymond G. Bohlin wrote: That populations of living organisms may change in their anatomy, physiology, genetic structure, etc., over a period of time is beyond question. What remains elusive is the answer to the question, How much change is possible, and by what genetic mechanism will these changes take place? Plant and animal breeders can marshal an impressive array of examples to demonstrate the extent to which living systems can be altered. But when a breeder begins with a dog, he ends up with a dog—a rather strange looking one, perhaps, but a dog nonetheless. A fruit fly remains a fruit fly; a rose, a rose, and so on.269 Variations and their various changes are restricted inside the bounds of a species’ genetic information, and they can never add new genetic information to species. For that reason, no variation can be regarded as an example of evolution. The Danish scientist W. L. 
Johannsen summarizes the situation: The variations upon which Darwin and Wallace placed their emphasis cannot be selectively pushed beyond a certain point; such variability does not contain the secret of "indefinite departure." 270 The fact that there are different human races in the world, and the differences between parents and children, can be explained in terms of variation. Yet there is no question of any new component being added to the gene pool. For example, no matter how much breeders seek to diversify the species, cats will always remain cats and will never evolve into any other mammal. It is impossible for the sophisticated sonar system of a marine mammal to emerge through recombination. (See Recombination.) Variation may account for the differences between human races, but it can never provide any basis for the claim that apes developed into human beings.

Vestigial Organs Thesis

One claim that long occupied a place in the literature of evolution, but was quietly abandoned once it was realized to be false, is the concept of vestigial organs. Some evolutionists, however, still imagine that such organs represent major evidence for evolution and seek to portray them as such. A century or so ago, the claim was put forward that some living things had organs inherited from their ancestors which had gradually become smaller, and even functionless, from lack of use. Those organs were in fact ones whose functions had not yet been identified, and so the long list of organs believed by evolutionists to be vestigial grew ever shorter. The list originally proposed by the German anatomist R. Wiedersheim in 1895 contained approximately 100 organs, including the human appendix and the coccyx.
But the appendix was eventually realized to be a part of the lymph system that combats microbes entering the body, as was stated in one medical reference source in 1997: Other bodily organs and tissues (the thymus, liver, spleen, appendix, bone marrow, and small collections of lymphatic tissue such as the tonsils in the throat and Peyer's patch in the small intestine) are also part of the lymphatic system. They too help the body fight infection. 271 The tonsils, which also appeared on that same list of vestigial organs, were likewise discovered to play an important role against infections, especially up until adulthood. (Like the appendix, tonsils sometimes become infected by the very bacteria they seek to combat, and so must be surgically removed.) The coccyx, the end of the backbone, was seen to provide support for the bones around the pelvis and to be a point of fixation for certain small muscles. In the years that followed, other organs regarded as vestigial were shown to serve specific purposes: the thymus gland activates the body's defense system by setting the T cells into action; the pineal gland is responsible for the production of important hormones; the thyroid establishes balanced growth in babies and children; and the pituitary ensures that the various hormone glands function correctly. Today, many evolutionists accept that the myth of vestigial organs stemmed from sheer ignorance. The evolutionist biologist S.R.
Scadding expresses this in an article published in the journal Evolutionary Theory: Since it is not possible to unambiguously identify useless structures, and since the structure of the argument used is not scientifically valid, I conclude that 'vestigial organs' provide no special evidence for the theory of evolution.272 Evolutionists also make a significant logical error in their claim that vestigial organs in living things are a legacy from their ancestors: some organs referred to as "vestigial" are not present in the species claimed to be the forerunners of man. For example, some apes have no appendix. The zoologist Professor Hannington Enoch, an opponent of the vestigial organ thesis, sets out this error of logic: Apes possess an appendix, whereas their less immediate relatives, the lower apes, do not; but it appears again among the still lower mammals such as the opossum. How can the evolutionists account for this? 273 The scenario of vestigial organs put forward by evolutionists contains its own internal inconsistencies, besides being scientifically erroneous. We humans have no vestigial organs inherited from our supposed ancestors, because humans did not evolve randomly from other living things, but were fully and perfectly created in the form we have today. It has now been realized that the appendix, which evolutionist biologists imagined to be vestigial, plays an important role in the body's immune system. The lowest bone in the spinal column, known as the coccyx, is also not vestigial, but a point for muscles to attach to.
Patient data is more accessible than ever thanks to patient and provider mobile connectivity. Data protection and patient ID verification have become critical parts of health care infrastructure. Providers want to identify their patients as quickly as possible, especially when those patients need emergency treatment, but they also need to ensure the data they're reading is accurate and secure. Read this guide to understand how different health care patient ID verification technologies and trends, including single sign-on, data breach protection and response, and more, have affected and will continue to shape the health IT industry.

Table of contents:

Identification numbers or codes can be used to authenticate doctors and verify the accuracy of patient data. Hospitals also use IDs to control what information doctors can access through their mobile devices. Read more to see other ways in which IDs are currently used in health care facilities.

Single sign-on in health care
Single sign-on technology gives doctors and patients more convenient access to their information through one login name and password that grants access to various systems. Read how else single sign-on is used to protect and grant access to critical data.

Patient info data breaches
Instances of compromised or stolen data are preventable through patient and physician education. However, there have been data breaches in which patient information was exposed, including a notable breach at Beth Israel Deaconess Medical Center in Boston.

Future patient ID options and regulations
A national system consisting of a unique patient ID for every American has been a stated goal of some health care personnel. This is one possible option for secure patient identification in the future. There are other areas, like cloud and mobile devices, that are changing the way health care IDs will be used.
Types of Builds

Millard Fuller used to say that a home is the foundation on which human development occurs. It is also an important, positive step toward a safer, healthier and more responsible future. Many people struggling to put food on the table, pay bills, purchase school supplies and clothing, and maintain transportation to work are not thinking about repairing their homes, even though those homes might be dangerous, literally crumbling around them and their children. The Fuller Center is an organization devoted to partnership, renewed opportunity and providing a hand up instead of a hand out. The construction and rehabilitation of simple, decent houses are the two basic ways we do this. The work of The Fuller Center allows the elderly to live out the rest of their days comfortably in their own homes, gives families a fresh start, enables the handicapped to maintain a level of independence in accessible homes and, in some cases, transforms entire neighborhoods.
Battle of Caporetto

The Battle of Caporetto (also known as the Twelfth Battle of the Isonzo, or as the Battle of Karfreit to the Central Powers) took place from 24 October to 19 November 1917, near the town of Kobarid (now in Slovenia), on the Austro-Italian front of World War I. The battle was named after the Italian name of the town (known as Karfreit in German). Austro-Hungarian forces, reinforced by German units, were able to break into the Italian front line and rout the Italian army, which had practically no mobile reserves. The battle was a demonstration of the effectiveness of stormtroopers and the infiltration tactics developed in part by Oskar von Hutier. The use of poison gas by the Germans played a key role in the collapse of the Italian Second Army.

The Austrian offensive began at approximately 02:00 on 24 October 1917. Due to the inclement weather that morning, particularly the mist, the Italians were caught completely by surprise. The battle opened with a German artillery barrage, poison gas and smoke, followed by an all-out assault against the Italian lines. The Italians had outdated gas masks, offered no counter-fire, and had given the Germans all the weather information they needed over their radio. The defensive line of the Italian Second Army was breached between the IV and XXVII Corps almost immediately. The German forces made extensive use of flamethrowers and hand grenades as part of their infiltration tactics, and were able to tear gaping holes in the Italian line, especially in the Italian strongholds on Mount Matajur and the Kolovrat Range. By the end of the first night, von Below's men had advanced a remarkable 25 km (16 mi). German and Austro-Hungarian attacks from either side of von Below's central column were less effective, however.
The Italian Army had been able to repel the majority of these attacks, but the success of von Below's central thrust threw the entire Italian Army into disarray. Forces had to be moved along the Italian front in an attempt to stem von Below's breakout, but this only weakened other points along the line and invited further attacks. At this point, the entire Italian position on the Tagliamento River was under threat. The Second Army's commander, Luigi Capello, regarded as Italy's best general, was bedridden with fever but still retained command. Realizing his forces were ill-prepared for this attack and were being routed, Capello requested permission to withdraw to the Tagliamento. He was overruled by Cadorna, however, who believed that the Italian force could regroup and hold out against the attackers. Finally, on 30 October, Cadorna ordered the majority of the Italian force to retreat to the other side of the river. It took the Italians four full days to cross the river, and by this time the German and Austro-Hungarian armies were at their heels. By 2 November, a German division had established a bridgehead on the Tagliamento. About this time, however, the rapid success of the attack caught up with the attackers: the German and Austro-Hungarian supply lines were stretched to breaking point, and as a result they were not able to launch another concerted attack. Cadorna took advantage of this to retreat further, and by 10 November had established a position on the Piave River.

Failures of German Logistics

Even before the battle, Germany was struggling to feed and supply its armies in the field. Erwin Rommel, who, as a junior officer, won the Pour le Mérite for his exploits in the battle, often bemoaned the demands placed upon his "poorly fed troops". The Allied blockade of the German Empire, which the Kaiserliche Marine had been unable to break, was responsible for food shortages and widespread malnutrition in Germany and the Central Powers in general.
When inadequate provisioning was combined with the gruelling night marches preceding the battle of Caporetto (Kobarid), a heavy toll was exacted on the German and Austro-Hungarian forces. Despite these logistical problems, the initial assault was extremely successful. However, as the area controlled by the combined Central Powers forces expanded, an already limited logistical capacity was overstrained. By the time the attack reached the Piave, the soldiers of the Central Powers were running low on supplies and were feeling the physical effects of exhaustion. As the Italians began to counter the pressure put on them by the Central Powers, the German forces lost all momentum and were once again caught up in another round of attrition warfare.

Italian losses were enormous: 10,000 were killed, 30,000 wounded and 265,000 taken prisoner; morale was so low among the Italian troops, mainly due to Cadorna's harsh disciplinary regime, that most of these surrendered willingly. Furthermore, roughly 3,000 guns, 3,000 machine guns and 2,000 mortars were captured, along with an untold amount of stores and equipment. Rommel, then an Oberleutnant, captured 1,500 men and 43 officers with just 3 riflemen and 2 officers to help. Austro-Hungarian and German forces advanced more than 100 km (62 mi) in the direction of Venice, but they were not able to cross the Piave River. Although to this point the Italians had been left to fight on their own, after Caporetto they were reinforced by six French infantry divisions and five British infantry divisions, as well as sizeable air contingents. However, these troops played no role in stemming the Austro-German advance, because they were deployed on the Mincio River, some 60 miles behind the Piave, as the British and French strategists did not believe the Piave line could be held.
The Piave served as a natural barrier where the Italians could establish a new defensive line, which was held during the subsequent Battle of the Piave River and later served as the springboard for the Battle of Vittorio Veneto, where the Austro-Hungarian army was finally defeated after four days of stiff resistance.

Luigi Cadorna was forced to resign after the defeat. The defeat alone was not the sole cause, but rather the breaking point for an accumulation of failures, as perceived by the Italian Prime Minister, Vittorio Emanuele Orlando. Throughout much of his command, including at Caporetto, Cadorna was known to have maintained poor relations with the other generals on his staff. By the start of the battle he had sacked 217 generals, 255 colonels and 355 battalion commanders. In addition, he was detested by his troops as being too harsh, and he had directed the battle from 20 miles behind the front before fleeing another 100 miles to Padua. He was replaced by Armando Diaz and Pietro Badoglio.

The defeat led governments to the realization that fear alone could not adequately motivate a modern army. After the defeat at Caporetto, Italian propaganda offices were established, promising land and social justice to soldiers. Italy also accepted a more cautious military strategy from this point on. General Diaz concentrated his efforts on rebuilding his shattered forces while taking advantage of the national rejuvenation that had been spurred by invasion and defeat.

After this battle, the term "Caporetto" gained a particular resonance in Italy. It is used to denote a terrible defeat; the failed general strike of 1922 by the socialists was referred to by Mussolini as the "Caporetto of Italian Socialism". Many years after the war, Caporetto was still being used to destroy the credibility of the liberal state.

Popular culture

The Battle of Caporetto has been the subject of a number of books. The Swedish author F.J. Nordstedt (i.e.
Christian Braw) wrote about the battle in his novel Caporetto. The bloody aftermath of Caporetto was vividly described by Ernest Hemingway in his novel A Farewell to Arms. Curzio Malaparte wrote an excoriation of the battle in his first book, Viva Caporetto, published in 1921. It was censored by the state and suppressed; it was finally published in 1980.
General Information / Education / Medical / Cultural / Entertainment The History ... Cumberland Gap has long been used as a crossing point in the Appalachian Mountains. Animals used it as a path to the green pastures of Kentucky. Native Americans used the Gap as the Warrior's Path, which led from the Potomac River down the south side of the Appalachians, through the Gap, and north to "The Dark and Bloody Ground" known as Kentucky and on to Ohio. In 1750 Dr. Thomas Walker found the Gap and mapped its location, but the French and Indian Wars closed the new frontiers. Daniel Boone and many other long-hunters used the Gap to reach the Kentucky hunting grounds. In 1775, after the Treaty of Sycamore Shoals ended most Indian troubles, Boone and thirty men marked out the Wilderness Trail from what is now Kingsport, Tennessee, through the Cumberland Gap to Kentucky. Part of the Wilderness Road can still be walked in Cumberland Gap, Tennessee, by the Iron Furnace. Before the Revolutionary War over 12,000 people crossed into the new frontier territory. By the time of Kentucky's admission to the Union, over 100,000 people had passed through the Gap. By 1800 the Gap was being used for transportation and commerce, both east and west. In the 1830s, other routes west caused the Gap to decline in importance. During the Civil War the Gap was called the Keystone of the Confederacy and the Gibraltar of America. Both armies felt the invasion of the North or South would come through the Gap. Both armies held and fortified the Gap against an invasion that never came. The Gap changed hands four times before being finally abandoned in 1866 by the Federal Army. Today the Cumberland Gap is the main local route north and south, via the Cumberland Gap Parkway (Hwy. 25E). By the mid-1990s a four-lane tunnel under the Gap will open a new north-south, east-west route, and the Cumberland Gap will be restored to the way the first pioneers saw it. 
Claiborne County is located on the Tennessee-Kentucky-Virginia borders in East Tennessee, one of the state's three "Grand Divisions." It was formed in 1801 from parts of Hawkins and Grainger Counties. The county seat is Tazewell. The communities of Tazewell and New Tazewell are in Claiborne County, Tennessee. We are located in the beautiful mountains of the Cumberland Gap area. Cumberland Gap is located where Tennessee, Kentucky, and Virginia meet. Claiborne County is a rural county with a population of 28,828. The county covers 2400 square miles. Tazewell, the county seat, is located about 40 miles north of Knoxville, Tennessee. Along with our beautiful mountains we have beautiful Norris Lake, with 850 miles of shoreline. Norris Lake was the first T.V.A. lake, built in the late 1930s. The lake is fed by two large rivers, the Clinch and the Powell, and is enjoyed by fishermen and water lovers of all ages. Some of the larger communities in the county are Tazewell, New Tazewell, Harrogate, Speedwell, Forge Ridge, Midway, Springdale, Cumberland Gap, Cedar Grove, Dogwood Heights, and Lone Mountain. Population in Claiborne County: 28,828. Community Services: Claiborne County Utility District, United Cities Propane Gas. The Claiborne County area is home to 11 schools. The Claiborne County Board of Education consists of 7 members. For additional information, contact our superintendent of schools, Dr. Roy K. Norris, at the central office at Box 179, Tazewell, Tennessee 37879. The phone number is (423) 626-5225. Welcome to Lincoln Memorial University (LMU). For more than 100 years, LMU has helped serve the higher education needs of our tri-state area and beyond. We are excited by that heritage, and we invite you to share it! The University offers a talented, dedicated faculty and staff, a strong and varied curriculum, a well-rounded student life, a beautiful campus, and excellent facilities. 
In keeping with its Lincoln legacy, LMU prides itself on providing well-developed and relevant academic programs for today's students destined to compete in tomorrow's competitive workplace. Some of our nation's most competent lawyers, doctors, nurses, artists, veterinarians, business persons, and writers have their academic roots at Lincoln Memorial University. Claiborne County Hospital and Nursing Home 1850 Old Knoxville Road P.O. Box 219 Tazewell, TN 37879 (865) 626-4211 The Abraham Lincoln Library and Museum houses one of the most diverse Lincoln and Civil War collections in the country, located on the beautiful campus of Lincoln Memorial University in Harrogate, Tennessee. Exhibited are many rare items: the silver-topped cane Lincoln carried the night of his assassination, a lock of his hair clipped as he lay on his deathbed, two life masks made of Lincoln, the tea set he and Mary Todd owned in their home in Springfield, and numerous other belongings. Over 20,000 books, manuscripts, pamphlets, photographs, paintings, and sculptures tell the story of President Lincoln and the Civil War period in America. The Cumberland Gap National Historical Park in Cumberland Gap preserves a natural opening in the mountains made famous by Daniel Boone. The Indians used this path long before Boone arrived. Today, you can visit the Cumberland Gap National Historical Park and enjoy the history and beauty of our area. If you're interested in fishing, boating, or any water activity, Norris Lake offers all that and more. There are several marinas and boat docks throughout the county. Toll Free 800-747-0713
Nausea and vomiting - adults Nausea is the feeling of having an urge to vomit. It is often called being sick to your stomach. Vomiting or throwing up is forcing the contents of the stomach up through the esophagus and out of the mouth. Emesis; Vomiting; Stomach upset; Upset stomach Many common problems may cause nausea and vomiting: Nausea and vomiting may also be early warning signs of more serious medical problems, such as: Once you and your doctor find the cause, you will want to know how to treat your nausea or vomiting. You may be asked to take medicine, change your diet, or try other things to make you feel better. It is very important to keep enough fluids in your body. Try drinking frequent, small amounts of clear liquids. If you have morning sickness during pregnancy, ask your doctor about the many possible treatments. The following may help treat motion sickness: - Lying down - Over-the-counter antihistamines (such as Dramamine) - Scopolamine prescription skin patches (such as Transderm Scop), which are useful for extended trips, such as an ocean voyage. Place the patch 4 - 12 hours before setting sail. Scopolamine is effective but may produce dry mouth, blurred vision, and some drowsiness. Scopolamine is for adults only. It should NOT be given to children. 
When to call your health care provider: Call 911 or go to an emergency room if: - You think vomiting is from poisoning - You notice blood or dark, coffee-colored material in the vomit Call a health care provider right away or seek medical care if you or another person has: - Been vomiting for longer than 24 hours - Been unable to keep any fluids down for 12 hours or more - Headache or stiff neck - Not urinated for 8 or more hours - Severe stomach or belly pain - Vomited three or more times in 1 day Signs of dehydration include: - Crying without tears - Dry mouth - Increased thirst - Eyes that appear sunken - Skin changes -- for example, if you touch or squeeze the skin, it doesn't bounce back the way it usually does - Urinating less often or having dark yellow urine What to expect at your health care provider's office Your health care provider will perform a physical examination and will look for signs of dehydration. Your health care provider will ask questions about your symptoms, such as: - When did the vomiting begin? How long has it lasted? How often does it occur? - Does it occur after you eat, or on an empty stomach? - What other symptoms are present -- abdominal pain, fever, diarrhea, or headaches? - Are you vomiting blood? - Are you vomiting anything that looks like coffee grounds? - Are you vomiting undigested food? - When was the last time you urinated? Other questions you may be asked include: - Have you been losing weight? - Have you been traveling? Where? - What medications do you take? - Did other people who ate at the same place as you have the same symptoms? - Are you pregnant or could you be pregnant? The following diagnostic tests may be performed: Depending on the cause and how much extra fluids you need, you may have to stay in the hospital or clinic for a period of time. You may need fluids given through your veins (intravenous or IV). Malagelada J-R, Malagelada C. Nausea and vomiting. In: Feldman M, Friedman LS, Brandt LJ, eds. 
Sleisenger & Fordtran's Gastrointestinal and Liver Disease. 9th ed. Philadelphia, Pa: Saunders Elsevier; 2010:chap 14. McQuaid K. Approach to the patient with gastrointestinal disease. In: Goldman L, Schafer AI, eds. Cecil Medicine. 24th ed. Philadelphia, Pa: Saunders Elsevier; 2011:chap 134. This article uses information by permission from Alan Greene, M.D., © Greene Ink, Inc. George F. Longstreth, MD, Department of Gastroenterology, Kaiser Permanente Medical Care Program, San Diego, California. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
The “presidi” translates as “garrisons” (from the French word meaning “to equip”): protectors of traditional food production practices Monday, March 23, 2009 This past year, I have had rewarding opportunities to observe traditional food cultures in varied regions of the world. These are: Athabascan Indian in the interior of Alaska (the traditional Tanana Chiefs Conference tribal lands) in July, 2008 (for more, read below); Swahili coastal tribes in the area of Munje village (population about 300), near Msambweni, close to the Tanzania border, in December, 2008-January, 2009 (for more, read below); and the Laikipia region of Kenya (January, 2009), a German-speaking canton of Switzerland (March, 2009), and the Piemonte-Toscana region of northern/central Italy (images only, February-March, 2009). In Fort Yukon, Alaska, salmon is a mainstay of the diet. Yet, among the Athabascan Indians, threats to subsistence foods and stresses on household economics abound. In particular, there are high prices for external energy sources (as of July, 2008, almost $8 for a gallon of gasoline and $6.50 for a gallon of diesel, which is essential for home heating), as well as low Chinook salmon runs and low moose numbers. Additional resource management issues pose threats to sustaining village life – for example, stream bank erosion along the Yukon River, as well as uneven management in the Yukon Flats National Wildlife Refuge. People are worried about ever-rising prices for fuels and store-bought staples, and fewer and fewer sources of wage income. The result? Villagers are moving out from outlying areas into “hub” communities like Fort Yukon -- or, another example, Bethel in Southwest Alaska – even when offered additional subsidies, such as for home heating. But, in reality, “hubs” often offer neither much employment nor relief from high prices. 
In Munje village in Kenya, the Digo, a Bantu-speaking, mostly Islamic tribe in the southern coastal area of Kenya, enjoy the possibilities of a wide variety of fruits, vegetables, and fish/oils. Breakfast in the village typically consists of mandazi (a fried bread similar to a doughnut) and tea with sugar. Lunch and dinner are typically ugali and samaki (fish), maybe with some dried cassava or chickpeas. On individual shambas (small farms), tomatoes, cassava, maize, cowpeas, bananas, mangos, and coconut are typically grown. Ugali is consumed every day, as are cassava, beans, oil, fish -- and rice, coconut, and chicken, depending on availability. Even with their own crops, villagers today want very much to enter the market economy and will sell products from their shambas to buy staples and the flour needed to make mandazis, which they in turn sell. Sales of mandazis (and mango and coconut, to a lesser extent) bring in some cash for villagers. A treasured food is, in fact, the coconut. This set of pictures shows how coconut is used in the village. True, coconut oil now is reserved only for frying mandazi. But it also is used as a hair conditioner, and the coconut meat is eaten between meals. I noted also that dental hygiene and health were good in the village. Perhaps the coconut and fish oils influence this (as per the work of Dr. Weston A. Price). Photos L-R: Using a traditional conical basket (kikatu), coconut milk is pressed from the grated meat; Straining coconut milk from the grated meat, which is then heated to make oil; Common breakfast food (and the main source of cash income), the mandazi, is still cooked in coconut oil Note: All photos were taken by G. Berardi Thursday, February 19, 2009 Despite maize in the fields, it is widely known that farmers are hoarding stocks in many districts. Farmers are refusing the NCPB/government price of Sh1,950 per 90-kg bag. 
They are waiting to be offered at least the same price as was being paid for imports (Bii, 2009b). “The country will continue to experience food shortages unless the Government addresses the high cost of farm inputs to motivate farmers to increase production,” said Mr. Jonathan Bii of Uasin Gishu (Bartoo & Lucheli, 2009; Bii, 2009a, 2009b; Bungee, 2009). Pride and politics, racism and corruption are to blame for food deficits (Kihara & Marete, 2009; KNA, 2009; Muluka, 2009; Siele, 2009). Clearly, what are needed in Kenya are food system planning, disaster management planning, and protection and development of agricultural and rural economies. Photos taken by G. Berardi Cabbage, an imported food (originally), and susceptible to much pest damage. Camps still remain for Kenya’s Internally Displaced Persons, forced to migrate by post-election violence. Food security is poor. The lack of sustained recent short rains has resulted in failed maize harvests. Friday, January 16, 2009 Today I went to a lunchtime discussion of sustainability. This concept promotes development with an equitable eye to the triple bottom line - financial, social, and ecological costs. We discussed how it seemed relatively easier to discuss the connections between financial and ecological costs than between social costs and other costs. Sustainable development often comes down to "green" designs that consider environmental impacts or critiques of the capitalist model of financing. As I thought about sustainable development, or sustainable community management if you are a bit queasy with the feasibility of continuous expansion, I considered its corollaries in the field of disaster risk reduction. It struck me again that it is somewhat easier to focus on some components of the triple bottom line in relation to disasters. 
The vulnerability approach to disasters has rightly brought into focus the fact that not all people are equally exposed to or impacted by disasters. Rather, it is often the poor or socially marginalized who are most at risk and least able to recover. This approach certainly brings into focus the social aspects of disasters. The disaster trap theory, likewise, brings into focus the financial bottom line. This perspective is most often discussed in international development and disaster reduction circles. It argues that disasters destroy development gains and cause communities to de-develop unless both disaster reduction and development occur in tandem. Building a cheaper, non-earthquake-resistant school in an earthquake zone may make short-term financial sense. However, over the long term, this approach is likely to result in loss of physical infrastructure, human life, and learning opportunities when an earthquake does occur. What seems least developed to me, though I would enjoy being rebutted, is the ecological bottom line of disasters. Perhaps it is an oxymoron to discuss the ecological costs of disasters, given that many disasters are triggered by natural ecological processes like cyclones, forest fires, and floods. It might also be an oxymoron simply because a natural hazard disaster is really looking at an ecological event from an almost exclusively human perspective. It's not a disaster if it doesn't destroy human lives and human infrastructure. But the lunch-time discussion made me wonder if there wasn't something of an ecological bottom line to disasters in there somewhere. Perhaps it is in the difference between an ecological process heavily or lightly impacted by human ecological modification. Is a forest fire in a heavily managed forest different from one in an unmanaged forest? Certainly logging can heighten the impacts of heavy rains by inducing landslides, resulting in a landscape heavily rather than lightly impacted by the rains. 
Similar processes might also be true in the case of heavily managed floodplains. Flooding is concentrated and increased in areas outside of levee systems. What does that mean for the ecology of these locations? Does a marsh manage just as well in low as in high flooding? My guess would be no. And of course, there is the big, looming disaster of climate change. This is a human-induced change that may prove quite disastrous to many an ecological system, everything from our pine forests here to arctic wildlife and tropical coral reefs. Perhaps we disaster researchers need to also consider a triple bottom line when making arguments for the benefits of disaster risk reduction. Tuesday, January 13, 2009 This past week the Northwest experienced a severe barrage of weather systems back to back. Everyone seemed to be affected. Folks were re-routed on detours, got soaked, slipped on ice, or had to spend money to stay a little warmer. In Whatcom and Skagit Counties, there are hundreds to thousands of people currently in the process of recovering and cleaning up after the floods. These people live in the rural areas throughout the county; fewer people know about their devastation, and they have greater vulnerability to flood hazards. Luckily, there are local agencies and non-profits who are ready at a moment's call to help anyone in need. The primary organization that came to the aid of the flood victims was the American Red Cross. Last week I began interning and volunteering with one of these non-profits, the Mt. Baker American Red Cross (ARC) Chapter. While I am still in the process of getting screened and officially trained, I received first-hand experience and saw how important this organization is to the community. With the flood waters rising throughout the week, people were flooded out of their homes and rescued from the overflowing rivers and creeks. As the needs for help increased, hundreds of ARC volunteers were called to service. 
Throughout the floods there have been several shelters opened to accommodate the needs of these flood victims. On Saturday I was asked to help staff one of these shelters overnight in Ferndale. While I talked with parents and children, I became more aware of the stark reality of how these people have to recover from having all their possessions covered in sewage and mud and damaged by flood waters. In the meantime, these flood victims have all their privacy exposed to others in a public shelter, while they work to find stability in the middle of all the traumas of the events. As I sat talking and playing with the children, another thought struck me. Children are young and resilient, but it must be very difficult when they connect with a volunteer and then lose that connection soon after. Sharing a shelter with the folks over the weekend showed a higher degree of reality and humanity to the situation than the news coverage ever could. I posted this bit about my volunteer experience because it made me realize something about my education and degree track in disaster reduction and emergency planning. We look at ways to create a more sustainable community, and we need to remember that community service is an important part of creating this ideal. Underlying sustainable development is the triple bottom line (social, economic, and environmental). Volunteers and non-profits are a major part of this social line of sustainability. Organizations like the American Red Cross only exist because of volunteers. So embrace President-elect Obama's call for a culture of civil service this coming week and make a commitment to the organization of your choice with your actions or even your pocketbook. Know that sustainable development cannot exist without social responsibility. Thursday, January 8, 2009 It's been two days now that schools have been closed in Whatcom County, not for snow, but for rain and flooding. 
This unusual event coincides with record flooding throughout Western Washington, just a year after record flooding closed I-5 for three days and Lewis County businesses experienced what they then called an unprecedented 500-year flood. I guess not. There are many strange things about flood risk notation, and this idea of a 500-year flood often trips people up. People often believe a flood of that size will happen only once in 500 years. On a probabilistic level, this is inaccurate. A 500-year flood simply has a 0.2% probability of happening each year. A more useful analogy might be to tell people they are rolling a 500-sided die every year and hoping that it doesn't come up with a 1. Next year they'll be forced to roll again. But this focus on misunderstandings of probability often hides an even larger societal misunderstanding. Flood risk changes when we change the environment in which it occurs. If a flood map tells you that you are not in the flood plain, better check the date of the map. Most maps are utterly out of date, and many vastly underestimate present flood risk. There are several reasons this happens. Urban development, especially development with a lot of parking lots and buildings that don't let water seep into the ground, will cause rainwater to move quickly into rivers rather than seep into the ground and slowly release. Developers might complain that they are required to create runoff catchment wetlands when they do build. They do, but these requirements may very well be based upon outdated data on flood risk. Thus, each new development never fully compensates for its runoff, a small problem for each site but a mammoth problem when compounded downstream. Deforesting can have the same effect, with the added potential for house-crushing and river-clogging mudslides. Timber harvesting is certainly an important industry in our neck of the woods. 
Not only is commercial logging an important source of jobs for many rural and small towns, logging on state Department of Natural Resources land is the major source of funding for K-12 education. Yet commercial logging, like other industries, suffers from a problem of cost externalization. When massive mudslides occurred during last year's storm, Weyerhaeuser complained that it wasn't its logging practices, but the fact that it was an unprecedented, out-of-the-blue, 500-year storm that caused them. While it is doubtful the slides would have occurred on uncut land, that isn't the only fallacy. When the slides did occur, the costs of repairing roads, treatment plants, and bridges went to the county and often were passed on to the nation's taxpayers through state and federal recovery grants. Thus, what should have been paid by Weyerhaeuser, 500-year probability or not, was paid by someone else. Finally, there is local government. Various folks within local governments set regulations for zoning, deciding what will be built and where. Here is the real crux of the problem. Local government also gets an increase in revenue in the form of property, sales, and business income taxes. Suppress the updating of flood plain maps, and you get a short-term profit and, often, a steady supply of happy voters. You might think these local governments will have to pay when the next big flood comes, but often that can be avoided. Certainly, they must comply with federal regulations on flood plain management to be part of the National Flood Insurance Program, but that plan has significant leeway and little monitoring. Like the commercial loggers, disaster-stricken local governments can often push the recovery costs off to individual homeowners through the FEMA homeowner's assistance program, and off to state and federal agencies by receiving disaster recovery and community development grants and loans. 
Certainly, some communities are so regularly devastated, and have so few resources, that disasters simply knock them down before they can stand up again. But others have found loopholes and can profit by continuing to use old flood maps and failing to aggressively control flood plain development. What is it going to take to really change this system and make it unprofitable to profit from bad land use management? Here's a good in-depth article on last year's landslides in Lewis County. http://seattletimes.nwsource.com/html/localnews/2008048848_logging13m.html An interesting article on the failure of best management practices in development catchment basins can be found here: Hur, J. et al (2008) Does current management of storm water runoff adequately protect water resources in developing catchments? Journal of Soil and Water Conservation, 63 (2) pp. 77-90. Monday, December 29, 2008 It's difficult to imagine a more colorful book, celebrating locally-grown and -marketed foods, than David Westerlund's Simone Goes to the Market: A Children's Book of Colors Connecting Face and Food. This book is aimed at families and the foods they eat. Who doesn't want to know where their food is coming from – the terroir, the kind of microclimate it's produced in, as well as who's selling it? Gretchen sells her pole beans (purple), Maria her Serrano peppers (green), Dana and Matt sell their freshly-roasted coffee (black), Katie her carrots (orange), a blue poem from Matthew, brown potatoes from Roslyn, yellow patty pan squash from Jed, red tomatoes (soft and ripe) from Diana, and golden honey from Bill (and his bees). This is a book perfect for children of any age who want to connect to and with the food systems that sustain community. Order from firstname.lastname@example.org.
Celebrate Princeton Invention: Craig Arnold Posted December 21, 2009; 01:08 p.m. Able to adjust its focus more than 100,000 times faster than the human eye, the TAG Lens invented by mechanical and aerospace engineering professor Craig Arnold and his colleagues has applications in materials processing and imaging. (Photo: Brian Wilson) Name: Craig Arnold, associate professor of mechanical and aerospace engineering Invention: Tunable Acoustic Gradient Index of Refraction Lens (TAG Lens) What it does: The TAG lens features a cylinder made of a special material that vibrates when electricity is passed through it, enclosed inside a fluid-filled chamber. Controlling the flow of electricity changes the vibrations that propagate through the fluid, changing the lens' focus more than 100,000 times faster than the human eye can refocus. Inspiration: After developing a low-cost lens to shape laser beam output into different patterns, Arnold and his colleagues focused their attention on understanding how the device worked and its potential applications. Finding that the lens had the unique ability to focus rapidly at a wide range of focal lengths, they realized its potential went far beyond the original intended purpose, with numerous applications in materials processing and imaging. Collaborators: Euan McLeod, a 2009 Ph.D. recipient, and Alexander Mermillod-Blondin, a former postdoctoral researcher in the Arnold lab
Project management typically has five stages. Some project managers make mistakes during one or more of those stages, which can sabotage the chance of a successful outcome and have disastrous results for everyone involved. Here are two of the most common mistakes — and ways to avoid them — during each of the five stages. Which ones have you made? Stage #1: Initiating: 1. Mistake: The project manager “assumes” he knows what the project sponsor (the high-level person who wants the project done and will support the project) considers to be an acceptable project outcome. How to Avoid the Mistake: You must have a detailed conversation with the project sponsor that establishes specific, realistic, measurable objectives for the project. This includes the outcome (expected objectives), a defined timeline (when does it start and when is it supposed to end?) and the cost of the project (how much money, labor, and material are required, and where will the resources come from?). 2. Mistake: The project manager fails to identify all the stakeholders (those people who will benefit from or be affected by the project and those people who need to be involved in the project at some time during the project). How to Avoid the Mistake: Ask the project sponsor to identify the primary and secondary stakeholders. Who will be directly and indirectly affected by the project’s outcome? Which stakeholders’ support will be needed to provide the required resources for the project? Include additional stakeholders as the progression of the project reveals them. Stage #2: Planning: 1. Mistake: The project manager fails to include the team when planning how the outcome is going to be achieved. How to Avoid the Mistake: It is essential to get the team’s ownership of the project and their commitment to produce agreed-upon results on time and within budget. 
Let the team help create a written project action plan by involving them in: - Deciding roles and responsibilities of team members - Creating a project schedule - Determining the resources needed (personnel, time, material, other resources, etc.) - Creating a communication plan for the project (see Planning - Mistake #2) - Identifying the risks to the completion of the project, and creating a risk response plan if necessary 2. Mistake: The project manager fails to keep everyone informed. How to Avoid the Mistake: You and the team should create a communication plan that addresses how and when the team will communicate with each other, with the sponsor, with each stakeholder, and with each team member who is not actively involved in the project. This is often done with status reports (where the project is now), progress reports (how it got there) and change reports (major changes and how they affect the project). Stage #3: Executing: 1. Mistake: The project manager does not hold enough team meetings. How to Avoid the Mistake: Start every large project with a kickoff meeting to review the project action plan with the team and generate excitement, and then hold weekly team meetings to discuss progress. Were all activities completed on time and within budget? Did the team exceed, miss or meet targets for the week? Use this time to discuss and reconcile problems, conflicts, disputes and potential issues. 2. Mistake: The project manager does not celebrate success with the team. How to Avoid the Mistake: Celebration provides momentum for the team, so don’t wait until you achieve the “overall” success identified in the project action plan. Celebrate small milestones, perhaps on a weekly basis, by recognizing team and individual achievements. Stage #4: Monitoring: 1. Mistake: The project manager is not constantly looking for trouble. How to Avoid the Mistake: This fix is easy: constantly look for trouble! 
This means being aware of and managing any issue that might affect the outcome, time frame, or cost of the project. Examples include time slippage (keeping the project on schedule by tracking project activities), scope creep (preventing others from enlarging the project by saying “no”) and project changes (knowing how every change may impact — or jeopardize — the project’s success). 2. Mistake: The project manager loses track of the project’s outcome, time frame and cost by not using an appropriate tracking system. How to Avoid the Mistake: Use tracking systems (e.g., Microsoft Project, QuickBooks, Excel) that will reveal any issues or problems so timely corrective action can be taken. Stage #5: Completing: 1. Mistake: The project manager attempts to complete or “hand off” the project without tying up all the loose ends. How to Avoid the Mistake: When 90 percent of the project has been completed, meet with the sponsor and primary stakeholders and develop a closing checklist and punch list to identify any remaining uncompleted tasks. Those items must be done before handing the project off or declaring the project completed. 2. Mistake: The project manager fails to debrief and, if warranted, celebrate with the team. How to Avoid the Mistake: Upon completion, a critical step is to hold a debriefing session about the project with the team, using an after-action review to share lessons learned, mistakes made and overcome, and successes achieved. You and the team can discuss what went right and what went wrong, and evaluate individual and team performance. If the project’s objectives were met, celebrate and recognize the individual and team efforts and accomplishments. While project management is replete with opportunities for mistakes, avoid the common ones during each stage and virtually all of your projects will end in success.
- 10 to 15 minutes
- Take turns introducing each other while pretending you are on a stage.
- Stand up, bow, and tell all the good things you can about the other person, his hobbies, good qualities, etc. Be sure to over-dramatize.
- Take turns introducing other family members, present or not.
- Challenge your child to introduce a best friend or a favorite teacher.
- Think of famous people and take turns introducing them.

Copyright © 2004 by Susan Kettmann. Excerpted from The 2,000 Best Games & Activities with permission of its publisher, Sourcebooks, Inc.
by Piter Kehoma Boll Let’s expand the universe of Friday Fellow by presenting a plant for the first time! And what could be a better choice to start than the famous Grandidier’s Baobab? Belonging to the species Adansonia grandidieri, this tree is one of the trademarks of Madagascar, being the biggest species of this genus found on the island. Reaching up to 30 m in height and having a massive trunk branched only at the very top, it has a unique look and is found only in southwestern Madagascar. However, despite being so attractive and famous, it is classified as an endangered species by the IUCN Red List, with a declining population threatened by agricultural expansion. This tree is also heavily exploited, having vitamin C-rich fruits which can be consumed fresh and seeds used to extract oil. Its bark can also be used to make ropes, and many trees are found with scars due to the extraction of part of the bark. Having a fibrous trunk, baobabs are able to deal with drought, apparently by storing water inside their trunks. The tree has no known seed dispersers, which may be due to the extinction of the original disperser by human activities. Originally occurring close to temporary water bodies in the dry deciduous forest, today many large trees are found in permanently dry terrain. This is probably due to human impact that changed the local ecosystem, allowing it to become drier than it was. Those areas have little or no ability to regenerate and will probably never return to what they were; once the old trees die, there will be no more baobabs there. - – - Baum, D. A. (1995). A Systematic Revision of Adansonia (Bombacaceae) Annals of the Missouri Botanical Garden, 82, 440-470 DOI: 10.2307/2399893 Wikipedia. Adansonia grandidieri. Available online at <http://en.wikipedia.org/wiki/Adansonia_grandidieri>. Accessed on October 02, 2012. World Conservation Monitoring Centre 1998. Adansonia grandidieri. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.1. <www.iucnredlist.org>. 
Accessed on October 02, 2012.
DENVER – Put on your poodle skirts and tune in Elvis on the transistor radio, because it’s starting to look a lot like the 1950s. Unfortunately, this won’t be the nostalgic ’50s of big cars and pop music. The 1950s that could be on the way to Colorado is the decade of drought. So says Brian Bledsoe, a Colorado Springs meteorologist who studies the history of ocean currents and uses what he learns to make long-term weather forecasts. “I think we’re reliving the ’50s, bottom line,” Bledsoe said Friday morning at the annual meeting of the Colorado Water Congress. Bledsoe studies the famous El Niño and La Niña ocean currents. But he also looks at other, less well-known cycles, including long-term temperature cycles in the oceans. In the 1950s, water in the Pacific Ocean was colder than normal, but it was warmer than usual in the Atlantic. That combination caused a drought in Colorado that was just as bad as the Dust Bowl of the 1930s. The ocean currents slipped back into their 1950s pattern in the last five years, Bledsoe said. The cycles can last a decade or more, meaning bad news for farmers, ranchers, skiers and forest residents. “Drought feeds on drought. The longer it goes, the harder it is to break,” Bledsoe said. The outlook is worst for Eastern Colorado, where Bledsoe grew up and his parents still own a ranch. They recently had to sell half their herd when their pasture couldn’t provide enough feed. “They’ve spent the last 15 years grooming that herd for organic beef stock,” he said. Bledsoe looks for monsoon rains to return to the Four Corners and Western Slope in July. But there’s still a danger in the mountains in the summer. “Initially, dry lightning could be a concern, so obviously, the fire season is looking not so great right now,” he said. Weather data showed the last year’s conditions were extreme. Nolan Doesken, Colorado’s state climatologist, said the summer of 2012 was the hottest on record in Colorado. 
And it was the fifth-driest winter since record-keeping began more than 100 years ago. Despite recent storms in the San Juan Mountains, this winter hasn’t been much better. “We’ve had a wimpy winter so far,” Doesken said. “The past week has been a good week for Colorado precipitation.” However, the next week’s forecast shows dryness returning to much of the state. Reservoir levels are higher than they were in 2002 – the driest year since Coloradans started keeping track of moisture – but the state is entering 2013 with reservoirs that were depleted last year. “You don’t want to start a year at this level if you’re about to head into another drought,” Doesken said. It was hard to find good news in Friday morning’s presentations, but Bledsoe is happy that technology helps forecasters understand the weather better than they did during past droughts. That allows people to plan for what’s on the way. “I’m a glass-half-full kind of guy,” he said.
A new smartphone app aims to provide a cheaper alternative to ultrasound in Africa by bringing an old technique into the 21st century. "We couldn't hear anything,” says Aaron Tushabe, recounting a trip with two friends to the maternity ward of the main hospital in the Ugandan capital, Kampala. The student had been handed an ear-trumpet-like device called a Pinard horn, used to listen for the vital signs of a baby in a mother’s abdomen. Despite straining to hear against the murmur of the ward, Tushabe couldn’t hear any signs. Luckily, the problem was not with the baby, but the combination of what he calls a “rather primitive device”, and his lack of training. In fact, the Pinard horn, named after the French doctor who invented it back in the 19th Century, can be very effective in the right hands. It can determine the age, position and heart rate of the foetus, along with an indication of its overall health. But to do this consistently can take many years of practice. Meanwhile, in developing countries, “a woman dies from complications in childbirth every minute”, according to the UN, while every year “eight million babies die before or during delivery or in the first week of life”. The key to saving those lives, the UN says, is “access to skilled care during pregnancy, childbirth and the first month after delivery”. These kinds of statistics, along with their experience of using the Pinard horn, got the three computer science students thinking about whether they could improve the design. “We saw that technology gap and started thinking about how we might bridge it.” In developed countries, ultrasound is the answer. But these machines – responsible for those fuzzy black and white pictures that are liberally posted on Facebook, brought out at parties, and waved at co-workers when someone becomes pregnant – are expensive. Even if a hospital could afford one, few expectant mothers can afford the $10 scan in countries where many live below the poverty line. 
And so, a new project called WinSenga was born to build what Joshua Okello, one of the other students who visited the hospital, calls "an enhancement" to the Pinard horn. The new device still consists of a plastic trumpet, but with a highly sensitive microphone inside. The souped-up device, which is placed on a woman's abdomen just like a regular horn, connects to a Windows-based phone running an app that, as Okello says, "plays the part of the midwife's ear." The system picks up the foetal heart rate, transmits it to the phone, and then the phone runs an analysis. The app, developed in conjunction with medics for the UN agency Unicef, then recommends a course of action, if any, for the mother and her unborn child. "When I first heard the idea, I thought it was brilliant," says Davis Musinguzi, a medic and Unicef advisor. "But being software developers, they needed guidance on the medical component of the application." The doctor says he advised on the medical parameters, procedures and standards that needed to be part of the software. He also says he tried to ensure that the new device wouldn't disrupt the normal workflow of an antenatal visit, but rather help eliminate the bottlenecks. The value of going mobile is pretty clear, allowing carers to visit mothers wherever they are. "We envision a midwife being able to travel to rural areas on specific days, and then mothers could gather in, for example, a local church,” Tushabe says. “Then, the midwife could administer the antenatal diagnosis to all the mothers." Okello, Tushabe and their partner Josiah Kuvuma presented their idea earlier this year at an event sponsored by Microsoft called the Imagine Cup, which aims to solve pressing problems, particularly in the developing world. The event partly inspired the name. 
The “Win” part comes from the software giant’s own products, Okello tells me, while "Senga" refers to the local name for the aunt who used to help village mothers-to-be with their antenatal care and their births. The team went on to win the regional competition before losing out in the finals held in Sydney. However, the loss has not held them back. The team says they have since been approached for potential partnerships and are currently looking for funding to launch a six-month field trial of their system. If that's successful, then WinSenga could launch as a product. The team says it's too early to talk about pricing, but they are heartened by the fact that the cost of smartphone handsets is rapidly dropping across Africa, making their system much more attractive to potential clients. While they wait for funding, the WinSenga team is far from idle. Despite the fact that all three team members still have busy university schedules, they have already launched an expanded version of the software designed to assist healthcare workers and mothers during labour. The group's website also promises a version called "WinSenga Plus", which would assist with postnatal care as well. And as if that isn't enough, WinSenga say they are almost ready to launch an Android version of their application, and will then start work on a version for iOS. The apps are all part of a new movement, says Dr Musinguzi, which is gathering momentum. "The use of mobile technology is a relatively new intervention to improving health services," he says. WinSenga and other devices and apps that are coming on to the market, he says, will have to prove themselves to healthcare professionals by "reducing the burden of doing what they have always done." It will take training and investment, he says, but it "will pay off in the long run”. It is a sentiment that Okello agrees with. "Communities that have healthy mothers are generally much more productive. It's all tied in."
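WinSenga's actual analysis code is not public, but the core idea the article describes — pick out the dominant periodicity in an audio signal and report it as a heart rate — can be sketched with a simple autocorrelation. Everything below (the function name, sample rate, BPM search window, and synthetic test signal) is an illustrative assumption, not the team's implementation:

```python
import numpy as np

def estimate_bpm(signal, sample_rate, lo_bpm=100, hi_bpm=180):
    """Estimate a heart rate from a 1-D audio signal via autocorrelation.

    A healthy foetal heart rate is roughly 110-160 beats per minute, so we
    search for the strongest periodicity in that neighbourhood.
    """
    x = signal - signal.mean()
    # Full autocorrelation; keep non-negative lags only.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Convert the BPM search window into a lag (samples-per-beat) window.
    min_lag = int(sample_rate * 60 / hi_bpm)
    max_lag = int(sample_rate * 60 / lo_bpm)
    lag = min_lag + np.argmax(ac[min_lag:max_lag + 1])
    return 60.0 * sample_rate / lag

# Synthetic stand-in for a recording: short pulses at 140 beats per minute
# buried in a little noise.
rate = 2000
t = np.arange(4 * rate) / rate
beats = (np.sin(2 * np.pi * (140 / 60) * t) > 0.99).astype(float)
noisy = beats + 0.1 * np.random.default_rng(0).normal(size=t.size)
bpm = estimate_bpm(noisy, rate)
print(round(bpm))  # recovers an estimate very close to 140
```

A real system would of course need band-pass filtering and far more robust peak-picking to cope with ward noise, but the lag-to-BPM conversion shown here is the essential step.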
I am trying to understand the difference between a TCP half-open connection and a TCP half-closed connection. Can anyone tell me what exactly they are?

When TCP establishes a connection, it is considered guaranteed because a three-way handshake takes place (SYN, SYN-ACK, ACK). At that point the connection is established, and data begins to flow. In contrast, a UDP packet is not guaranteed, and is just sent in the hope it gets there. Officially, according to the RFCs, a half-open TCP connection is when one side of an established connection has crashed and did not send notification that the connection was ending. This is not the common usage today. Unofficially, it can refer to an embryonic connection, which is a connection in the process of being established. Half-closed is the opposite of that unofficial definition: it is a state in the middle of tearing down an established connection, where one side has finished sending and has sent its FIN, but data can still flow in the other direction until the remaining side closes too.
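The half-closed state is easy to see from code, because the sockets API exposes it directly: `shutdown(SHUT_WR)` sends your FIN while leaving the receive direction open. Here is a minimal self-contained sketch over loopback TCP (the port choice and message contents are just for illustration):

```python
import socket
import threading

def server(listener):
    """Accept one connection, read until the peer's FIN, then reply."""
    conn, _ = listener.accept()
    chunks = []
    while True:
        data = conn.recv(1024)
        if not data:           # b"" means the client's FIN arrived:
            break              # the connection is now half-closed
        chunks.append(data)
    # Our send direction is still open, so we can answer.
    conn.sendall(b"got: " + b"".join(chunks))
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))    # OS picks a free port
listener.listen(1)
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
client.sendall(b"hello")
client.shutdown(socket.SHUT_WR)    # half-close: done sending, still receiving

reply = client.recv(1024)
print(reply)                       # b'got: hello'
client.close()
t.join()
listener.close()
```

Note the asymmetry: after `shutdown(SHUT_WR)` the client can no longer send, yet it still receives the server's reply over the other half of the connection, which is exactly the half-closed state described above.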
Throughout life there are many times when outside influences change or influence decision-making. The young child has inner motivation to learn and explore, but as he matures, finds outside sources to be a motivating force for development, as well. Along with being a beneficial influence, there are moments when peer pressure can overwhelm a child and lead him down a challenging path. And, peer pressure is a real thing – it is not only observable, but changes the way the brain behaves. As a young adult, observational learning plays a part in development through observing and then doing. A child sees another child playing a game in a certain way and having success, so the observing child tries the same behavior. Albert Bandura was a leading researcher in this area. His famous Bobo doll studies found that the young child is greatly influenced by observing others’ actions. When a child sees something that catches his attention, he retains the information, attempts to reproduce it, and then feels motivated to continue the behavior if it is met with success. Observational learning and peer pressure are two different things: the first is observing behaviors and then attempting to reproduce them of the child’s own free will. Peer pressure is the act of one child coercing another to follow suit. Often the behavior being pressured is questionable or taboo, such as smoking cigarettes or drinking alcohol. Peer Pressure and the Brain Recent studies find that peer pressure influences the way our brains behave, which leads to better understanding about the impact of peer pressure and the developing child. According to studies from Temple University, peer pressure has an effect on brain signals involved in risk and reward processing, especially when the teen’s friends are around. Compared to adults in the study, teenagers were much more likely to take risks they would not normally take on their own when with friends. 
Brain signals were more activated in the reward center of the brain, firing most strongly during risky behaviors. Peer pressure can be difficult for young adults to deal with, and learning ways to say “no” or avoid pressure-filled situations can become overwhelming. Resisting peer pressure is not just about saying “no,” but about how the brain functions. Children who have stronger connections among regions in their frontal lobes, along with other areas of the brain, are better equipped to resist peer pressure. During adolescence, the frontal lobes of the brain develop rapidly, causing axons in the region to acquire a coating of fatty myelin, which insulates them and allows the frontal lobes to communicate more effectively with other brain regions. This helps the young adult to develop the judgment and self-control needed to resist peer pressure. Along with the frontal lobes’ contribution, other studies find that the prefrontal cortex plays a role in how teens respond to peer pressure. Just as with the previous study, children who were less susceptible to peer pressure had greater connectivity within the brain, as well as greater ability to resist it. Working through Peer Pressure The teenage years are exciting years. Young adults are often going through physical changes due to puberty, adjusting to new friends and educational environments, and learning how to make decisions for themselves. Adults can offer a helping and supportive hand to young adults when dealing with peer pressure by considering the following: Separation: Understanding that this is a time for the child to separate and learn how to be his own individual is important. It is hard to let go and allow the child to make mistakes for himself, especially when you want to offer input or change plans and actions, but allowing the child to go down his own path is important. As an adult, offering a helping hand if things go awry and being there to offer support is beneficial. 
Talk it Out: As an adult, take a firm stand on rules and regulations with your child. Although you cannot control whom your child selects as friends, you can take a stand on your control of your child. Setting specific goals, rules, and limits encourages respect and trust, which must be earned in response. Do not be afraid to start talking with your child early about ways to resist peer pressure. Focus on how it will build your child’s confidence when he learns to say “no” at the right time, and reassure him that it can be accomplished without feeling guilty or losing self-confidence. Stay Involved: Keep family dinner as a priority, make time each week for a family meeting or game time, and plan family outings and vacations regularly. Spending quality time with kids models positive behavior and offers lots of opportunities for discussions about what is happening at school and with friends. If at any time there are concerns a child is becoming involved in questionable behavior due to peer pressure, ask for help. Involving others in helping a child cope with peer pressure, such as a family doctor, youth advisor, or other trusted friend, does not mean that the adult is not equipped to help the child properly; including others in assisting a child who may be on the brink of heading down the wrong path is beneficial. By Sarah Lipoff. Sarah is an art educator and parent.
There isn't a huge difference in the way residential homes are built in Japan compared to the U.S., although the Japanese are more likely to invest in special earthquake engineering, particularly in commercial and higher-end residential buildings. Home builders in both Japan and the U.S. use a lot of wood-frame construction, which is flexible and tends to ride out a quake fairly well, said Heidi Faison, outreach director at the Pacific Earthquake Engineering Research Center in Berkeley, Calif. But wood frame structures do have potential vulnerabilities in two key areas: the foundation and the wall that supports a crawl space, which is called a cripple wall. She recommends home buyers hire an engineer to make sure the wood frame is bolted to the foundation. If a house has a crawl space underneath it or you need to climb a few steps to get up to the first floor, it likely is supported by a cripple wall, which can buckle in an earthquake. If you're in an earthquake-prone location, that space needs to be filled in with a solid material. Gary Ehrlick, a structural engineer and program manager for Structural Codes & Standards at the National Association of Home Builders, outlines some other house features to consider: • Look at the garage, if the house has one. A large garage door opening or a lot of big windows on the first floor can create a soft story -- an open space without enough support to withstand violent shaking. • Brick veneer can present a major hazard if it's not attached well. Brick was a problem in the 6.3 magnitude temblor that struck New Zealand last month. "It doesn't create as much of a hazard inside, but outside it can injure or kill,'' he said. • Houses built on a slope are often an issue. They need to be tied back well with footings. Also make sure the slope is stable. Liquefaction -- where saturated soil becomes liquid -- can be a problem and can occur when a building is located near a lake or river. 
In an earthquake, liquefaction can cause the ground to behave like quicksand, as seen in New Zealand and in the 1989 Loma Prieta earthquake in the mountains of Santa Cruz, Calif. In Japan, most of the damage was actually inflicted by the subsequent tsunami, just as most of the destruction in the San Francisco quake of 1906 was caused by fires that ripped through the city after gas lines were ruptured. A disaster's chain of events makes the preparation scenario a bit more complicated. There is a growing interest in designing homes better able to survive a tsunami. The basic idea in tsunami design, as in flood-resistant construction, is to get some of your structure up above the expected level of water, said Gary Ehrlick. "In commercial structures they talk about vertical evacuation zones.'' Under this theory, the first floor, built out of concrete or steel, is strong enough to withstand the pressure of the water. The "zone of refuge'' occupies the upper floors. Another concept that came out of the earthquake/tsunami that leveled Banda Aceh, Indonesia, in 2004 is a house where the first floor allows the wave to wash through it, destroying the walls but preserving the foundation. This would be a concrete frame with columns or wall segments in each corner of the house. The walls are panels made out of something light, like bamboo or wood. After a disaster, such panels would be easy to replace.
Definition of Alveoli Alveoli: The plural of alveolus. The alveoli are tiny air sacs within the lungs where the exchange of oxygen and carbon dioxide takes place. Last Editorial Review: 6/14/2012
Oracle Solaris Administration: ZFS File Systems (Oracle Solaris 11 Information Library) This chapter provides step-by-step instructions on setting up a basic Oracle Solaris ZFS configuration. By the end of this chapter, you will have a basic understanding of how the ZFS commands work, and should be able to create a basic pool and file systems. This chapter does not provide a comprehensive overview and refers to later chapters for more detailed information. The following sections are provided in this chapter:
Breathing Apparatus in Tyrannosaurines The subject of various breathing apparatuses in extant animals as applied to dinosaurs is interesting, because there are obvious alternate systems which may be applied, and I think Ruben et al. 1999 did not confront this. Am hoping to get the article today, so bear with my unread comment on that. Otherwise, there are apparently various breathing systems employed by modern animals, including: hepatic: liver is pushed into lungs to force expelling of breath. -- employed by crocs and lizards(?). continuous circulation: lungs are rigid, air is forced in and out by powered airsacs manipulated by movement of muscle compression. -- employed by birds. diaphragmatic: lungs are flexible and are compressed by movement of ribs, and expelled by reactive movement of diaphragm. -- employed by mammals and snakes(?). gastralic: gastralia compress internal organs to force flexible lungs to expel. -- ? I am unsure how turtles do it, but they seem to have had rigid lungs and ribs and would have to rely on other forces to breathe. Ruben et al. suggest that the first (hepatic) was employed by crocs and theropods, but what about those dinosaurs that lacked a mobile pubis or any other way of compressing the liver? Take it this way: tyrannosaurs have rigid backs, the pubis could not induce the position of the liver, and the back could not squeeze down to force the lungs to move; the gastralia or a continuous ventilation are the only other proposed breathing mechanisms through which it can "pump" its lungs (as well as whatever turtles employ). The hepatic system doesn't work for tyrannosaurs, and unless I'm mistaken, it wouldn't for other rigid-backed dinosaurs. - Often, it is the man who is brought down the path to the end who does not see his own steps. - Jaime A. Headden Qilong, the website, at: 
Examples of analysis sections

An example from a law field report about a courtroom observation

The legal processes I observed in the district court hearing reflect to a certain extent Australian social values. Their purpose is to attempt to maintain an efficient court system and create justice before the law. Processes such as rules of evidence, which ensure only the most applicable evidence is heard, have been developed over many years, thus demonstrating an influence of past and future cases. Similarly, the doctrine of precedent ensures that similar cases have similar results. Procedural grounds for objections protect witnesses from harassment and potential confusion. Therefore, all of these legal processes create an environment in which changing social values will bring about complementary changes in court decisions.

Annotation: The meaning and theoretical significance of the observations described are explored.

Footnote 1: adapted from Woodward-Kron, R., Thomson, E. & Meek, J. (2000) A text based guide to academic writing. CD-ROM. Dept. of Modern Languages, University of Wollongong.

An example from an education field report about a classroom practicum experience

As I help the students I am conscious of the scaffolding Vygotsky described taking place. I observed other people such as teachers and parents scaffolding with their children. It was, therefore, interesting to realise that I was doing the same as I walked around the classroom helping the children with their tasks. Scaffolding helps the children to reach their zone of proximal development, which in turn helps them to achieve more complex tasks. I have found my practical experience in the classroom has been full of examples supporting the theories of Piaget and Vygotsky and, to a lesser extent, Erikson. It is also good to see that these theories actually have real-world application to child development.

Annotations: Exploring the significance of practical experience and observations from a theoretical perspective; reflection on what the field experience has meant for theoretical understanding.

This is the analysis section from a history field report about historical monuments.

The Bulli Coal Mining Company had 331 employees: this represented approximately 20% of the population of Bulli. Given the small size of the Bulli community (the population was calculated at 1352 persons in the 1891 census (Mitchell & Sherington, 1984: 42)) and its dependence on the Bulli Coal Mining Company, the impact of a disaster of this magnitude was enormous. Henry Parkes and his government, realising the hardship being experienced by the community, particularly the bereaved families, gave “official support to a public fund and established a board to distribute the money after investigating the needs of those bereaved” (Mitchell & Sherington, 1984: 58). The impact of the disaster is reflected in the structure of the monument. The monument was intended to stand the test of time. Its shape, an obelisk, is unlike anything else in the area. This fact, combined with the historical use of the obelisk, principally in ancient Egypt, suggests the memorial was considered important to the community. No reference is made on the monument, however, to the date of the dedication or to who unveiled it, which is significant given the government support of a disaster relief fund for the Bulli community in the wake of the disaster.

Annotations: Analysis of the event the monument commemorates; conclusion drawn about the structure and shape of the monument.

Footnote 2: adapted from Flello, J. unpublished manuscript.

An example from an education field report about a classroom practicum experience. (This example contains a mixture of description and analysis within a single paragraph. This is an alternative approach to having these two types of writing in separate sections.)

Both childcare centres encouraged the children to think critically and reflectively about themselves and the wider community; for example, I observed an incident at centre 2 (14/5), where the teacher helped the children with an equity-of-access issue in the playground. Several children wanted to play on the slide and gym equipment but one child continually walked up the slide, disrupting the pattern of play. The group of children began to speak loudly and harshly to him and threatened to kick him. The teacher was able to facilitate a resolution to this problem by getting the children to examine what was happening and think of alternative actions. They were all, in effect, being empowered to deal with the injustice they were facing in the playground rather than resorting to physical violence or giving up and playing elsewhere, as some children were about to do.

Annotations: Topic sentence in which a general theoretical conclusion is drawn; conclusion illustrated by an example observed in the field; description of the observed event; theoretical perspective used to analyse the event.

Footnote 3: adapted from McNabb, Learning Skills Centre, University of Melbourne.
Matisse: Radical Invention, 1913-1917 July 18–October 11, 2010 Matisse conceived this "souvenir of Morocco" in 1912, stretched a canvas for it in 1913, and returned to the composition late in 1915, only to start again on a new canvas in early 1916. Black is the principal agent, at once simplifying, dividing, and joining the three zones of the canvas: the still life of melons and leaves on a gridded pavement, bottom left; the architecture with domed marabout, top left; and the figures, at right. Next to a seated Moroccan shown from behind, the large curving ocher shape and circular form derive from a reclining figure in the sketches. Above the shadowed archway, figures in profile may be discerned in the two windows: at right, the lower part of a seated man; at left, the upper part of a man with raised arms. Matisse built up the surface with thin layers of pigment, the color of the underlying layers modifying those on top. Painter Gino Severini reported that "Matisse said . . . that everything that did not contribute to the balance and rhythm of [this] work, had to be eliminated . . . as you would prune a tree." Matisse developed this painting of what he described as “the terrace of the little cafe of the casbah” in the years following two visits to Morocco, in 1912 and 1913. As he worked on various studies he eliminated details he felt were extraneous to the painting’s overall balance. A balcony with a flowerpot and a mosque behind it are at upper left, at lower left is a still life of vegetables, and to the right is a man wearing a round turban, seen from behind. Matisse’s generous application of black paint helps unify the three sections of the painting across its abstract expanse. Matisse: Radical Invention, 1913–1917, July 18–October 11, 2010 Director, Glenn Lowry: Matisse first conceived this painting in 1912, while he was visiting Morocco. But he didn't actually start the canvas until early 1916. 
Once he did, he continued working on it—with great focus and concentration—through the fall. Curator, John Elderfield: The forms are difficult to decipher. I know some people who have thought that what Matisse says are melons and leaves are in fact the Moroccans, but Matisse is insistent that they are not. I think one can clearly see the figure whose back is towards us. And if we look at it carefully we can see that that figure's grown in size. To the right of it, that black area does seem to derive from drawings of an arched doorway with light hitting the bottom, but the top part is in shadow, leading into another architectural space. And the two elements at the top are ones which we can trace back to drawings he made in Morocco—the one at the right, of a sort of seated figure, and the one to the left, more puzzling, but somewhat amusingly, in one of the drawings—and he refers to this in his letter—is of a figure who has got his arms raised to look through binoculars. All that remains is the forearms and part of the body, and Matisse is quite happy to have carried it to that point of almost unintelligibility. But I don't think the painting asks us to be specific about these forms. Curator, Stephanie D’Alessandro: I don't think so either. I think there's a level of memory and recollection, and maybe even nostalgia with this picture. Glenn Lowry: One aspect of Morocco that stayed with Matisse was the harsh contrast between the midday sun and the shade, evoked here in the black background. Conservator, Michael Duffy: You can see how it defines certain shapes. Particularly the shape in the middle, this curved shape, which is made up of ochre and white. The black edge on the left is painted over, so he actually defines the form further by overlaying the black paint. When you look closely at the black paint, you'll see that it covers areas of blue and pink underneath, and that gives the black a very warm color. And it's very typical of the way Matisse painted. 
Rather than blending his colors together, to achieve one color, he would typically layer colors, even black, over several layers, in order to build up a kind of rich, optical surface.

MoMA Audio: Collection, 2008

Curator Emeritus, John Elderfield: This painting was made in 1915–1916 and is a remembrance of visits that Matisse made to Morocco. And while the paintings made in Morocco are beautifully, limpidly colored, obviously the remembrance is rather of the great heat, of contrasts of color in the conditions of very bright light. The Moroccans themselves are on the right on the terrace with their melons and gourds—the green and yellow forms at the left. We can see a figure with his back to us, and then, with more difficulty, figures in windows at the top. In the background is a mosque with a vase of blue and white flowers standing on the parapet. Matisse said that he put black in his pictures to simplify the composition. And indeed, through the teens and into the 1920s, he regularly puts in a little dosing of black to hold everything else in place. I think, unquestionably, he was thinking of shadow, and of the kind of stifling midday sun in North Africa. There is that element of renunciation of color and wanting to put in an element of real gravity in the composition. It's hard to imagine any other color doing it in that same way.

The Museum of Modern Art, MoMA Highlights, New York: The Museum of Modern Art, revised 2004, originally published 1999, p. 79

The Moroccans marvelously evokes tropical sun and heat even while its ground is an enveloping black, what Matisse called "a grand black, . . . as luminous as the other colors in the painting." Utterly dense, this black evokes a space as tangible as any object, and allows a gravity and measured drama without the illusion of depth once necessary to achieve this kind of grandeur. 
The painting, which Matisse described as picturing "the terrace of the little café of the casbah," is divided into three: at the upper left, an architectural section showing a balcony with flowerpot and the dome of a mosque behind; a still life, of four green-leafed yellow melons at the lower left; and a figural scene in which an Arab sits with his back to us. To his right is an arched doorway, and windows above contain vestigial figures. The form to his left is hard to decipher, but has been interpreted as a man's burnoose and circular turban. During his visit to Morocco in 1912-13, Matisse had been inspired by African light and color. At the same time, he faced the challenge of Cubism, the leading avant-garde art movement of the period, and The Moroccans summarizes his memories of Morocco while also combining the intellectual rigor of Cubist syntax with the larger scale and richer palette of his own art. Matisse Picasso, February 13–May 19, 2003 Narrator: In 1911 and again in 1912 Matisse traveled to Morocco. Three years later, he tapped those memories for this big souvenir picture, The Moroccans. By that time he was deeply involved in the Cubist vocabulary of reduced geometric form. Curator, John Elderfield: What it shows is on the upper left a mosque with a vase of flowers on the right hand side. In the bottom left is a pavement with melons with their green leaves. And at the right, more difficult to figure out, various figures who are presumably sitting on some sort of terrace outside a cafe in Tangier. One can I think clearly understand the figure with his back to us, with a white turban and, blue shirt. And to the right, what looks like the top of an archway in shadow. Matisse talked about the black as being a way of representing heat and light. And as one gets further south, one gets these very strong black and white contrasts. It's also trying to convey some of the sense of the intense light, and the almost tangible heat of Tangier. 
Curator, Kirk Varnedoe: Certainly Picasso must have looked intensely at a major picture like this, and learned from it a new vocabulary of Cubism, more highly abstracted, more monumental. When you compare The Moroccans to Picasso's Three Musicians of 1921...what leaps out at you are certain similarities—the use of black for example. But Picasso unlike Matisse is not a traveler. Picasso often said, "If someone didn't come to the studio in the morning, I wouldn't have anything to paint in the afternoon." Narrator: The Three Musicians most likely represent the artist and his friends. Picasso himself is at the center, identified by the harlequin costume and guitar he often used as his symbols. To the right, the man dressed as a monk with a stylized beard is probably Picasso's friend the poet Max Jacob, who had entered a monastery after the First World War. And the large white figure with the clarinet may be another poet friend, Guillaume Apollinaire, who had died from war wounds. Kirk Varnedoe: The picture has a kind of gravity, a kind of sadness or melancholy, which is played off by small and amusing details, like the tiny little zig zags that represent the hand on the notes of music, or the dog that lies under the table to the left. So you imagine the music being played. Is it syncopated like a kind of bright jazz, and on the other hand melancholy like a threnody? And when you compare this in its detail, then you sense how monumental the Matisse is by comparison, and how in a certain sense impersonal it is.
Child and Adolescent Suicide

Left untreated, depression can lead some youth to take their own lives. Suicide is the third leading cause of death for 15- to 24-year-olds and the sixth leading cause of death for 5- to 14-year-olds. Attempted suicides are even more common.

Warning signs of suicide

Four out of five teens who attempt suicide give clear warnings. If you suspect that a child or adolescent is suicidal, look for these warning signs:
- Threats of suicide—either direct or indirect.
- Verbal hints such as “I won’t be around much longer” or “It’s hopeless.”
- Obsession with death.
- Overwhelming sense of guilt, shame or rejection.
- Putting affairs in order (for example, giving or throwing away favorite possessions).
- Sudden cheerfulness after a period of depression.
- Dramatic change in personality or appearance.
- Hallucinations or bizarre thoughts.
- Changes in eating or sleeping patterns.
- Changes in school performance.

What Should Parents and Other Adults Do if They Think a Child Is Suicidal?
- Ask the child or teen if he or she feels depressed or thinks about suicide or death. Speaking openly and honestly allows the child to confide in you and gives you a chance to express your concern. Listen to his or her thoughts and feelings in a caring and respectful manner.
- Let the child or teen know that you care and want to help.
- Supply the child or teen with local resources, such as a crisis hotline or the location of a mental health clinic. If the child or teen is a student, find out if there are any available mental health professionals at the school and let the child know about them.
- Seek professional help. It is essential to seek expert advice from a mental health professional who has experience helping depressed children and teens.
- Alert key adults in the child’s life—family, friends, teachers. Inform the child’s parents or primary caregiver, and recommend that they seek professional assistance for their child or teen.
- Trust your instincts. 
If you think the situation may be serious, seek immediate help. If necessary, break a confidence in order to save a life.

Resources:
- A crisis hotline: calling will connect you with a crisis center in your area.
- Covenant House Nine Line: a 24-hour teen crisis line.
- American Academy of Child and Adolescent Psychiatry
- American Association of Suicidology
NOLAN, MATTHEW (1834–1864). Matthew Nolan, Mexican War veteran, Texas Ranger, Nueces County sheriff, and Confederate cavalry officer, was born in 1834 in Providence, Rhode Island. He was the son of Irish immigrants. Some sources claim he was born in New York. His parents died when he and his older sister Mary and younger brother Tom were children, leaving them on their own. Mary married a soldier and enlisted her brothers as buglers in Zachary Taylor's Second Dragoons. She became a laundress so that she could travel with her husband and brothers to Texas on the eve of the Mexican War. They settled in Corpus Christi until Taylor moved his army to the Rio Grande valley at the beginning of the war. By this time Mary was working as a hospital matron. Matthew and Tom Nolan were at the battles of Palo Alto and Resaca de la Palma and traveled with the army until the end of the war in 1848 and then returned to Corpus Christi. In 1850 Nolan joined John S. "Rip" Ford's Texas Ranger unit as a bugler where he distinguished himself in a May 26, 1850, skirmish with Comanche Indians near Fort Merrill. In Ford's memoirs he wrote that Nolan "rushed barefoot through prickly pear to get a shot at the retreating foe." Nolan stayed with Rip Ford and the Texas Rangers during the 1850s and fought minor territorial battles throughout Texas. In 1858 Nolan was elected sheriff of Nueces County, and he named his brother Tom a deputy sheriff. At the outbreak of the Civil War, Nolan raised a company of volunteers from Corpus Christi and joined the Second Texas Cavalry. He fought along the Mexican border with his former commander Ford. He returned to Corpus Christi to marry Margaret J. McMahon on May 22, 1862. Nolan rejoined his regiment to take part in the January 1, 1863, recapture of Galveston Island. His actions in the battle of Galveston led to his promotion to major. Later in 1863 Nolan was sent back to a volatile Corpus Christi to help keep the peace in South Texas and monitor the coast. 
Corpus Christi was equally divided between Northern and Southern sympathizers. Ford employed Nolan to keep watch on Cecilio Balerio, a Union sympathizer and rancher. With Ford's blessing, Nolan was reelected county sheriff on August 1, 1864. His job was to arrest "perfidious renegades." One of these "renegades," former sheriff H. W. Barry, was a Mexican War veteran who was providing cotton to Union ships in the Gulf of Mexico. Nolan reported to Ford that he had seen Barry in action. By December 1864 Corpus Christi was suffering the effects of war, and tensions ran high. On the night of December 22, 1864, Nolan and horse trader J. C. McDonald met outside of the Nolan home, and while they talked, two of Barry's stepsons, Frank and Charles Gravis, appeared and started an argument with Nolan. In the commotion that ensued, one of the Gravis brothers shot and fatally wounded Nolan. Other sources claim that Nolan was in the process of arresting McDonald, and the two brothers, intending to kill McDonald for seducing their sister, accidentally shot Nolan instead. Matthew Nolan is buried next to his brother Tom in the Old Bayview Cemetery in Corpus Christi.

Murphy Givens, "Corpus Christi History: The Nolans arrive in Corpus Christi," Corpus Christi Caller–Times, August 23, 2000 (http://www.caller2.com/2000/august/23/today/murphy_g/2672.html), accessed March 23, 2011. The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: Stephanie P. Niemeyer, "NOLAN, MATTHEW," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/fno33), accessed May 22, 2013. Published by the Texas State Historical Association.
2. Physical Activity Twenty-four RCT articles were reviewed for the effect of physical activity on weight loss, abdominal fat (measured by waist circumference), and changes in cardiorespiratory fitness (VO2 max). Thirteen articles were deemed acceptable (346, 363, 365, 369, 375, 401, 404, 406, 432, 434, 445-447). Only one of these RCTs compared different intensities and format with a control group, although the goal was to increase physical activity and not specifically to produce weight loss (401). Results from this trial were subsequently reported after 2 years, but these no longer included the control group (447). One additional study did not have a no-treatment control group but compared three active treatment groups with each other: diet only, exercise only, and combination exercise plus diet (448). Most RCTs described the type of physical activity as cardiovascular endurance activities in the form of aerobic exercise such as aerobic dancing, brisk walking, jogging, running, riding a stationary bicycle, swimming, and skiing, preceded and followed by a short session of warmup and cool-down exercises. Some physical activity programs also included unspecified dynamic calisthenic exercises (363, 369, 406, 446). The intensity of the physical activity was adapted to each individual and varied from 60 to 85 percent of the individual's estimated maximum heart rate, or was adjusted to correspond to approximately 70 percent of maximum aerobic capacity (VO2 max). The measure of physical fitness included VO2 max. The frequency of physical activity varied from three to seven sessions a week and the length of the physical activity session ranged from 30 to 60 minutes. Some physical activity programs were supervised, and some were home-based. Adherence to the prescribed physical activity program was recorded and reported in some studies and not mentioned in others. Most studies did not estimate the caloric expenditure from the physical activity or report calorie intake. 
The duration of the intervention varied from 16 weeks to 1 year; six articles reported on trials that lasted at least 1 year (346, 363, 375, 401, 406, 432). Rationale: Twelve RCT articles examined the effects of physical activity, consisting primarily of aerobic exercise, on weight loss compared to controls (346, 363, 365, 369, 375, 401, 404, 406, 432, 434, 445, 446). Ten of the 12 RCT articles reported a mean weight loss of 2.4 kg (5.3 lb) (or 2.4 percent of weight) (363, 369, 375, 406, 419, 432, 434) or a mean reduction in BMI of 0.7 kg/m2 (2.7 percent reduction) (346, 365, 401) in the exercise group compared to the control group. In three of these ten studies, the weight loss was < 2 percent of body weight (< 2 kg) (4.4 lb) (369, 375, 434). In contrast, two RCTs showed no benefit on weight from exercise, reporting weight gain in the exercise group compared to the control group (445, 446). In one of these studies, the control group received only diet advice but nevertheless lost 9 kg (19.8 lb), whereas the exercise group lost only 7 kg (15.4 lb) (445). In the second study, there was a total of only 10 participants, all having noninsulin-dependent diabetes mellitus, and the control group lost 3 kg (6.6 lb) whereas the exercise group lost only 2 kg (4.4 lb) (446). A meta-analysis of 28 publications of the effect on weight loss of exercise compared to diet or control groups showed that aerobic exercise alone produces a modest weight loss of 3 kg (6.6 lb) in men and 1.4 kg (3.1 lb) in women compared to controls (449). Ten articles reported on RCTs that had a diet-only group in addition to an exercise-only group (346, 363, 365, 369, 375, 406, 432, 434, 445, 448). In every case except one (365), the exercise-only group did not experience as much weight loss as the diet-only group. The diet-only group produced approximately 3 percent, or 3 kg (6.6 lb), greater weight loss than the exercise-only group. 
No single study examined the length of the intervention in relation to the weight loss outcome. Only one study compared the effect on maximum oxygen uptake of different intensities and formats of physical activity over a 1-year follow-up (401) and 2-year follow-up period (447). Better adherence over 1 year was found if the exercise was performed at home rather than in a group setting, regardless of the intensity level. Subsequently, the different exercise groups were compared with each other over the longer term (2 years), and better long-term adherence was found in the higher intensity home-based exercise group compared to the lower intensity home-based or higher intensity group-based exercise groups (447). The question of whether physical activity enhances long-term maintenance of weight loss has not been formally examined in RCTs. Examination of long-term weight loss maintenance produced by physical activity interventions compared with diet-only interventions cannot easily be compared between RCTs because of numerous differences between studies with respect to design, sample size, intervention content and delivery, and characteristics of the study population samples. However, a number of analyses of observational and post hoc analyses of intervention studies have examined whether physical activity has a beneficial effect on weight. Cross-sectional studies have generally shown that physical activity is inversely related to body weight (450-454) and rate of weight gain with age (455). Longitudinal studies with 2 to 10 years of follow-up results have observed that physical activity is related to less weight gain over time (456-459), less weight gain after smoking cessation in women (460), and weight loss over 2 years (461). In addition, post hoc analyses of several weight loss intervention studies reported that physical activity was a predictor of successful weight loss (454, 462, 463). 
The results of these RCTs showed that physical activity produces only modest weight loss and observational analyses from other studies suggest that physical activity may play a role in long-term weight control and/or maintenance of weight loss. Rationale: Only three RCTs testing the effect of physical activity on weight loss also had measures of abdominal fat as assessed by waist circumference (365, 369, 375). One study demonstrated that physical activity reduced waist circumference compared with the control group (365), and another study showed a small effect on waist circumference (0.9 cm) in men but not women (375). One study in men showed a small increase in waist circumference (369). Weight loss was modest in all of these studies. These studies were not designed to test the effects of physical activity on abdominal fat independent of weight loss. However, large studies in Europe (464), Canada (453), and the United States (465-468) reported that physical activity has a favorable effect on body fat distribution. These studies showed an inverse association between energy expenditure through physical activity and several indicators of abdominal obesity, such as waist circumference and waist-to-hip and waist-to-thigh circumference ratios. Rationale: Eleven RCT articles testing the effect of physical activity alone on weight loss in men and women also included measures of cardiorespiratory fitness, as measured by maximal oxygen uptake (VO2 max) (346, 363, 369, 375, 401, 404, 406, 432, 434, 445, 446). All 11 showed that physical activity increased maximum oxygen uptake in men and women in the exercise groups by an average of 14 percent (ml/kg body weight) to 18 percent (L/min). Even in studies with modest weight loss (< 2 percent), physical activity increased VO2 max by an average of 12 percent (L/min) to 16 percent (ml/kg) (369, 375, 434). 
One study that compared different formats and intensities of physical activity on VO2 max reported that improvement in VO2 max was related to adherence to the physical activity regime. In that study, the lower intensity program was equally effective on VO2 max as a higher intensity program, largely as a result of different levels of adherence (401). The results of the RCTs strongly demonstrate that physical activity increases cardiorespiratory fitness in overweight and obese individuals.
Chronic diseases such as asthma, arthritis, cancer, heart disease and diabetes are ongoing, generally incurable illnesses. Collectively, chronic diseases are the number one cause of death and disability in the U.S. With a better understanding of the molecular basis of health and disease, biotech scientists are developing improved methods for diagnosing, treating and preventing chronic disease. Diabetes is the seventh leading cause of death in the U.S., and the number of new cases continues to rise by epidemic proportions. Diabetes patients use insulin, a biotech therapy, to control their critically important glucose levels. Insulin use can also reduce the risk of diabetes complications like eye, kidney and nerve disease by 40 percent. Scientists have been studying the cone snail since the 1970s to better understand our nervous system. Through this research, scientists were able to isolate the peptides in the venom of the cone snail. A synthetic version of one of those peptides, manufactured by Elan under the brand name PRIALT®, was approved by the FDA in 2004 as a treatment for adults with severe chronic pain.
Acer griseum (Paperbark Maple) is a species of maple native to central China, in the provinces of Gansu, Henan, Hubei, Hunan, Shaanxi, Shanxi, and Sichuan, at altitudes of 1,500–2,000 m. It is a small to medium-sized deciduous tree, reaching 6–9 m (20–30 ft) tall and 5–6 m (15–25 ft) wide, with a trunk up to 70 cm (2 ft) in diameter. The bark is smooth, shiny orange-red, peeling in thin, papery layers; it may become fissured in old trees. The shoots are densely downy at first, this wearing off by the second or third year and the bark exfoliating by the third or fourth year. The leaves are compound, with a 2–4 cm petiole bearing three leaflets, each 3–10 cm long and 2–6 cm broad, dark green above, bright glaucous blue-green beneath, with several blunt teeth on the margins. Paperbark Maple is widely grown as an ornamental plant in temperate regions. It is admired for its decorative exfoliating bark, translucent pieces of which often stay attached to the branches until worn away. It also has spectacular autumn foliage which can include red, orange and pink tones. In the world of trees, Acer griseum is a small species, generally growing to around 12 m in height. But what it lacks in stature, it more than makes up for in beauty. Its copper-red bark ensures that it is easy to identify, even in winter, as does its technique of bark renewal. As the old sheaves of bark die, they peel themselves off, revealing the young, smooth bark beneath. This self-exfoliation, although unusual, is not unique and also occurs in several species of birch. At the turn of the 20th century, the Royal Botanic Gardens at Kew sent a young botanist by the name of Ernest Henry Wilson to China. As well as looking into the effect the charcoal industry was having on the forests, Wilson was also asked by his financier, the Veitch Nursery, to find interesting plants. The plan was to find the handkerchief tree Davidia involucrata which had been described, but never collected. 
Eventually, Wilson found the location where the tree was last seen only to find a tree stump and a newly-erected hut built from its timber! Over the next decade or so, 'Chinese' Wilson, as he was to become known, brought back more than 1,000 garden plants - more than any other collector. This eventually included the handkerchief tree, the regal lily Lilium regale and also the Paperbark Maple Acer griseum.
The CEI is currently involved in many international space missions and projects.

Gaia: Gaia was adopted within the scientific programme of the European Space Agency (ESA) in October 2000. The mission aims to: measure the positions of ~1 billion stars both in our Galaxy and other members of the Local Group, with an accuracy down to 20 µas; perform spectral and photometric measurements of all objects; derive space velocities of the Galaxy's constituent stars using the stellar distances and motions; and create a three-dimensional structural map of the Galaxy. The large datasets gathered will provide astronomers with a wealth of information covering a wide range of research fields: from solar system studies and galactic astronomy to cosmology and general relativity. The CEI is involved in a number of different projects, including modelling the CCDs designed for the Gaia mission to simulate the charge-trapping effect of radiation damage, analysis of the BP/RP and RVS radiation campaign datasets, and the development of the data processing pipeline.

IXO: The International X-ray Observatory (IXO) is a new X-ray telescope with joint participation from NASA, the European Space Agency (ESA), and Japan's Aerospace Exploration Agency (JAXA). This project supersedes both NASA's Constellation-X and ESA's XEUS mission concepts. IXO is a next-generation facility designed to examine three main areas: black holes and matter under extreme conditions; formation and evolution of galaxies, clusters and large-scale structure; and the life cycles of matter and energy. The IXO optics will have 20 times more collecting area at 1 keV than any previous X-ray telescope. The focal plane instruments will deliver up to a 100-fold increase in effective area for high-resolution spectroscopy from 0.3-10 keV, deep spectral imaging from 0.2-40 keV over a wide field of view, unprecedented polarimetric sensitivity, and microsecond spectroscopic timing with high count rate capability. 
The CEI is currently developing instrumentation for the X-ray Grating Spectrometer (XGS) readout.

Euclid: Euclid is a medium-class mission candidate for launch in 2017 as part of the Cosmic Vision 2015-2025 programme and will spend five years in orbit at L2. The mission is a combination of two missions: the Dark UNiverse Explorer (DUNE) and the SPectroscopic All Sky Cosmic Explorer (SPACE). The primary goal is to study the dark universe by means of two main cosmological probes: the Weak Lensing (WL) technique, which maps the distribution of dark matter and measures the properties of dark energy in the universe, and the Baryonic Acoustic Oscillations (BAO) technique, which uses the scales in the spatial and angular power spectra as "standard rulers" to measure the equation of state and rate of change of dark energy. The CEI is working with a number of institutes, including ESA and Mullard Space Science Laboratory (MSSL), on the characterisation of the radiation effects to the Euclid CCDs. This work involves pre- and post-irradiation characterisation of the e2v CCD204s provided by ESA, based on the same architecture as the CCD203 which is proposed to be used onboard Euclid. A model of the CCD204 pixel structure is being created to explore the electron density for charge storage as a function of signal size and is being used in CTE simulations under a variety of signal conditions to predict CTE effects at the mid and end of mission. The aim is to provide recommendations on CCD design modifications for improved radiation tolerance, device operation and shielding.

Chandrayaan-1 & 2: The Indian Space Research Organisation Chandrayaan-1 spacecraft was launched on the 22nd of October 2008. It spent nine months in a 100 km circular orbit around the Moon before communication was lost. During this time it surveyed around 15 percent of the lunar surface, providing a map of chemical characteristics and 3-dimensional topography. 
The spacecraft carried a number of instruments, including a terrain mapping camera, infrared spectrometers, and the Chandrayaan-1 X-ray Spectrometer (C1XS). The C1XS instrument consisted of 24 e2v technologies CCD54 swept-charge device silicon X-ray detectors arranged in 6 modules that carried out high-quality X-ray spectroscopic mapping of the Moon using the technique of X-ray fluorescence in the energy range 0.5-10 keV. The CEI was involved in performing the proton radiation damage assessment for the CCD54 devices, recommending instrument shielding, operating temperature and operating potentials. The pre-flight characterisation of the 14 modules available for flight selection was also conducted, recommending ten modules suitable for use in the instrument. The ESA space environment information system (SPENVIS) software was utilised to estimate the worst-case end-of-life 10 MeV equivalent proton fluence, which was used to irradiate a number of CCD54 devices to investigate their post-irradiation performance. The CEI continues to be involved in the instrument in an advisory role on the observed radiation effects to the CCD54 devices.

Chandrayaan-2 is the second Indian lunar mission, to be launched in 2014 into a 200 km polar orbit where it will use and test various new technologies. The spacecraft will include a number of instruments, one being the Chandrayaan-2 Large Area Soft X-ray Spectrometer (CLASS) instrument, which is a continuation of the successful C1XS instrument. CLASS will map the abundance of major rock-forming elements on the lunar surface, mapping elemental abundances with a nominal spatial resolution of 25 km. The instrument uses the second-generation swept charge device, CCD236, and has a geometrical area three times that of C1XS, which will allow for data collection at low levels of solar activity. The CEI will provide assistance with the characterisation of the SCDs and an analysis of the impact of radiation damage on their performance. 
Initial studies have demonstrated a factor-of-two improvement in radiation hardness; further optimisation and a more detailed investigation into device performance are currently underway.

UKube-1: The UK Space Agency is planning to launch its first cubesat later this year. The cubesat platform allows fast mission turnaround of small payloads, allowing more groups to be involved with the missions. After launch in late 2011, the satellite will spend 1 year in a low Earth orbit (~400 km), with a view to operating for a further 2 years if successful. The CEI has successfully bid for design and production of a single payload, working in tandem with Clydespace to develop a payload responsible for imaging of the Earth with narrow- and wide-field imagers. In addition, the group plans to include an imager to monitor radiation damage effects on the 0.18 µm CMOS sensors, which have been previously characterised on the ground. This is the first such instrument under total development by the group, and will provide ample training and expansion of knowledge of mission development within the group.
The etymology of the term 'gurdwara' is from the words 'Gur (ਗੁਰ)' (a reference to the Sikh Gurus) and 'Dwara (ਦੁਆਰਾ)' (gateway in Gurmukhi), together meaning 'the gateway through which the Guru could be reached'. Thereafter, all Sikh places of worship came to be known as gurdwaras.

All About Sikhs

AllAboutSikhs.com is a comprehensive web site on Sikhism, Sikh history and philosophy, customs and rituals, the Sikh way of life, social and religious movements, art and architecture, Sikh scriptures, and Sikh gurdwaras. Based on the belief in One God, the Sikh religion recognizes the equality of all human beings, and is marked by rejection of idolatry, ritualism, caste and asceticism. This website serves to heighten awareness of Sikhism and hopefully can be of some use to seekers of knowledge.
January 10th, 2004 01:15 AM

Yeah, arbitrary precision is very confusing. I've tried to make my own like that with huge string allocations (actually char arrays) and compare numbers one by one. I don't know what happened to it, but I probably never completed it anyways. And that recursive function looks really....recursive. I don't see any code to exit out of the loop, i.e. it will continue to call itself over and over until the program finally kills the heap/stack or something. I don't quite know what it is... Also, even if it wasn't a static, it would still always be zero (if you managed to get an answer out of it, since it loops forever). Why? Since there is no check to see if x is less than or equal to 1. Without that it would multiply by 0 when x = 0, and thus the answer would be cleared. Well, the only reason I can actually comment on it is because I have my C book here with an example of a recursive function to calculate factorials... heh. Sadly I don't think I'm *allowed* to post the code up. Sorry. The next best thing - Using Linked Lists of Integers to store the factorial of 1000... http://www.codeguru.com/algorithms/factorial.shtml I don't happen to think that the code there is easily understandable though...

January 10th, 2004 01:32 AM

Tim_axe, sorry to be pedantic, but if you look at his code, x would never be zero; that wasn't the issue. The fact is that it is pointless even having the x if you set total to 0 at the start, because total will never increase, and that's even if you ignore that there is no way to exit the recursion. Also, because total is a local variable and you are not passing it as an argument, total would have died at the beginning of each recursion in any case.
A slightly more functional function, if still not very good, would be the following. Please note that even that code is completely useless :P I just felt like correcting the other code (too tired to check who it was that posted it, and I can't see just now anyway :P)

void genFactorial(int x, int t)
    int total = t; // you probably wouldn't even need total now

void main(void) // yes, I know it isn't totally correct

Oh god that makes me look sad. Tim_axe, I'm sorry. I realise what you are saying now (couldn't really see it properly, my contact lenses are fscked atm). I left what I said in in case someone already had seen it so that you know I was apologising [/edit]

January 10th, 2004 04:07 AM

Don't worry about it gothic_type, I can see where I sort of rambled on in my post and just messed up what I was trying to point out. I'll work off of your code since it has the basic framework and I want to save typing... *Opens up DevC++* *Finds C Programming Book* *Returns to Open Window* Okay, the following code compiles and works fine for me. If not, please debug it yourself. :P Also, don't give it an integer bigger than 14. It will crash or mess up something (integer overflow). Same goes for letters and decimal numbers. Integers only... Enjoy. Also, it won't do your 100 or whatnot. Only up to 14... Check the link in my previous post to find code for bigger numbers. I haven't tried it myself...

using namespace std;

unsigned int genFactorial(unsigned int x);

cout<<"Blah. Give me an Integer or I'll crash: ";
cout<<"\nI think that is: "<<genFactorial(fact)<<endl;
cout<<"If that's wrong, it's 42!"<<endl;

unsigned int genFactorial(unsigned int x)
    x *= genFactorial(x-1);

January 10th, 2004 08:33 AM

Tim_axe, nice prog; appears to work correctly (despite the fact that I attempted to get gcc to compile it to begin with, which made it unhappy) Now all we need to do is edit the program so that it can do 100, but still using integers :P.
BTW -- Tim_axe, I think we must be the only people sad enough to have kept on posting to this topic...I think everyone else bailed :P

January 12th, 2004 07:27 PM

Well, you guys are using recursion, which isn't going to be good when you are computing 100! because you will run out of space in RAM. Support your right to arm bears. ^^This was the first video game which I played on an old win3.1 box

January 12th, 2004 10:26 PM

White_Eskimo, the only reason I (and I think Tim_axe as well) was using recursion was because Striek had suggested it. His code was really messed up, so I was trying to "fix" it while also attempting to point out the multiplying-by-zero error. Anyhow. Since no-one else really suggested/posted another method, at least it's something. :P

January 13th, 2004 12:31 AM

Well, I guess we could continue it... Anyone else want to help out? So we can scrap the recursion idea, because the program would eventually run out of heap/stack or something on really big numbers... But as to how we will use integers to store numbers that are over 32 bits long... We might have to borrow from the code @ http://www.codeguru.com/algorithms/factorial.shtml by somehow adopting its use of linked lists of integers. Although the code there already uses it to compute factorials... So, I guess it would be like this (insert really bad program outline here): Take the typed-in number, ie 60. Allocate 59/60/61 integers (an array), and put numbers 60 to 1 in them. (we will somehow multiply them later on, which is what factorials do) Then we can allocate some more integers, say 1000 to be safe for now, and set each element to 0. This will hold our output, with each integer holding a single digit. 1000 here would mean we can store a 1000-digit answer for a factorial. My guess is that is the factorial of 150 or so? We then multiply the last two integers from the first array, ie with values 60 and 59. Store this result, ie 3540, in a temporary integer.
We then separate it into thousands, hundreds, tens, and ones, etc., and put it into the answer in those respective places in the answer array. This gets messy. We separate the next integer into ones and tens, and then somehow multiply the answer array by this next integer, ie 58, and deal with carrying over numbers to keep a single digit in each element of the answer. Move that into the answer, and repeat the process. The link does that somewhere, but with linked lists instead of arrays of integers. This technique could work well up until getting the factorial of about 65537, since 65537 * 65536 is a number over 32 bits, the largest we could hold in an unsigned integer / long value on a normal 32-bit PC... (first multiplication step) It is sort of reinventing the wheel, I guess, since there is already code to do it. I could probably work on it when I'm not studying for finals this week. Hopefully we can figure this one out, lol.

January 13th, 2004 04:55 PM

Wow. I looked at that "algorithm"...I couldn't even have begun to think about coding that (mainly because I've never learned about linked lists and I don't know as much as I should about c++).

January 14th, 2004 03:38 AM

I tried to come up with my own code and it is useless. I can't get it to carry numbers over right and multiply them together the way I need to. It ended up using only the first unsigned long in the array of about 1000 of them. Anyways, I read through the comments of that tutorial, and came across this one. Man, the person (Krishna Kumar Khatri) is good... Saves me from ripping out any more of my hair trying to get my own version to work.

January 14th, 2004 03:47 PM

lol. That code's still too confusing for me to understand without reading it over a couple of times :P. If you could somehow devise a way to multiply parts 1 to n in an array to output them without having to store them in a variable, then I've got a program that solves this problem. But I guess that was the whole problem in the first place :P Anyhow.
I'm just annoyed that I couldn't think up a solution myself.
After failing to win reelection to the Congress, Morris moved to Philadelphia and resumed his law practice. A series of newspaper articles on finance secured him the post of assistant to Robert Morris (no relative) in handling the finances of the new government (1781-85). In this position he planned the U.S. decimal coinage system. As a member of the U.S. Constitutional Convention of 1787, Morris played an active role, defending a strong centralized government and a powerful executive, opposing concessions on slavery, and putting the Constitution into its final literary form. He remained, however, a champion of aristocracy who distrusted democratic rule. In 1789 Morris went to France as a private business agent, remained in Europe, and was appointed (1792) U.S. minister to France. During the French Revolution his sympathies lay with the royalists; he even helped plan a scheme to rescue Louis XVI. His recall was requested in 1794, but he traveled for several years before returning to America in 1798. From 1800 to 1803, Morris, a Federalist, was a U.S. senator from New York. He then retired to his estate. He condemned the War of 1812, going so far as to recommend the severance of the federal union. Morris was a strong advocate of the Erie Canal and served as chairman (1810-13) of the canal commission. See his Diary of the French Revolution (1939), edited by his great-granddaughter, Beatrix Cary Davenport; biographies by T. Roosevelt (1888, repr. 1972), D. Walther (tr. 1934), and R. Brookhiser (2003); and M. M. Mintz, Gouverneur Morris and the American Revolution (1970). (Born Jan. 31, 1752, Morrisania house, Manhattan—died Nov. 6, 1816, Morrisania house) American statesman and financial expert. He was admitted to the bar (1771) and served in the New York Provincial Congress (1775–77) and the Continental Congress (1778–79). He distrusted the democratic tendencies of colonists who wanted to break with England, but his belief in independence led him to join their ranks.
As assistant superintendent of finance (1781–85), he proposed the decimal coinage system that became the basis for U.S. currency. A delegate to the Constitutional Convention, he helped write the final draft of the Constitution of the United States. He served as minister to France (1792–94) and as a U.S. Senator (1800–03), and he was the first chairman of the Erie Canal commission (1810–16). Born in what is now part of New York City in 1752, Gouverneur Morris was of Welsh and Huguenot background. Morris graduated from King's College, known since the American Revolution as Columbia University, in 1768. He practiced law in the city starting in 1771. Morris had a wooden leg as a result of an accident on May 14, 1780: he was climbing onto a carriage without anyone tending to the horses, which suddenly took off, catching his left leg in one of the carriage wheels. Physicians told Morris that they had no choice but to remove the leg below the knee. On May 8, 1775, Morris was elected to represent his family estate in the New York Provincial Congress, an extralegal assembly dedicated to achieving independence. His advocacy of independence brought him into conflict with his family, as well as his mentor William Smith, who had abandoned the patriot cause when it moved towards independence. Despite an automatic exemption from military duty because of his handicap and his service in the legislature, he joined a special "briefs" club for the protection of New York City, a forerunner of the modern New York Guard. After the Battle of Long Island in August 1776, the British seized New York City and his family's estate. His mother, a Loyalist, gave the estate over to the British for military use.
Because his estate was now in the possession of the enemy, he was no longer eligible for election to the New York state legislature and was instead appointed as a delegate to the Continental Congress. He took his seat in Congress on January 28, 1778 and was immediately selected to a committee in charge of coordinating reforms in the military with General Washington. On a trip to Valley Forge, he was so appalled by the conditions of the troops that he became the spokesman for the Continental Army in Congress and pushed for substantial reforms in the training and methods of the army. He also signed the Articles of Confederation in 1778. In 1779, he was defeated for re-election to Congress, largely because his advocacy of a strong central government was at odds with the decentralist views in New York. Defeated in his home state, he moved to Philadelphia to work as a lawyer and merchant. In Philadelphia, he was appointed assistant superintendent of finance (1781-1785), and was a Pennsylvania delegate to the Constitutional Convention in 1787, before returning to live in New York in 1788. During the convention, he was a friend and ally of George Washington and others who favored a stronger central government. Morris was elected to serve on a committee of five (chaired by William Samuel Johnson) that would draft the final language of the proposed Constitution. Catherine Drinker Bowen, in Miracle at Philadelphia, called Morris the committee's "amanuensis," meaning that it was his pen that was responsible for most of the draft. "An aristocrat to the core," Morris believed that "there never was, nor ever will be a civilized Society without an Aristocracy". He also thought that common people were incapable of self-government and feared that the poor would sell their votes to rich people, and consequently thought that voting should be restricted to property owners. 
Morris also opposed admitting new Western states on an equal basis with the existing Eastern states, fearing that the interior wilderness could not furnish "enlightened" statesmen. At the Convention he gave more speeches than any other delegate, totaling 173. He went to Europe on business in 1789 and served as Minister Plenipotentiary to France from 1792-1794. His diaries written during that time have become an invaluable chronicle of the French Revolution, capturing much of the turbulence and violence of that era. He returned to the United States in 1798 and was elected in 1800 as a Federalist to the United States Senate to fill the vacancy caused by the resignation of James Watson, serving from April 3, 1800, to March 3, 1803. He was an unsuccessful candidate for reelection in 1802. After leaving the Senate, he served as chairman of the Erie Canal Commission, 1810-1813. At the age of 57, he married Anne Cary ("Nancy") Randolph, who was the sister to Thomas Mann Randolph, husband of Thomas Jefferson's daughter Martha Jefferson Randolph. He died at the family estate of Morrisania and is buried at St. Ann's Episcopal Church in the Bronx borough of New York City. Morris's half-brother, Lewis Morris (1726-1798), was a signer of the Declaration of Independence. Another half-brother, Staats Long Morris, was a Loyalist and major-general in the British army during the American Revolution. His nephew, Lewis Richard Morris, served in the Vermont legislature and in the United States Congress. His grandnephew was William M. Meredith, United States Secretary of the Treasury under Zachary Taylor. Morris's great-grandson, also named Gouverneur (1876-1953), was an author of pulp novels and short stories during the early twentieth century. Several of his works were adapted into films, including the famous Lon Chaney, Sr. film The Penalty. 
ASL Literature and Art

This section is a collection of ASL storytelling, poetry, works of art, and other creative works. It also consists of posts on literary aspects of ASL. Spoken language can convey sound effects in storytelling, whereas sign language can convey cinematic effects in storytelling. Poetry in sign language has its own poetic features such as rhymes, rhythms, meters, and other features that characterize poetry, which is not limited to speech. Explore ASL literary arts in this section, including some visual-linguistic literary works in ASL and discussion.

Selected works of interest: Deconstruct W.O.R.D.: an original poetry performance. Knowing Fish: a poetic narrative video. Compare three versions of the poem "Spring Dawn" originally written by Meng Hao-jan. The poem is translated by the literary artist Jolanta Lapiak into ASL in video and a unique one-of-a-kind photograph print. Watch how ASL rhymes arise in this signed poem.
March 30, 2011 by Valerie Elkins

The short answer is keizu. The longer answer is not so easy. There are several reasons why it is difficult for those of Japanese ancestry living outside of Japan to trace their lineage. One of the main reasons is a lack of understanding of the language. I am not going to sugar coat it: learning Japanese is hard, BUT learning how to pronounce it is not. There are 5 basic vowel sounds in Japanese. They are always pronounced the same, unlike in English! Vowel lengths are all uniformly short:

- a ~ as in 'father'
- e ~ as in 'bet'
- i ~ as in 'beet'
- u ~ as in 'boot'
- o ~ as in 'boat'

You do not need to know everything in Japanese, but learning some genealogical terms is helpful. Here is a glossary of Japanese genealogical terms to begin building your vocabulary:

- koseki ~ household register, includes everyone in a household under the head of house (who usually was male)
- koseki tohon ~ certified copy which records everything from the original record.
- koseki shohon ~ certified copy which records only parts of the original.
- joseki ~ expired register in which all persons originally entered have been removed because of death, change of residence, etc. A joseki file is ordinarily available for 80 years after its expiration.
- kaisei genkoseki ~ revised koseki
- honseki ~ permanent residence or registered address (i.e. a person may move to Tokyo but their records remain in their hometown city hall).
- genseki ~ another name for honseki
- kakocho ~ Buddhist death register
- kaimyo ~ Buddhist name given to a deceased person and recorded in the kakocho.
- homyo ~ Buddhist name given to living converts, similar to kaimyo.
- kuni ~ country or nation
- ken ~ prefecture
- shi ~ city
- gun ~ county
- to ~ metropolitan prefecture (Tokyo-to). Similar to ken.
- do ~ urban prefecture (Hokkaido). Similar to ken.
- fu ~ urban prefecture (Kyoto-fu, Osaka-fu). Similar to ken.
- ku ~ ward in some large cities (Sapporo, Sendai, Tokyo), divided into towns (cho).
- cho ~ town
- aza ~ unorganized district
- machi ~ town within a city (cho) or ward (ku), or town within a county (gun).
- chome ~ smaller division of a town (cho) in some neighborhoods.
- mura or son ~ village within a county (gun).
- koshu or hittousha or setainushi ~ head of household, the head of the family
- zen koshu ~ former head of household
- otto ~ husband
- tsuma ~ wife
- chichi or fu ~ father
- haha or bo ~ mother
- sofu ~ grandfather
- sobo ~ grandmother
- otoko or dan or nan ~ male, man, son
- onna or jo ~ female, woman, daughter
- ani or kei or kyou ~ older brother
- otouto or tei ~ younger brother
- ane or shi ~ older sister
- imouto or mai ~ younger sister
- mago or son ~ grandchild
- himago or souson ~ great-grandchild
- oi ~ nephew
- mei ~ niece
- youshi ~ adopted child or son
- youjo ~ adopted daughter
- muko youshi ~ a man without sons may adopt his eldest daughter's husband as his own son; the young man takes his wife's surname and is listed on her family's koseki
- seimei or shime ~ full name, family name
- shussei or shusshou ~ birth
- shibou ~ deceased
- nen or toshi ~ year
- gatsu, getsu or tsuki ~ month
- hi or nichi or ka ~ day
- ji or toki ~ hour, time
- sai or toshi ~ age
- issei ~ person born in Japan who later immigrated elsewhere
- nisei ~ child/generation of issei, born outside of Japan
- sansei ~ child/generation of nisei, born outside of Japan
- yonsei ~ child/generation of sansei, born outside of Japan
- gosei ~ child/generation of yonsei, born outside of Japan

There is another Japanese term you really need to know. It is ganbatte, which means 'hang in there' or 'do your best', and either one will work.
The Winter Soldiers, December 1973 | Volume 25, Issue 1

That was the end of the campaigning in 1776. Washington went into winter quarters and waited for a spring that would bring new hardships. Most of his soldiers went home, to be replaced by unpromising recruits. But now, for the rest of the war, there would be among these green men a leavening of veterans who had seen Hessians surrender and British regulars run from a pitched battle and who would not forget the sight. As Ketchum writes, “The Americans’ revolution survived—survived in some mysterious way that no one could quite fathom—in no small part because of what George Washington and his soldiers achieved against all the odds that nature and a vastly superior military force could pit against them. … Because of their accomplishments, the waning days of 1776 were not the end of everything, but a new beginning.” The Winter Soldiers is the story of the men who brought about that new beginning; it is a story that cannot be told too often and has rarely been told so well.
Here are 15 ways kids can benefit from learning meditation techniques:

1. Meditation practice develops strength of character, as the child learns about virtuous living by thinking over the qualities of each virtue.
2. Meditation can help a child learn to think for themselves, and determine a best course of action by reflecting on possible solutions.
3. If the child is being raised in a particular religion, the quiet time of meditation is a chance to reflect on the spiritual lesson for the day.
4. Meditation is a positive activity that can show a child's friends a way to handle stress and work out problems.
5. Meditation feels good, because a calm mind and relaxed body generate feelings of harmony.
6. As kids grow up and meditation deepens, feelings of joy awaken within and can be shared in daily living through caring actions, making meditation a win-win activity.
7. Learning even, regular breathing gives instant stress relief on a moment's notice. Regular, even breathing is a bridge that ties body to mind; when breathing is made regular it calms the physical body.
8. Following the sequence within a meditation develops memory and the ability to concentrate, which carries over into school work.
9. Longer meditations give the body and mind time to deeply relax and center.
10. I found in teaching that active or hyperactive kids moving in rhythm with a group, such as in walking meditation, experience a calming effect.
11. Meditation is non-competitive, and each child can participate within the limits of their own ability, with adaptations if needed, or by working with a partner.
12. Meditation can be done individually or in a group setting, and it does not need a special place to practice.
13. Beginning meditation practice in childhood sets up a lifetime habit for handling stress, and as the child matures, spiritual qualities can be added to meditate upon, such as kindness, love, honesty and compassion.
14. Developing the discipline to sit still for meditation carries over into learning how to sit and concentrate to work out a problem or do school work.
15. Meditation feeds self-esteem: when in control of body and mind, the youngster finds the confidence to handle any situation competently.

Article by Susan Helene Kramer
Although often used interchangeably, the words "fate" and "destiny" have distinct connotations.

- Traditional usage defines fate as a power or agency that predetermines and orders the course of events. Fate defines events as ordered, "inevitable" and unavoidable. Classical and European mythology features three goddesses dispensing fate, known as the Moirai in Greek mythology, the Parcae in Roman mythology, and the Norns in Norse mythology. They determine the events of the world through the mystic spinning of threads that represent individual human fates.
- Destiny is used with regard to the finality of events as they have worked themselves out; and to that same sense of "destination", projected into the future to become the flow of events as they will work themselves out.

In other words, "fate" relates to events of the present and future of an individual, and in literature is often unalterable, whereas "destiny" relates to the probable future. Fate implies no choice, but with destiny the entity participates willfully in achieving an outcome that is directly related to itself.

In Hellenistic civilization, the chaotic and unforeseeable turns of chance gave increasing prominence to a previously less notable goddess, Tyche, who embodied the good fortune of a city and of all whose lives depended on its security and prosperity, two good qualities of life that appeared to be out of human reach. The Roman image of Fortuna, with the wheel she blindly turned, was retained by Christian writers, revived strongly in the Renaissance, and survives in some forms today.

In daily language "destiny" and "fate" are synonymous, but in 19th-century philosophy the words gained inherently different meanings. For Arthur Schopenhauer, destiny was a manifestation of the Will to Live, which can at the same time be a living fate and a choice to overcome that fate, by means of art, morality and asceticism.
For Nietzsche, destiny takes the form of Amor fati (Love of Fate) through an important element of his philosophy, the "will to power" (der Wille zur Macht), the basis of human behavior, influenced by Schopenhauer's Will to Live. The concept may carry other senses as well, although in various places he saw the will to power as a strong element of adaptation or survival. Nietzsche eventually transformed the idea of matter as centers of force into matter as centers of will to power, as mankind's destiny to face with amor fati. The expression Amor fati is used repeatedly by Nietzsche for an acceptance of fate that is at the same time a choice, so that it becomes something more: a "chosen" destiny.

Many Greek legends and tales teach the futility of trying to outmaneuver an inexorable fate that has been correctly predicted. This form of irony is important in Greek tragedy, as it is in Oedipus Rex, in the Duque de Rivas' play that Verdi transformed into La Forza del Destino ("The Force of Destiny"), in Thornton Wilder's The Bridge of San Luis Rey, and in Macbeth's uncannily derived knowledge of his own destiny, which in spite of all his actions does not preclude a horrible fate. Other notable examples include Thomas Hardy's Tess of the d'Urbervilles, in which Tess is destined to the miserable death that she is confronted with at the end of the novel; Samuel Beckett's Endgame; and the popular short story "The Monkey's Paw" by W. W. Jacobs. Destiny is a recurring theme in the literature of Hermann Hesse (1877–1962), including Siddhartha (1922) and his magnum opus, Das Glasperlenspiel, also published as The Glass Bead Game (1943). The common theme of these works involves a protagonist who cannot escape a destiny whose fate has been sealed, however hard they try. Destiny is also an important plot point in the hit TV shows Lost, Heroes and Supernatural, as well as a common theme in the Roswell TV series.
Destiny is a recurring theme in the video-game franchise Kingdom Hearts, with Kingdom Hearts: Birth By Sleep having its story based around the concept of Destiny, and the tagline for the game stating "Destiny is never left to chance." Destiny is also a prominent theme in the anime Mawaru Penguindrum, which focuses on the concept that humans cannot escape from their own fate.
In 1765, the first census in Puerto Rico was conducted. It produced a count of 44,833 inhabitants, 5,037 of them black slaves. Details about the free population were not specified in the count. The number of black slaves rose to 51,265 in 1846, one of the last counts before the abolition of slavery. The census of 1765 began a rich history of census taking. Under Spanish rule, the population was counted 13 times. A year after the United States invasion, the first census under U.S. command was conducted under the supervision of the United States Department of War. This census determined that at the end of the 19th century the island population was 953,243 inhabitants. Beginning in 1910, Puerto Rico was included in the United States Census, and since then the Census Bureau, under the United States Commerce Department, has provided official figures on the number of residents as of the first of April in years ending in zero. Under United States rule, 11 censuses have been conducted. Reaching a level of one million inhabitants took more than 130 years, but the population rose to two million in less than fifty years. Passing three million was an even faster process, taking about four decades. It is expected that the 2010 census will show the Puerto Rico population stabilized around four million, which would imply that reaching the fourth million occurred in less than thirty years.

Author: Ana L. Dávila Román. Published: September 20, 2010.
Mali has been embroiled in civil war since January 2012, when separatists in Mali's northern Azawad region began demanding independence from the southern, Bamako-based government. After forcing the Malian military from the north, however, the separatist forces soon became engaged in a conflict of their own, between the original Mouvement National pour la Libération de l'Azawad (MNLA) and extremist Islamist splinter factions closely linked with Al-Qaeda. On 11 January 2013, France responded to Mali's urgent request for international assistance and initiated 'Operation Serval' to aid the recapture of Azawad and defeat the extremist groups. From the 18th, West African states began reinforcing French forces with at least 3,300 extra troops. In a BBC 'From Our Own Correspondent' editorial, Hugh Schofield wrote of 'la Francafrique', or France's considerable interests in West Africa held over from the end of formal empire. In fits and spurts, France has sought to extract itself from la Francafrique and to seek a new relationship with the continent. But in the complex world of post-colonial relationships, such a move is difficult. France retains strong economic, political, and social links with West Africa. Paris, Marseille, and Lyon are home to large expatriate African communities. Opinions at the Élysée Palace, too, have shifted wildly over the years. Jacques Chirac, at least according to Schofield, was 'a dyed-in-the-wool Gaullist', and an ideological successor to a young François Mitterrand who, in 1954, defiantly pronounced that 'L'Algérie, c'est la France'. Nicolas Sarkozy, on the other hand, dramatically distanced himself both from Chirac and from the Francafrique role. The problem is, at least in part, topographical in nature. West Africa's geography is dangerous, vast, and difficult to subordinate.
On the eve of much of West Africa’s independence from France in 1961, R J Harrison Church spoke of the so-called Dry Zone, the area running horizontally from southern Mauritania across central Mali and Niger, as the great “pioneer fringe” of the region’s civilization. David Hilling, in his 1969 Geographical Journal examination, added that by “taming” the Saharan interior, France gained an important strategic advantage over its British rivals in the early twentieth century, enjoying access to resources unavailable along the coast. But, as A T Grove discussed in his 1978 review, “colonising” West Africa was much easier said than done, and the French left a West Africa mired in dispute, open to incursions, and still heavily reliant on the former imperial power. The French relationship with the region’s extreme geography was difficult at best; political boundaries were similar to those of the Arabian Peninsula, and the Rub’ al-Khali in particular: fluid, ill-defined, and not always recognised by local peoples. European-set political boundaries only exacerbated tensions between indigenous constituencies who had little or no say in the border demarcations. French and African efforts to dam the Niger River, for instance, were hampered by high costs, arduous terrain, and political instability well into the 1960s. On independence, the French left what infrastructure they could, mostly in West Africa’s capitals and port cities; the vast interiors were often left to their own devices. As a result of these events, France has maintained a large military, economic, and social presence in the region ever since. The difficulty is that areas under weak political control, such as the Malian, Somalian, and Sudanese deserts, have become havens for individuals who wish to operate outside international and national law.
R J Harrison Church, 1961, ‘Problems and Development of the Dry Zone of West Africa’, The Geographical Journal 127, 187-99.
David Hilling, 1969, ‘The Evolution of the Major Ports of West Africa’, The Geographical Journal 135, 365-78.
A T Grove, 1978, ‘Geographical Introduction to the Sahel’, The Geographical Journal 144, 407-15.
Ieuan Griffiths, 1986, ‘The Scramble for Africa: Inherited Political Boundaries’, The Geographical Journal 152, 204-16.
‘Le Mali attend le renfort des troupes ouest-africaines’, Radio France Internationale, 19 January 2013, accessed 19 January 2013.
Hugh Schofield, ‘France and Mali: An “ironic” relationship’, BBC News, 19 January 2013, accessed 19 January 2013.
"We consider it essentially the child's personal time and don't feel it should be taken away for academic or punitive reasons," said Dr. Robert Murray, who co-authored the new policy statement for the American Academy of Pediatrics. Recess helps students develop communication skills, such as cooperation and sharing, and helps counteract the time they spend sitting in class, according to the statement. "The cognitive literature indicates that children are exactly as we are as adults. Whenever they're performing a complicated or complex task, they need time to process the information," said Murray, a professor at Ohio State University in Columbus. "Kids have to have that time scheduled. They're not given the opportunity to just get up and walk around for a few minutes," he added. Previous research, according to the statement's authors, found children pay closer attention and perform better mentally after recess. Last January, a review of 14 studies found kids who get more exercise from - among other things - recess and playing on sports teams tend to do better in school (see Reuters Health story of January 3, 2012 here: http://reut.rs/UcJhV0). But a 2011 survey of 1,800 elementary schools found about a third were not offering recess to their third grade classes (see Reuters Health story of December 5, 2011 here: http://reut.rs/UcOqwt). Murray told Reuters Health that schools in Japan offer children about 10 minutes of free time after every 50 minutes of class, which he said makes sense. "I think you can feel it if you go to a lecture that after 40 to 50 minutes of a concentrated activity you need to take a break," he said. Currently, the American Heart Association calls for at least 20 minutes of recess every day, but Murray said recess needs depend on the child. "Most schools - on average - are working on the framework of 15 to 30 minute bursts of recess once or twice a day," he said. 
There is, however, consensus on when in the day children's recess should take place. The U.S. Centers for Disease Control and Prevention and the U.S. Department of Agriculture both recommend schools schedule recess before lunch. Previous studies have found that children waste less food and behave better for the rest of the day when their recess is before their scheduled lunch, the pediatricians' statement notes. The statement also says schools should not substitute physical education classes for recess. "Those are completely different things and they offer completely different outcomes," said Murray. "(Physical education teachers are) trying to teach motor skills and the ability of those children to use those skills in a bunch of different scenarios. Recess is a child's free time." The pediatricians also warn against a recess that is too structured, such as having games led by adults. "I think it becomes structured to the point where you lose some of those developmental and social-emotional benefits of free play," said Murray. "This is a very important and overlooked time of day for the child and we should not lose sight of the fact that it has very important benefits," he added. SOURCE: http://bit.ly/HjQ8dI Pediatrics, online December 31, 2012.
Cleopatra, queen of Egypt and lover of Julius Caesar and Mark Antony, takes her life following the defeat of her forces against Octavian, the future first emperor of Rome. Cleopatra, born in 69 B.C., was made Cleopatra VII, queen of Egypt, upon the death of her father, Ptolemy XII, in 51 B.C. Her brother was made King Ptolemy XIII at the same time, and the siblings ruled Egypt under the formal title of husband and wife. Cleopatra and Ptolemy were members of the Macedonian dynasty that had governed Egypt since the death of Alexander the Great in 323 B.C. Although Cleopatra had no Egyptian blood, she alone in her ruling house learned Egyptian. To further her influence over the Egyptian people, she was also proclaimed the daughter of Re, the Egyptian sun god. Cleopatra soon fell into dispute with her brother, and civil war erupted in 48 B.C. Rome, the greatest power in the Western world, was also beset by civil war at the time. Just as Cleopatra was preparing to attack her brother with a large Arab army, the Roman civil war spilled into Egypt. Pompey the Great, defeated by Julius Caesar in Greece, fled to Egypt seeking refuge but was immediately murdered by agents of Ptolemy XIII. Caesar arrived in Alexandria soon after and, finding his enemy dead, decided to restore order in Egypt. During the preceding century, Rome had exercised increasing control over the rich Egyptian kingdom, and Cleopatra sought to advance her political aims by winning the favor of Caesar. She traveled to the royal palace in Alexandria and was allegedly carried to Caesar rolled in a rug, which was offered as a gift. Cleopatra, beautiful and alluring, captivated the powerful Roman leader, and he agreed to intercede in the Egyptian civil war on her behalf. In 47 B.C., Ptolemy XIII was killed after a defeat against Caesar's forces, and Cleopatra was made dual ruler with another brother, Ptolemy XIV. 
Caesar and Cleopatra spent several amorous weeks together, and then Caesar departed for Asia Minor, where he declared "Veni, vidi, vici" (I came, I saw, I conquered), after putting down a rebellion. In June 47 B.C., Cleopatra bore a son, who she claimed was Caesar's and named Caesarion, meaning "little Caesar." Upon Caesar's triumphant return to Rome, Cleopatra and Caesarion joined him there. Under the pretext of negotiating a treaty with Rome, Cleopatra lived discreetly in a villa that Caesar owned outside the capital. After Caesar was assassinated in March 44 B.C., she returned to Egypt. Soon after, Ptolemy XIV died, likely poisoned by Cleopatra, and the queen made her son co-ruler with her as Ptolemy XV Caesar. With Julius Caesar's murder, Rome again fell into civil war, which was temporarily resolved in 43 B.C. with the formation of the second triumvirate, made up of Octavian, Caesar's great-nephew and chosen heir; Mark Antony, a powerful general; and Lepidus, a Roman statesman. Antony took up the administration of the eastern provinces of the Roman Empire, and he summoned Cleopatra to Tarsus, in Asia Minor, to answer charges that she had aided his enemies. Cleopatra sought to seduce Antony, as she had Caesar before him, and in 41 B.C. arrived in Tarsus on a magnificent river barge, dressed as Venus, the Roman goddess of love. Successful in her efforts, Antony returned with her to Alexandria, where they spent the winter in debauchery. In 40 B.C., Antony returned to Rome and married Octavian's sister Octavia in an effort to mend his strained alliance with Octavian. The triumvirate, however, continued to deteriorate. In 37 B.C., Antony separated from Octavia and traveled east, arranging for Cleopatra to join him in Syria. In their time apart, Cleopatra had borne him twins, a son and a daughter. According to Octavian's propagandists, the lovers were then married, which violated the Roman law restricting Romans from marrying foreigners. 
Antony's disastrous military campaign against Parthia in 36 B.C. further reduced his prestige, but in 34 B.C. he was more successful against Armenia. To celebrate the victory, he staged a triumphal procession through the streets of Alexandria, in which he and Cleopatra sat on golden thrones, and Caesarion and their children were given imposing royal titles. Many in Rome, spurred on by Octavian, interpreted the spectacle as a sign that Antony intended to deliver the Roman Empire into alien hands. After several more years of tension and propaganda attacks, Octavian declared war against Cleopatra, and therefore Antony, in 31 B.C. Enemies of Octavian rallied to Antony's side, but Octavian's brilliant military commanders gained early successes against his forces. On September 2, 31 B.C., their fleets clashed at Actium in Greece. After heavy fighting, Cleopatra broke from the engagement and set course for Egypt with 60 of her ships. Antony then broke through the enemy line and followed her. The disheartened fleet that remained surrendered to Octavian. One week later, Antony's land forces surrendered. Although they had suffered a decisive defeat, it was nearly a year before Octavian reached Alexandria and again defeated Antony. In the aftermath of the battle, Cleopatra took refuge in the mausoleum she had commissioned for herself. Antony, informed that Cleopatra was dead, stabbed himself with his sword. Before he died, another messenger arrived, saying Cleopatra still lived. Antony had himself carried to Cleopatra's retreat, where he died after bidding her to make her peace with Octavian. When the triumphant Roman arrived, she attempted to seduce him, but he resisted her charms. Rather than fall under Octavian's domination, Cleopatra committed suicide on August 30, 30 B.C., possibly by means of an asp, a poisonous Egyptian serpent and symbol of divine royalty. 
Octavian then executed her son Caesarion, annexed Egypt into the Roman Empire, and used Cleopatra's treasure to pay off his veterans. In 27 B.C., Octavian became Augustus, the first and arguably most successful of all Roman emperors. He ruled a peaceful, prosperous, and expanding Roman Empire until his death in 14 A.D. at the age of 75.
Banker's Committee Stops Panic of '29
A new story by Jeff Provine

In 1929, the wild financial speculation of the Roaring Twenties came to a sudden halt in October when the stock market began to slide. Worries spread through the economic community about the passing of the Smoot-Hawley Tariff Act. Tariffs had always been a point of contention among Americans, even spurring South Carolina to threaten secession over the Tariff Act of 1828. Producers such as farmers and manufacturers called for protective tariffs while merchants and consumers demanded low prices. The American economy soared while post-war Europe rebuilt in the '20s, and the Tariff Act of 1922 skimmed valuable revenue from the nation's income that would otherwise have been needed as taxes. The country barely noticed, and the economy surged forward as new technological luxuries became available along with new disposable income. Meanwhile, however, the nation faced an increasingly difficult drought while food prices continued to drop during Europe's recovery. Farmers were stretched thinner and thinner, prompting calls for protective agricultural tariffs and cheaper manufactured goods. In his 1928 presidential campaign, Herbert Hoover promised just that, and as the legislature met in 1929, talks on a new tariff began. Led by Senator Reed Smoot (R-Utah) and Representative Willis C. Hawley (R-Oregon), the bill quickly became more than Hoover and the farmers had bargained for, as rates would increase to levels exceeding those of 1828 for industrial as well as agricultural products. The revenue would be a great boon, but it unnerved economists, who wondered if it could kill economic growth that was already being slowed by a dipping real estate market. The nervousness spread from economists to investors, who took the heated debate in the Senate as a clue that times might become rough and decided to get out of the stock market while they could. 
Prices had skyrocketed over the course of the '20s as the middle class blossomed and minor investors came into being. Another hallmark of the '20s, credit, enabled people to buy stock on margin, borrowing money they could invest at what they hoped would be a higher return. The idea of a "money-making machine" spread, and August of 1929 showed more than $8.5 billion in loans, more than all of the money in circulation in the United States. The market peaked on September 3 at 381.17 and then began a downward correction. When a brief rebound in late October failed, panicked selling began. On October 24, what became known as "Black Thursday", the market fell more than ten percent. On Friday, it did the same, and the initial outlook for the next week was dire. Amid the early selling in October, financiers saw that a crash was coming and met on October 24 while the market plummeted. The heads of firms and banks such as Chase, Morgan, and the National City Bank of New York collaborated and finally placed Richard Whitney, vice-president of the New York Stock Exchange, in charge of stopping the disaster. Forty-one-year-old Whitney was a successful financier from an American family dating back to 1630, with numerous connections in the banking world, who had purchased a seat on the NYSE Board of Governors only two years after starting his own firm. Whitney's initial strategy was to replicate the cure for the Panic of 1907: purchasing large amounts of valuable stock above market price, starting with the "blue chip" favorite U.S. Steel, the world's first billion-dollar corporation. On his way to make the purchase, however, Whitney bumped into a junior analyst who was projecting banking futures based on the increase in failing mortgages from failing farms and a weakening real estate market. The analyst suggested that the problems of the new market were caused from the bottom up, and that a top-down solution would only put off the inevitable. 
Instead of an ostentatious show of purchasing to prove to the public that money was still to be had, Whitney decided to use the massive banking resources behind him to support the falling stocks. He made key purchases late on the 24th, and then his staff worked through the night determining which stocks were needlessly inflated, which were solid, and which could be salvaged (perhaps even at a profit). Stocks continued to tumble that Friday, but by Monday, thanks to word-of-mouth and glowing press from newspapers and the new radio broadcasts, the selling had slowed, and Tuesday ended with a slight upturn in the market of .02 percent. Numerically unimportant, the recovery of public support was the key success. With the initial battle won, Whitney spearheaded a plan to salvage the rest of the crisis as real estate continued to fall and banks (which were quickly running out of funds as they seized more and more of the market) would soon hold piles of worthless mortgaged homes and farms. Banks organized themselves around the Federal Reserve, founded in 1913 after a series of smaller panics, and determined rules that would keep banks afloat. Further money came from lucrative deals with the wealthiest men in the country, such as John D. Rockefeller, Henry Ford, and the Mellons of Pittsburgh. Businesses managed to continue work despite down-turning sales through loans, though the unemployment rate did increase from 3 to 5 percent over the winter. The final matter was the question of international trade. As the Smoot-Hawley Tariff Act continued through the Senate, economists predicted that retaliatory tariffs from other countries would kill American exports, but Washington turned a deaf ear. Whitney decided to protect his investment in propping up the economy by wielding campaign contributions. Democrats took the majority as the Republicans fell to Whitney's use of the press to blame the woes of the economy on Congressional "airheads". 
Representative Hawley himself lost his seat in the House, which he had held since 1907, to Democrat William Delzell. President Hoover, a millionaire businessman before entering politics, noted the shift, but remained quiet and dutifully vetoed the new tariff. By 1931, it had become steadily obvious that America had shifted to an oligarchy. The banks propped up the market and were propped up themselves by a handful of millionaires. If Rockefeller wanted, he could single-handedly pull his money and collapse the whole of the American nation. Whitney took greater power as Chairman of the Federal Reserve, whose new role gave it indirect control over everything of economic and political worth. As the Thirties dragged on, the havoc of the Dust Bowl made food prices increase while simultaneously weakening the farming class, and Whitney gained further power by ousting Secretary of Agriculture Arthur Hyde and installing his own man as a condition for Hoover's reelection in '32. Chairman Whitney would "rule" the United States, wielding public-relations power and charisma to give Americans a strong sense of national emergency and patriotism during times like the Japanese War in '35 (which secured new markets in East Asia) and the European Expedition in '39. He employed the Red Scare to keep down ideas of insurrection and used the FBI as a secret police, but his ultimate power would be that, at any point, he could tamper with interest rates or stock and property values, and the country would spiral into rampant unemployment and depression, dragging the rest of the world with it.
Invalid Forensic Science Testimony and Wrongful Convictions
Flawed testimony by forensic experts contributed to the conviction of innocent defendants, according to a new study co-written by University of Virginia Law School professor Brandon Garrett. The findings are featured in an article, "Invalid Forensic Science Testimony and Wrongful Convictions," published in the March 2009 issue of the Virginia Law Review. Garrett and Peter Neufeld, co-director of the Innocence Project, studied the transcripts of 137 trials in which prosecution forensic analysts testified, and the defendants were exonerated years later by post-conviction DNA testing. The pair found that in 60 percent of those wrongful conviction cases, forensic analysts gave "invalid testimony that overstated the evidence," Garrett said. "What we mean by 'invalid' is simply that the testimony was unscientific or contrary to empirical data," he said. "Just because a wrong statistic was offered does not mean that the testimony necessarily caused the wrongful conviction. However, these powerful examples support efforts to adopt and enforce scientific standards governing forensic reporting and testimony." The flawed testimony uncovered by Garrett and Neufeld included erroneous or unsupported testimony about the accuracy and results of forensic techniques, including hair comparison, bite-mark comparison, fingerprint comparison and even DNA testing. The study originated with a request to Garrett — who conducted previous research on wrongful convictions — from a National Academy of Sciences committee examining the needs of the forensic science community, asking him to present at one of the committee's public hearings. Garrett and Neufeld then spent more than a year compiling and analyzing trial transcripts from the cases of people later exonerated by DNA evidence. 
Several scientists and forensic scientists also reviewed the categories used for analysis and examined transcripts in particular cases. The majority of the cases were rape cases from the 1980s, and many included testimony about forensic techniques that are still used today, said Neufeld, co-founder of the Innocence Project, a national litigation and public policy organization that uses DNA testing to exonerate wrongfully convicted people and seeks to reform the criminal justice system to prevent injustice. The National Academy of Sciences report, "Strengthening Forensic Science in the United States: A Path Forward," was released in February, and recommends the establishment of a national institute of forensic science, an independent scientific entity to adopt and enforce standards for forensic report writing and testimony. "With the exception of nuclear DNA analysis ... no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source," the National Academy of Sciences report says. In their study, Garrett and Neufeld found that forensic analysts often testified that a particular piece of evidence — such as a hair or a fingerprint — was closely connected to the innocent defendant, despite the fact that no scientific data permitted analysts to reach such conclusions. For example, in one case an analyst told a jury that only 5 percent of the population had a certain type of hair pigment discovered at a crime scene, and that the defendant was among them. But there is no empirical data about the frequency of particular hair pigments, Garrett said. "These trial transcripts were fascinating to read, because in retrospect we know that all of the defendants were innocent," he said. "Yet few have looked at these records. 
Even after these wrongful convictions came to light, crime laboratories rarely conducted audits or investigations to review the forensic evidence presented at the trial." The study doesn't speak to the state of mind of analysts; it's impossible to tell from trial materials whether they were simply inexperienced or poorly supervised or acting in bad faith, Garrett said. "Nor do we know how many cases involved concealment of forensic evidence," he said. "In 13 exonerees' cases it has since come to light that forensic evidence was concealed that would have helped to prove innocence. Similarly, we do not know from reading trial transcripts in how many cases errors were made in the laboratory, although in a few exonerees' cases in which the underlying evidence was re-examined, gross errors have come to light." The study also notes that the criminal justice system is not well-suited to prevent unscientific testimony. One reason is that the presentation of forensic evidence is almost entirely one-sided, Garrett said. "Defense counsel rarely cross-examined analysts concerning invalid testimony and rarely retained experts, since courts routinely deny funding for defense experts." Only 19 of the eventual exonerees whose cases were examined had defense experts. "Prosecutors, moreover, presented erroneous accounts of the forensic evidence during closing arguments," Garrett said. The study's authors agree with the National Academy of Sciences report's assessment that a set of national scientific standards should be established to ensure the valid presentation of forensic analysis. Neufeld called the report "a major breakthrough toward ensuring that so-called scientific evidence in criminal cases is solid, validated and reliable."
Discovered in 1988, the Roman Hippodrome in Beirut is situated in Wadi Abou Jmil, next to the newly renovated Jewish Synagogue in Downtown Beirut. This monument, dating back thousands of years, now risks being destroyed. The hippodrome is considered, along with the Roman Road and Baths, one of the most important remaining relics of the Byzantine and Roman era. It spreads over a total area of 3,500 m2. Requests for construction projects at the hippodrome’s location have been ongoing since the monument’s discovery but were constantly refused by former ministers of culture, among them Tarek Metri, Tamam Salam and Salim Warde. In fact, Tamam Salam had even issued a decree banning any work on the hippodrome’s site, effectively protecting it by law, and Salim Warde did not contest the decree. The current minister of culture, Gabriel Layoun, however, has authorized construction to commence. When it comes to ancient sites in cities that have many of them, such as Beirut, the approach currently adopted towards these sites is called a “mitigation approach,” which requires that the incorporation of the monuments into modern plans not affect those monuments in any way whatsoever. The current approval by minister Layoun does not demand that such an approach be adopted. The monument will have one of its main walls dismantled and taken out of location. Why? To build a fancy new high-rise instead. Minister Layoun sees nothing wrong with this. In fact, displacing ruins is never done except under extreme circumstances, and I highly doubt that whatever Solidere has in store for the land qualifies as an “extreme circumstance.” The Roman Hippodrome in downtown Beirut is considered one of the best preserved not only in Lebanon, but in the world. It is also the fifth to be discovered in the Middle East. In fact, a report (in Arabic) by the General Director of Ruins in Lebanon, Frederick Al Husseini, spoke of the importance of the monument as one that has been discussed in various ancient books. 
It has also been linked with Beirut’s famous ancient Law School. He speaks about the various structures that are still preserved and need only some restoration to be fully exposed. He called the monument a highly important site for Lebanon and the world, one of Beirut’s main facilities from the Byzantine and Roman eras, and suggested working on preserving it and making the site one of Beirut’s important cultural and touristic locations. His report dates back to 2008. MP Michel Aoun, the head of the party to which Gabriel Layoun belongs, defended his minister’s position by saying: “There are a lot of discrepancies between Solidere and us. Therefore, a minister from our party cannot be subjected to Solidere. Minister Layoun found a way, which is adopted internationally, to incorporate ancient sites with newer ones… So I hope that media outlets do not discuss this issue in a way that would raise suspicion.” With all due respect to Mr. Aoun and his minister, endangering Beirut’s culture and stripping away even more of the identity that makes it Beirut is not a decision that should be left to him or to Solidere. What’s happening is a cultural crime against the entirety of the Lebanese population, one beside which the interests of meaningless politicians are irrelevant. Besides, for a party that has been anti-Solidere for years, I find it highly hypocritical that they are allowing Solidere to dismantle the Roman Hippodrome. The conclusion: never has a hippodrome been dismantled and displaced anywhere in the world. Beirut’s hippodrome will effectively become part of the parking lot of the high-rise to be built in its place. No mitigation approach will be adopted here. It is only a diversion until people forget and plans are well underway in secrecy. But we cannot continue to be silent about this blatant persecution of our history. If there’s anything we can do, it is to let the issue propagate as much as we can. 
There shouldn’t be a Lebanese person within the 10,452 km2 who remains clueless about any endangered monument, for that matter. Sadly enough, this goes beyond the hippodrome. We have become so accustomed to this reality that we have grown very submissive: the ancient Phoenician port is already behind us, there is construction around the ancient Phoenician port of Tyre, and the city itself risks being removed from UNESCO’s list of Cultural Heritage Sites. The land on which ancient monuments are built doesn’t belong to Solidere, to the Ministry of Culture or to any other contractor – no matter how much they’ve paid to buy it. It belongs to the Lebanese people in their entirety. When you realize that of the 200 sites uncovered at Solidere, those that have remained intact can be counted on the fingers of one hand, the reality becomes haunting. It’s about time we rose up for our rights. Beirut’s hippodrome will not be destroyed.
January 21, 2010 An estimated 430,000 children worldwide became infected with HIV in 2008, mostly through birth or breastfeeding from an HIV-infected mother. Many regions of the world are gaining increased access to complex antiretroviral drug regimens for preventing HIV transmission from a mother to her child. However, these strategies have not yet been directly compared with simpler antiretroviral drug regimens in terms of their safety, efficacy, feasibility and cost-effectiveness. On January 15, a large, multinational clinical trial began to determine how best to reduce the risk of HIV transmission from infected pregnant women to their babies during pregnancy and breastfeeding while preserving the health of these children and their mothers. The PROMISE (“Promoting Maternal-Infant Survival Everywhere”) study aims to enroll 7,950 HIV-infected women who are pregnant or have recently given birth and 5,950 HIV-exposed infants of these women. The participants will come from as many as 18 countries whose levels of resources range from high to low. The International Maternal Pediatric Adolescent AIDS Clinical Trials network is conducting the study with funding from the National Institute of Allergy and Infectious Diseases and the Eunice Kennedy Shriver National Institute of Child Health and Human Development, both part of the National Institutes of Health. Led by protocol chair Mary Glenn Fowler, M.D., M.P.H., of the Makerere University–Johns Hopkins University Research Collaboration in Kampala, Uganda, the study team expects results in five to six years. The HIV-infected women eligible to participate in PROMISE do not yet qualify for treatment—that is, their CD4+ T cell count, a measure of immune health, exceeds the level (350 cells per cubic millimeter of blood) at which highly active antiretroviral therapy (HAART) generally is recommended. HAART consists of a potent combination of three or more antiretroviral drugs. 
The study addresses four distinct research questions. Most volunteers will participate in multiple components of the study to answer these questions. The first component will examine which of two proven strategies is safer and more effective at preventing mother-to-child HIV transmission before and during delivery: giving HIV-infected pregnant women a three-antiretroviral-drug regimen beginning as early as 14 weeks of gestation, or giving them the antiretroviral drug zidovudine beginning as early as 14 weeks of pregnancy and a single dose of the antiretroviral drug nevirapine during labor. The regimen of zidovudine and nevirapine is the standard of care in many countries for women who do not yet require treatment for their HIV infection. Some 4,400 women will be assigned at random to receive either one of these two interventions. The second component of the PROMISE study will compare the safety and efficacy of two methods of preventing mother-to-child HIV transmission during breastfeeding. The study team will assign 4,650 mother-infant pairs at random either to receive a daily dose of infant nevirapine or to have the mothers take a three-antiretroviral-drug regimen throughout breastfeeding. The third component of the PROMISE study will examine the effects of short-term use of a three-antiretroviral-drug regimen during pregnancy and breastfeeding to prevent mother-to-child HIV transmission on the health of HIV-infected mothers who do not yet need treatment. For such women, it remains unclear whether stopping the three-drug regimen after giving birth or ceasing to breastfeed would compromise their health. Although past studies have shown that interrupting treatment with antiretroviral drugs has a negative effect, the conditions in those studies are different enough from the conditions of the PROMISE study to make extrapolating the results difficult, according to the study investigators. 
The 4,675 women participating in this third component of PROMISE will be assigned at random either to stop the three-antiretroviral-drug regimen after giving birth or weaning, or to continue the drug regimen indefinitely. The health of these two groups will be compared. In addition, the women who receive the time-limited three-drug regimen will be compared with the women who participated in the first component of PROMISE and did not receive the three-drug regimen, but rather took zidovudine during pregnancy and single-dose nevirapine during labor. The last component of the PROMISE study involves protecting the health of HIV-exposed but uninfected infants. In resource-limited settings, it is standard to give the antibiotic cotrimoxazole once daily to infants exposed to HIV at birth until the infant has stopped breastfeeding and is known to be HIV-uninfected. While cotrimoxazole prophylaxis improves the survival rate of HIV-infected infants, it is not known whether continuing to administer the drug after weaning similarly would benefit HIV-exposed but uninfected children. In this fourth component of the PROMISE study, nearly 2,290 HIV-exposed but uninfected, weaned infants under one year old will be assigned at random either to continue receiving cotrimoxazole or to receive a placebo through age 18 months. Neither the mothers of the infants nor the study team will know which infants are in which group. The study will determine whether continuing cotrimoxazole prophylaxis in HIV-exposed, uninfected infants from the time they stop breastfeeding through age 18 months decreases their risk of illness and death without causing side effects or generating bacterial resistance to cotrimoxazole. Media inquiries can be directed to the NIAID Office of Communications at 301-402-1663, firstname.lastname@example.org. 
The NICHD sponsors research on development, before and after birth; maternal, child, and family health; reproductive biology and population issues; and medical rehabilitation. For more information, visit the Institute’s Web site at http://www.nichd.nih.gov/. NIAID conducts and supports research—at NIH, throughout the United States, and worldwide—to study the causes of infectious and immune-mediated diseases, and to develop better means of preventing, diagnosing and treating these illnesses. News releases, fact sheets and other NIAID-related materials are available on the NIAID Web site at www.niaid.nih.gov. About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit www.nih.gov.
Asphaltenes
These are polyaromatic compounds, insoluble in n-heptane, with more than 50 carbon atoms. The asphaltene content of a crude may cause deposits in heat exchangers and/or lines: blending a crude of high asphaltene content with a paraffinic crude can upset the equilibrium of the asphaltenes and precipitate them. A high asphaltene content ensures that the vacuum pitch will be suitable for producing asphalt.
ASTM D86 distillation
ASTM D86 distillation is a test that measures the volatility of gasoline, kerosene and diesel.
Basic Sediment and Water (BSW)
The BSW is the content of free (undissolved) water and sediments (mud, sand) in the crude. A low reading is important in order to avoid fouling and difficulties during crude processing, in which the steam produced by the free water can damage the furnace. It is reported as a percentage by volume of the crude.
Carbon residue
This is the weight of the residue remaining after the combustion of a fuel sample. It indicates how readily a heavy fuel produces particles during combustion.
Density
This is the mass of a unit volume. It is expressed in kilograms per liter, or grams per cubic centimeter. Density depends on temperature, as temperature affects the volume of substances.
Pour point (draining point)
The temperature at which a liquid stops flowing when cooled, through the precipitation of crystals of solid paraffin. The pour point is very important because, when unloading paraffinic crudes at sea terminals through underwater pipelines of a certain length, the temperature of the crude can fall below the pour point, creating deposits of wax or solid paraffin in the pipelines and obstructing the flow.
Flash point
This is the minimum temperature at which the vapors of a product flash momentarily when a flame is applied under controlled conditions. It represents the maximum temperature at which a product can be stored or transported in safe conditions. 
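Density's temperature dependence matters in practice when converting a laboratory reading to process conditions. A minimal sketch, assuming a single constant expansion coefficient (real custody-transfer work uses the ASTM/API volume-correction tables; the coefficient and function name below are illustrative only):

```python
# Illustrative only: a constant volumetric expansion coefficient for a
# mid-range crude. Real calculations use the ASTM D1250 / API tables.
BETA = 0.0007  # 1/degC, assumed value

def density_at(rho_ref, t_ref_c, t_c):
    """Estimate density (kg/L) at temperature t_c from a reference reading
    rho_ref taken at t_ref_c, using linear volumetric expansion."""
    return rho_ref / (1.0 + BETA * (t_c - t_ref_c))
```

Warming the sample expands it, so the estimated density falls: under these assumed numbers, `density_at(0.850, 15, 40)` is about 0.835 kg/L.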
Freezing point
This is the temperature at which the crystals formed during the cooling of a product sample disappear completely when the temperature is raised in a controlled way.
Metals content
The metals content of a crude, vanadium and nickel, gives an indication of their content in the heaviest products obtained in refining. This is important because, for example, metals in vacuum gas oil are poisons for catalytic-cracking and hydrocracking catalysts. A high content of vanadium or other metals in fuel oil may cause furnace and boiler tube breakage, because they form corrosive products during combustion.
Cetane number
This measures the ease with which spontaneous ignition of diesel oil occurs, using a standardized engine and reference fuels. The cetane rating is determined by comparison with a mix of cetane (C16) and heptamethylnonane (C15) that has the same ignition delay time as the fuel being examined. The cetane rating measured is the percentage of cetane in the cetane/heptamethylnonane mix. C16 has a cetane rating equal to 100 (it is an easily ignited paraffin) and C15 has a cetane rating equal to 0 (a highly branched paraffin that ignites poorly). A high cetane rating represents a high ignition quality, or a short delay between fuel injection and the start of combustion. The diesel engine uses a high compression ratio to produce spontaneous ignition of the diesel, instead of a spark as in a gasoline engine; the temperature of the compressed air in the diesel engine is high enough to ignite the fuel. Linear paraffins have a high cetane rating and therefore burn well; aromatics, on the other hand, have a low cetane rating and burn badly, producing carbon deposits and black smoke. For that reason, high-quality diesel should have an aromatic content compatible with the specified cetane rating. 
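In the engine test, the sample's ignition delay is bracketed between two reference blends and its cetane number is read off by linear interpolation. A minimal sketch of that blending arithmetic (the function name and numbers are illustrative, not taken from the ASTM D613 procedure):

```python
def cetane_by_bracketing(delay_sample, delay_a, cn_a, delay_b, cn_b):
    """Interpolate the sample's cetane number between two reference blends.

    delay_a, delay_b: ignition delays measured for the bracketing blends;
    cn_a, cn_b: their cetane numbers, i.e. the volume percent of cetane in
    the cetane/heptamethylnonane mix (higher cetane -> shorter delay).
    """
    frac = (delay_sample - delay_a) / (delay_b - delay_a)
    return cn_a + frac * (cn_b - cn_a)
```

A sample whose delay falls midway between a 55-cetane and a 45-cetane reference blend would read 50 under this sketch.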
The cetane rating can also be calculated from the volatility (the temperature at which 50% is distilled) and the density of the diesel; this is called the Calculated Cetane Rating. The reason for using the formula is the high cost of the cetane engine.
Octane number (RON)
The RVP and the octane number are the most important parameters of gasoline quality. The octane number measures the resistance of the gasoline to self-ignition, or premature detonation, under engine operating conditions. Self-ignition is recognized by the knocking noise produced when the gasoline self-ignites, detonating before the cylinder has compressed the whole gasoline-air mixture, with a loss of power. The detonation produces sound waves that are detected using special microphones. The octane rating is measured by comparing the knock produced by a reference fuel mixture in a standardized engine with that produced by the fuel being examined. The reference fuels are iso-octane (2,2,4-trimethylpentane), with an octane rating equal to 100 (high resistance to knocking), and n-heptane, which has an octane rating of zero (very low resistance). The octane rating determined is the percentage by volume of iso-octane in the iso-octane/heptane mixture. Fuels with a high octane rating have greater resistance to premature detonation than those of a lower octane rating. In addition, fuels with a high octane rating can be used in engines with a high compression ratio, which are more efficient. There are two standardized engine methods for determining the octane rating of gasoline: the Research method and the Motor method. The Research method represents the behavior of an engine in cities at low and moderate speeds. The Motor method represents situations with fast acceleration, such as climbing gradients or overtaking. There is another way of expressing the octane rating of a gasoline, called the Highway Octane rating. 
The Highway Octane rating is the sum of the Research octane and Motor octane ratings divided by 2. The Highway Octane rating is used in the United States, while the Research method is used in Chile.
Reid Vapor Pressure (RVP)
The Reid vapor pressure is an empirical test that measures the pressure, in pounds per square inch (psi), exerted by the vapors or light components of a crude or oil product in a closed container at a temperature of 100 °F (38 °C). A high vapor pressure in a crude indicates that light products are present and that they will be burned at the flare during processing if there is no suitable recovery system. In an internal combustion engine, excessive vapor pressure can cause vapor lock, which impedes the flow of gasoline.
Salt content
Crude oil contains salt (NaCl), which comes from the oil fields or from the sea water used as ballast by oil tankers. The salt must be extracted with desalination equipment before the crude oil enters the atmospheric distillation furnace, in order to avoid the corrosion produced in the upper part of the atmospheric tower when the salt decomposes and produces hydrochloric acid. It is expressed in grams of salt per cubic meter of crude.
Self-ignition temperature
The temperature at which some products ignite spontaneously in contact with air (without a flame), probably because the heat produced by slow oxidation accumulates and raises the temperature to the ignition point. Fortunately, oil distillates have very high self-ignition temperatures, which are therefore difficult to reach; for gasoline it is about 450 °C. Oily rags, on the other hand, self-ignite easily and cause fires, and so should be disposed of properly.
Specific gravity
The ratio of the weight of a substance to the weight of an equal volume of water at the same temperature. In the oil industry the API gravity is used, which is measured with hydrometers that float in the liquid. 
The API grades are read directly on the scale at the point where the stem stands above the liquid at the flotation line. The API scale arose from the ease of graduating the hydrometer stem uniformly. °API = 141.5 / (specific gravity) − 131.5. The °API determines whether a crude or product is light or heavy and enables us to calculate the tonnage unloaded. A light crude has an API of 40–50, while a heavy one has 10–24. Sulfur content and API gravity are the properties with the greatest influence on the price of a crude.
Stability
This is the resistance of an oil product to degradation through heat or oxidation. Products containing olefinic material are unstable and susceptible to degradation.
Sulfur content
The sulfur content makes it possible to foresee difficulties in meeting product and atmospheric-emission specifications, since treatment units are needed to meet them; sulfur is also a poison for some catalysts. It also indicates whether the plant metallurgy is suitable for processing the crude. It is expressed as a percentage by weight of sulfur.
Hydrogen sulfide (H2S)
Prior knowledge of the hydrogen sulfide content of the crude permits preventive action and avoids accidents to people. Hydrogen sulfide is very dangerous because it anesthetizes the olfactory nerve, which prevents people from being aware of the situation, and it is lethal even in small quantities. Personnel working in contact with the crude therefore have to wear protective equipment and personal hydrogen sulfide sensors.
Viscosity
This is the degree of resistance of a liquid to flow. The greater the viscosity, the greater the resistance to flow. Viscosity falls as temperature rises. It is measured using special viscosimeters and is expressed in SSU (Saybolt Seconds Universal), SSF (Saybolt Seconds Furol) and in centistokes. Viscosity is important for fuel injection in engines and burners. It is also critical in the pumping of crude oil and products by pipeline. 
A viscosity higher than the design value will reduce the flow and require greater pump-motor capacity. Viscosity also affects the correction factors of measuring instruments, altering their readings.
Volatility
A measure of the ease with which a product vaporizes. Volatile products have a high vapor pressure and a low boiling point. Volatility is measured by the ASTM D86 test and is expressed as the temperatures at which given volumes are distilled.
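Two of the arithmetic definitions in this glossary, the °API formula and the Highway Octane average, can be sketched directly (function names are illustrative):

```python
def api_gravity(specific_gravity):
    """Degrees API = 141.5 / SG - 131.5, with SG relative to water."""
    return 141.5 / specific_gravity - 131.5

def highway_octane(research_octane, motor_octane):
    """Highway (antiknock) octane rating: (RON + MON) / 2."""
    return (research_octane + motor_octane) / 2.0
```

Water (SG = 1.0) reads 10 °API, a crude of SG 0.825 reads about 40 °API (light on the 40–50 scale above), and a gasoline with Research octane 95 and Motor octane 85 posts a Highway Octane of 90.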
Stuttering: Risk Factors During an evaluation for stuttering, a health professional will consider a child's risk factors to help find out whether the problem is temporary (normal disfluency) or likely to persist (developmental stuttering). Risk factors (things that increase risk) for stuttering include: - Having a family member whose stuttering did not resolve on its own. - Being male. Boys are more likely than girls to keep stuttering. - The age that it starts. Children who start to stutter before age 3½ are more likely to outgrow it than children who start to stutter at an older age. - The amount of time that it's lasted. A child who has stuttered for at least 6 months may be less likely to outgrow it on his or her own. If it's lasted longer than 12 months, there's even less of a chance that a child will outgrow it on his or her own. - How clear the speech is. A child who speaks clearly with few, if any, speech errors may be more likely to outgrow stuttering than a child whose speech errors make him or her hard to understand. - Having speech irregularities that have lasted 18 months or more. Usually each risk factor taken individually is not significant. Rather, the strength of each risk factor and how many are present can help a health professional determine whether stuttering is likely to be a long-term problem. By: Healthwise Staff. Last Revised: August 7, 2012. Medical Review: Susan C. Kim, MD - Pediatrics; Louis Pellegrino, MD - Developmental Pediatrics.
BURKE TEACHERS STUDY, PREPARE FOR NEW CURRICULUM Burke County schools are preparing for a curriculum change coming in August as North Carolina joins more than 40 other states in adopting the Common Core State Standards. A pamphlet Burke schools will send to parents next month explains the new way of teaching, testing and holding principals accountable. “These standards describe what students are supposed to know from kindergarten through 12th grade,” the pamphlet reads. “They define the reading, writing and math knowledge and skills needed at each grade level. Each year builds on the next so that by high school graduation, young people are prepared to go to college or to enter the workplace.” Burke school officials attended a crash course in Common Core last month as North Carolina Superintendent of Public Instruction June Atkinson toured the state to talk about the upcoming changes.
Multispectral and Hyperspectral Remote Sensing Techniques for Natural Gas Transmission Infrastructure Systems The goal is to help maintain the nation's natural gas transmission infrastructure through the timely and effective detection of natural gas leaks via evaluation of geobotanical stress signatures. The remote sensing techniques being developed employ advanced spectrometer systems that produce visible and near-infrared reflected light images with spatial resolution of 1 to 3 meters in 128 wavelength bands. This allows for the discrimination of individual species of plants as well as geological and man-made objects, and permits the detection of biological impacts of methane leaks or seepages in large, complicated areas. The techniques employed do not require before-and-after imagery because they use the spatial patterns of plant species and health variations present in a single image to distinguish leaks. Also, these techniques should allow discrimination between the effects of small leaks and the damage caused by human incursion or natural factors such as storm runoff, landslides and earthquakes. Because plants in an area can accumulate doses of leaked materials, species spatial patterns can record time-integrated effects of leaked methane. This can be important in finding leaks that would otherwise be hard to detect by direct observation of methane concentrations in the air. This project is developing remote sensing methods of detecting, discriminating, and mapping the effects of natural gas leaks from underground pipelines. The current focus is on the effects that increased methane soil concentrations, created by the leaks, will have on plants. These effects will be associated with extreme soil CH4 concentrations, plant sickness, and even death. 
Similar circumstances have been observed and studied in the effects of excessive CO2 soil concentrations at Mammoth Mountain, near Mammoth Lakes, California. At the Mammoth Mountain site, the large CO2 soil concentrations are due to the volcanic rumblings of the magma still active below the mountain. At more subtle levels, this research has been able to map tree stress over all of Mammoth Mountain using airborne hyperspectral imagery. These plant stress maps match, and greatly extend into surrounding regions, the on-ground CO2 emission mapping done by the USGS in Menlo Park, California. In addition, vegetation health mapping, along with altered-mineralization mapping at Mammoth Mountain, reveals subtle hidden faults. These hidden faults are pathways for potential CO2 leaks, at least near the surface, over the entire region. The methods being developed use airborne hyperspectral and multispectral high-resolution imagery and very high resolution (0.6 meter) satellite imagery. The team has identified and worked with commercial providers of both airborne hyperspectral imagery and high-resolution satellite imagery acquisitions. Both offer competent image data post-processing, so that eventually the ongoing surveillance of pipeline corridors can be contracted commercially. Current work under this project is focused on detecting and quantifying natural gas pipeline leaks, using hyperspectral imagery from airborne or satellite-based platforms, through evaluation of plant stress. Lawrence Livermore National Laboratory (LLNL) – project management and research products; NASA Ames – development of the UAV platform used to carry the hyperspectral payload; HyVista Corporation – development and operation of the HyMap hyperspectral sensor. The use of geobotanical plant stress signatures from hyperspectral imagery potentially offers a unique means of detecting and quantifying natural gas leaks from the U.S. 
pipeline infrastructure. The method holds the potential to cover large expanses of pipeline with minimal manual effort, reducing the likelihood that a leak would go undetected. By increasing the effectiveness and efficiency of leak detection, the amount of gas leaked from a site can be reduced, resulting in decreased environmental impact from fugitive emissions, increased safety and reliability of gas delivery, and an increase in overall available gas, as less product is lost from the lines. The method chosen for testing these techniques was to image the area surrounding known gas pipeline leaks. After receiving notice and location information for a newly discovered leak from research collaborator Pacific Gas and Electric (PG&E), researchers determined the area above the buried pipeline to be scanned, including some surrounding areas thought to be outside the influence of any methane that might percolate to within root depth of the surface. Flight lines were designed for the airborne acquisition program, and researchers used a global positioning system (GPS) and digital cameras to visually record the soils, plants, minerals, waters, and manmade objects in the area while the airborne imagery was acquired. After the airborne imagery set for all flight lines was received (including raw data, data corrected to reflectance including atmospheric absorptions, and georectification control files), the data was analyzed using commercial computer software (ENVI) by a team of researchers at the University of California, Santa Cruz (UCSC), Lawrence Livermore National Laboratory (LLNL), and one of the acquisition contractors. - Created an advanced Geographic Information System (GIS) that will be able to provide dynamic integration of airborne imagery, satellite imagery, and other GIS information to monitor pipelines for geobotanical leak signatures. 
- Used the software to integrate hyperspectral imagery, high-resolution satellite imagery, and digital elevation models of the area around a known gas leak to determine if evidence of the leak could be resolved. - Helped develop a hyperspectral imagery payload for use on an unmanned aerial vehicle developed by NASA-Ames. - Participated in the DOE-NETL sponsored natural gas pipeline leak detection demonstration in Casper, Wyoming on September 13-17, 2004, using both the UAV hyperspectral payload (~1,000 ft) and the HyVista hyperspectral platform (~5,000 ft) to survey for plant stress. Researchers used several different routines available within the ENVI program suite to produce “maps” of plant species types, plant health within species types, soil types, soil conditions, water bodies, water contents such as algae or sediments, mineralogy of exposed formations, and manmade objects. These maps were then studied for relative plant health patterns, altered mineral distributions, and other categories. The researchers then returned to the field to verify and further understand the mappings, fine-tune the results, and produce more accurate maps. Since the maps are georectified and the pixel size is 3 meters, individual objects can be located using the maps and a handheld GPS. These detailed maps show areas of existing anomalous conditions, such as plant kills, linear species modifications caused by subtle hidden faults, and modifications of the terrain due to pipeline work or encroachment. They are also the “baseline” that can be used to chart any future changes by re-imaging the area routinely to monitor and document any effects caused by significant methane leakage. The sensors used for image acquisition are hyperspectral scanners, one of which provides 126 bands across the reflective solar wavelength region of 0.45–2.5 µm with contiguous spectral coverage (except in the atmospheric water vapor bands) and bandwidths between 15 and 20 nm. 
This sensor operates on a 3-axis gyro-stabilized platform to minimize image distortion due to aircraft motion and provides a signal-to-noise ratio >500:1. Geo-location and image geo-coding are achieved with an on-board Differential GPS (DGPS) and an integrated IMU (inertial measurement unit). During a DOE-NETL sponsored natural gas leak detection demonstration at the National Petroleum Reserve 3 (NPR3) site of the Rocky Mountain Oilfield Testing Center (RMOTC) outside Casper, Wyoming, the project used hyperspectral imaging of vegetation to sense plant stress related to the presence of natural gas on a simulated pipeline, using actual natural gas releases. The spectral signature of sunlight reflected from vegetation was used to determine vegetation health. Two different platforms were used for imaging the virtual pipeline path: a Twin Otter aircraft flying at an altitude of about 5,000 feet above ground level that imaged the entire site in strips, and an unmanned autonomous vehicle (UAV) flying at an altitude of approximately 1,000 feet above ground level that imaged an area surrounding the virtual pipeline. The manned hyperspectral imaging took place on two days, September 9 and September 15. The underground leaks were started on August 30, to allow time for the methane from the leaks to saturate the soils and produce plant stress by excluding oxygen from the plant root systems. On both days, the entire NPR3-RMOTC site was successfully imaged. At that time of year, the vegetation at NPR3-RMOTC was largely dormant, except in the gullies where there was some moisture. Therefore, the survey looked for unusually stressed plant “patches” in the gullies as possible leak points. Several spots, each several pixels in diameter, were found in the hyperspectral imagery with the spectral signature typical of sick vegetation, in locations in the gullies or ravines along the virtual pipeline route. 
Because of the limited vegetation along the test route, detection of natural gas leaks through imaging of plant stress met with limited success. The technique did demonstrate an ability to show plant stress in areas near leak sites, but was less successful in determining overall leak severity from those results. In areas with denser vegetation coverage and less dormant plant life, the method still shows promise. [Figures: airborne hyperspectral imagery unit – close-up; airborne hyperspectral imagery unit – on plane] Overall results from the DOE-NETL sponsored natural gas leak detection demonstration can be found in the demonstration final report [PDF-7370KB]. Current Status and Remaining Tasks: All work under this project has been completed. Project Start: August 13, 2001 Project End: December 31, 2005 DOE Contribution: $966,900 Performer Contribution: $0 NETL – Richard Baker (email@example.com or 304-285-4714) LLNL – Dr. William L. Pickles (firstname.lastname@example.org or 925-422-7812) DOE Leak Detection Technology Demonstration Final Report [PDF-7370KB] DOE Fossil Energy Techline: National Labs to Strengthen Natural Gas Pipelines' Integrity, Reliability Status Assessment [PDF-26KB]
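The plant-stress mapping described above rests on band combinations of reflected light. As a minimal illustration (a generic vegetation index, not the project's actual ENVI routines; the reflectance values are assumed, not measured), the widely used NDVI can be computed per pixel from near-infrared and red reflectance:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    Healthy plants reflect strongly in the near infrared and absorb red
    light, so NDVI drops where vegetation is stressed or dying."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Illustrative reflectance values (assumed):
healthy = ndvi(0.50, 0.08)   # roughly 0.72
stressed = ndvi(0.25, 0.12)  # roughly 0.35
```

Applied to a georectified image, a map of anomalously low-NDVI patches along a pipeline corridor would flag candidate leak locations for field verification.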
Most exercise-related injuries have the same basic cause - the overstressing of muscles, tendons, ligaments, bones, and other tissue. With sufficient precautions and care, risks can be minimized. Warming up slowly and cooling down properly can help prevent many stress injuries. To be effective, your warm-up and cool-down exercises should use the same muscles as your main exercise. For example, if you jog, begin by walking for several minutes, then jog slowly, before breaking into a full stride. Do this before and after your regular exercise. Every athlete should include a 15-minute warm up and cool down program as part of the workout. This will increase flexibility, reduce muscle soreness, and improve overall performance. Other good principles to follow during exercise are: know your body's limitations and warning signals; drink plenty of water; and never combine heavy eating with heavy exercising. For more information on the benefits of warming up and cooling down, consult a physician.
Autism and the Recovery Act Service in the Field: National Health Service Corps (NHSC) The NHSC is a network of primary health care professionals working in underserved communities across the country. To support their service, the Recovery Act allocated funding to the NHSC to provide more than 4,000 clinicians with financial support in the form of loan repayment. One NHSC loan repayor receiving Recovery Act funds has become involved in treatment/support of ASD patients. Andrea Kinlen, Ph.D., a licensed clinical psychologist working in McPherson, Kansas, is working with patients and families to provide testing and therapy services for ASD patients. As Dr. Kinlen continued to spend time with patients, learning about their struggles and telling them they weren’t alone, an idea for a local support group emerged. Encouraged by the support of her supervisor and funding provided by a partial community grant, Dr. Kinlen engaged the families she counseled, other clinics, and local schools. As families came together, they were able to share stories and discuss challenges. Although the ages of the children and levels of severity vary, members of the group have found much common ground. Autism spectrum disorder (ASD) is a group of complex neurodevelopmental disorders characterized by social impairments; communication difficulties; and restricted, repetitive, and stereotyped patterns of behavior. These characteristics can range in impact from mild to significantly disabling. The Centers for Disease Control and Prevention estimates that an average of 1 in 110 children in the United States has ASD. Symptoms usually begin to appear before age three and can cause delays or problems that continue through adulthood. Early detection of ASD and intervention can greatly increase a child’s ability to learn new skills and improve overall quality of life. The cost of ASD to affected people, their families, and society is enormous. 
Children with ASD have a wide range of healthcare and services needs, and their families typically lose income, often as a result of one parent leaving the workforce in order to care for and meet his or her child’s special health and educational needs. A great majority of adults with ASD struggle with ongoing and mostly unmet needs for employment, housing, services, and supports. Lifetime costs to care for an individual with ASD have been estimated to be $3.2 million. With funding from the American Recovery and Reinvestment Act (Recovery Act), the U.S. Department of Health and Human Services has been able to accelerate work in promising areas of ASD research.* National Institutes of Health (NIH) NIH invested $122 million in Recovery Act funds for groundbreaking research on ASD that otherwise would not have been possible, in areas such as screening, early detection, potential interventions and therapeutics, and in revealing the precise causes and mechanisms underlying this disorder (which are still largely unknown). Research area highlights include: - Aiding Diagnosis: A two-site study at the University of Michigan-Ann Arbor and the Cincinnati Children’s Hospital is adapting the interview tool that is the current gold standard for diagnosing ASD into a brief parent interview that can be done over the telephone. This new tool may help reduce screening costs, which could mean that more children are able to be screened, and that screening could occur earlier in a child’s life, leading to starting treatments sooner for the child. Additionally, reduced costs would help researchers to quickly identify potential participants for ASD studies. As part of the research process, study investigators performed free ASD screenings in Ohio and Michigan to test their changes to the interview tool. These screenings have reached children from many families for whom a screening might otherwise have been too costly to afford. Read more. 
- Addressing Disparities: Investigators at Florida State University are exploring the significant racial and ethnic minority disparities that exist in the early diagnosis of ASD, often delaying minority access to beneficial early interventions and services. The results of this research will lead to culturally sensitive screening and evaluation methods that may decrease the age at which all children with ASD are diagnosed. Read more. - Possible Causes: Although the exact causes of ASD are still unknown, research suggests an interaction between environmental factors and genetic predisposition. A number of Recovery Act-funded studies are using advanced DNA sequencing technology that allows for quick study of many genes at a time, while others are focusing on a variety of other potential causes. For example, one study at the University of California-Davis is exploring the role of infection during pregnancy on raising the risk of ASD in a mouse model. Research suggests that a mother’s immune response to infection may affect levels of immune molecules in the fetal brain, impacting brain development and possibly contributing to ASD. Read more. - Improving Interactions: Most children with ASD seem to have no reaction to other people or may respond atypically to others’ emotions. Such behavior can isolate children with ASD from their peers. NIH awarded a Challenge grant to support the development and testing of a new computer-assisted program at the University of California, San Diego to train children with ASD how to respond to others’ facial expressions (for example, widening one’s eyes, wrinkling one’s nose, etc.), and how to produce facial expressions conveying particular emotions to others. Read more. - Potential Therapeutics: In another study, researchers at the Mount Sinai School of Medicine will determine whether a certain hormone improves social cognition in adults with ASD—a potentially new treatment for social impairment linked to ASD. 
There has been little headway in the development of pharmacological treatments for social impairment, and it is widely acknowledged that such treatments are needed as an alternative or addition to behavioral interventions. Read more. - Skill Development: Employment can provide greater independence to people with ASD, but symptoms of the disorder often pose major social and communication barriers. In response to this issue, researchers at Do2Learn.com developed the free JobTIPS website. JobTIPS presents job-seeking resources to youth with ASD, and also provides detailed explanations of how to behave in specific situations, such as what to say and not say to a potential employer, and how to disclose their diagnosis. Two autism research centers at the University of North Carolina at Chapel Hill and Emory University, Atlanta, will help evaluate the effectiveness of JobTIPS in helping teens and young adults with ASD to learn new job-related skills and apply them in real-world situations. Read more. Read more about NIH’s Recovery Act-funded investments in ASD and learn how these investments are moving science forward sooner than anticipated in addressing some of the most significant challenges to understanding and treating ASD.

Agency for Healthcare Research & Quality (AHRQ)

AHRQ is using $1.4 million in Recovery Act funding for research that seeks to provide patients, clinicians, and others with evidence-based information to make informed decisions about health care—including research on ASD intervention strategies: - Communicating ASD Treatments: There is no cure and no consensus regarding which intervention strategy is most effective for treating ASD. Given the complexity of ASD and associated therapies, it is clear that teachers, clinicians, and families need guidance in selecting appropriate treatments.
AHRQ-funded research is developing, implementing, and evaluating strategies for disseminating information online about treatments for autism and ASD to over 16,000 individuals in key clinician, parent, and teacher audiences. This novel approach will accelerate the translation of new scientific evidence on ASD therapeutics into practice and decision-making in families, the education system, the health care system, and public policy. Read more.

References

Centers for Disease Control and Prevention. Prevalence of autism spectrum disorders - Autism and Developmental Disabilities Monitoring Network, United States, 2006. Morbidity and Mortality Weekly Report (MMWR) Surveillance Summaries. December 2009;58(10):1-20.

Montes G, Halterman JS. Association of childhood autism spectrum disorders and loss of family income. Pediatrics. April 2008;121(4):e821-6.

Ganz ML. The lifetime distribution of the incremental societal costs of autism. Archives of Pediatrics & Adolescent Medicine. April 2007;161(4):343-9.

*Projects cited in this report are examples of Recovery Act funding being applied to autism, not a comprehensive listing of Recovery Act-funded projects.
Karuk Tribe: Learning from the First Californians for the Next California

Editor's Note: This is part of a series, Facing the Climate Gap, which looks at grassroots efforts in California low-income communities of color to address climate change and promote climate justice. This article was published in collaboration with GlobalPossibilities.org.

The three sovereign entities in the United States are the federal government, the states and indigenous tribes, but according to Bill Tripp, a member of the Karuk Tribe in Northern California, many people are unaware of both the sovereign nature of tribes and the wisdom they possess when it comes to issues of climate change and natural resource management. “A lot of people don’t realize that tribes even exist in California, but we are stakeholders too, with the rights of indigenous peoples,” says Tripp. Tripp is an Eco-Cultural Restoration specialist at the Karuk Tribe Department of Natural Resources. In 2010, the tribe drafted an Eco-Cultural Resources Management Plan, which aims to manage and restore “balanced ecological processes utilizing Traditional Ecological Knowledge supported by Western Science.” The plan addresses environmental issues that affect the health and culture of the Karuk tribe and outlines ways in which tribal practices can contribute to mitigating the effects of climate change. Before climate change became a hot topic in the media, many indigenous and agrarian communities, because of their dependence upon and close relationship to the land, began to notice troubling shifts in the environment such as intense drought, frequent wildfires, scarcer fish flows and erratic rainfall. There are over 100 government-recognized tribes in California, which represent more than 700,000 people. The Karuk is the second largest Native American tribe in California and has over 3,200 members.
Their tribal lands include over 1.48 million acres within and around the Klamath and Six Rivers National Forests in Northwest California. Tribes like the Karuk are among the hardest hit by the effects of climate change, despite their traditionally low-carbon lifestyles. The Karuk in particular have experienced dramatic environmental changes in their forestlands and fisheries as a result of both climate change and misguided Federal and regional policies. The Karuk have long depended upon the forest to support their livelihood, cultural practices and nourishment. While wildfires have always been a natural aspect of the landscape, recent studies have shown that fires in northwestern California forests have risen dramatically in frequency and size due to climate-related and human influences. According to the California Natural Resources Agency, fires in California are expected to increase 100 percent due to increased temperatures and longer dry seasons associated with climate change. Among the most damaging human influences on the Karuk are logging activities, which have depleted old-growth forests, and fire suppression policies created by the U.S. Forest Service in the 1930s that have limited cultural burning practices. Tripp says these policies have been detrimental to tribal traditions and the forest environment. “It has been huge to just try to adapt to the past 100 years of policies that have led us to where we are today. We have already been forced to modify our traditional practices to fit the contemporary political context,” says Tripp. Further, the construction of dams along the Klamath River by PacifiCorp (a utility company) has impeded access to salmon and other fish that are central to the Karuk diet. Fishing regulations have also had a negative impact.
Though the Karuk’s dependence on the land has left them vulnerable to the projected effects of climate change, it has also given them and other indigenous groups incredible knowledge to impart to western climate science. Historically, though, tribes have been largely left out of policy processes and decisions. The Karuk decided to challenge this historical pattern of marginalization by formulating their own Eco-Cultural Resources Management Plan. The Plan provides over twenty “Cultural Environmental Management Practices” that are based on traditional ecological knowledge and the “World Renewal” philosophy, which emphasizes the interconnectedness of humans and the environment. Tripp says the Plan was created in the hope that knowledge passed down from previous generations will help strengthen Karuk culture and teach the broader community to live in a more ecologically sound way. “It is designed to be a living document…We are building a process of comparative learning, based on the principles and practices of traditional ecological knowledge, to revitalize culturally relevant information as passed through oral transmission and intergenerational observations,” says Tripp. One of the highlights of the plan is to re-establish traditional burning practices in order to decrease fuel loads and the risk of more severe wildfires when they do happen. Traditional burning was used by the Karuk to burn off specific types of vegetation and promote continued diversity in the landscape. Tripp notes that these practices are an example of how humans can play a positive role in maintaining a sound ecological cycle in the forests. “The practice of utilizing fire to manage resources in a traditional way not only improves the use quality of forest resources, it also builds and maintains resiliency in the ecological process of entire landscapes,” explains Tripp.
Another crucial aspect of the Plan is the life cycle of fish, like salmon, that are central to Karuk food traditions and ecosystem health. Traditionally, the Karuk regulated fishing schedules to allow the first salmon to pass, ensuring that those most likely to survive made it to prime spawning grounds. There were also designated fishing periods and locations to promote successful reproduction. Tripp says regulatory agencies have established practices that are harmful to this cycle. “Today, regulatory agencies permit the harvest of fish that would otherwise be protected under traditional harvest management principles and close the harvest season when the fish least likely to reach the very upper river reaches are passing through,” says Tripp. The Karuk tribe is now working closely with researchers from universities such as the University of California, Berkeley and the University of California, Davis, as well as public agencies, so that this traditional knowledge can one day be accepted by mainstream and academic circles dealing with climate change mitigation and adaptation practices. According to the Plan, these land management practices are more cost effective than those currently practiced by public agencies; and, if implemented, they will greatly reduce taxpayer cost burdens and create employment. The Karuk hope to create a workforce development program that will hire tribal members to implement the plan’s goals, such as multi-site cultural burning practices. The Plan still has a long way to go to full realization and Federal recognition. According to the National Indian Forest Resources Management Act and the National Environmental Protection Act, it must go through a formal review process. Besides that, the Karuk Tribe is still solidifying funding to pursue its goals. The work of California’s environmental stewards will always be in demand, and the Karuk are taking the lead in showing how community wisdom can be used to generate an integrated approach to climate change.
Such integrated and community-engaged policy approaches are rare throughout the state but are emerging in other areas. In Oakland, for example, the Oakland Climate Action Coalition engaged community members and a diverse group of social justice, labor, environmental, and business organizations to develop an Energy and Climate Action Plan that outlines specific ways for the City to reduce greenhouse gas emissions and create a sustainable economy. In the end, Tripp hopes the Karuk Plan will not only inspire others and address the global environmental plight, but also help to maintain the very core of his people. In his words: “Being adaptable to climate change is part of that, but primarily it is about enabling us to maintain our identity and the people in this place in perpetuity.” Dr. Manuel Pastor is Professor of Sociology and American Studies & Ethnicity at the University of Southern California, where he also directs the Program for Environmental and Regional Equity and co-directs USC’s Center for the Study of Immigrant Integration. His most recent books include Just Growth: Inclusion and Prosperity in America’s Metropolitan Regions (Routledge 2012; co-authored with Chris Benner), Uncommon Common Ground: Race and America’s Future (W.W. Norton 2010; co-authored with Angela Glover Blackwell and Stewart Kwoh), and This Could Be the Start of Something Big: How Social Movements for Regional Equity are Transforming Metropolitan America (Cornell 2009; co-authored with Chris Benner and Martha Matsuoka).
Constantinople Agreement

Constantinople Agreement, (March 18, 1915), secret World War I agreement between Russia, Britain, and France for the postwar partition of the Ottoman Empire. It promised to satisfy Russia’s long-standing designs on the Turkish Straits by giving Russia Constantinople (Istanbul), together with a portion of the hinterland on either coast in Thrace and Asia Minor. Constantinople, however, was to be a free port. In return, Russia consented to British and French plans for territories or for spheres of influence in new Muslim states in the Middle Eastern parts of the Ottoman Empire. This first of a series of secret treaties on the “Turkish question” was never carried out because the Dardanelles campaign failed and because, when the British navy finally did reach Istanbul in 1918, Russia had made a separate peace with Germany and declared itself the enemy of all bourgeois states, France and Britain prominent among them.
Our newest title: Busy with Bugs: Extremely Interesting Things to Do with Bugs, illustrated by Margaret Brandt. Winner of a National Best Books 2010 Award, Children's Educational category, USA Book News. For kids who love bugs and bug adventures, Busy with Bugs is a guide for exploring the miniature world of tiny animals. This book buzzes with information, fun facts, and 160 extremely interesting things to do with bugs. It teaches children to explore the anatomy, behavior, life cycles, and habitats of bugs in their own backyards and school yards. Reference pages include a glossary packed with information, a list of useful resources, and an invaluable guide. Trickle Creek Books is a small publisher committed to "teaching kids to care for the Earth." Our books are designed to motivate children to listen to, learn from, and love the world of nature. We believe that children who learn to love our Earth today will protect it tomorrow. On our web site, you can look at our books and find out how to order them. You'll find fascinating information, and projects and activities to try at home or school. We are working to develop our site into an ideal place for kids - a place where they can have fun learning about our Earth. Visit the Squirrel House, a photo journal about a family of squirrels which live at Trickle Creek. You'll find plans for building a squirrel house, other squirrel activities, and photos of baby squirrels. Visit Cammie and Cooper to download a complete teacher's guide (preK-2) for teaching about rainforests. You'll also find free paper dolls, coloring pages, bookmarks, a pattern for making a bromeliad and a poison-arrow frog, and an interview with Toni Albert, the author of Saving the Rain Forest with Cammie and Cooper. Buy the hardcover edition of I Heard the Willow Weep for $6.00 (retail price: $15.95). This is the perfect book for Earth Day - and Earth Day is every day! You'll find it in our Half-Price Warehouse.
Cannibalism: The Ancient Taboo in Modern Times

Sexual cannibalism is considered to be a psychosexual disorder, which involves a person sexualizing the consumption of another person's flesh. This does not necessarily suggest that the cannibal achieves sexual gratification only in the act of consuming human flesh, but also may release sexual frustration or pent-up anger. Sexual cannibalism is considered to be a form of sexual sadism and is often associated with the act of necrophilia (sex with corpses). There have been several high-profile cases which have involved sexual cannibalism, including those of Andrei Chikatilo, Edward Gein, Albert Fish, Armin Meiwes and Jeffrey Dahmer. During the 1920s, Americans were confronted with the horrors of Albert Fish, who was said to have raped, murdered and eaten a number of children. Fish was a sexual cannibal in the truest sense of the term and claimed to have experienced enormous sexual pleasure when he imagined eating a person or when he actually indulged his fantasies. Andrei Chikatilo, a Russian serial killer, was responsible for the murders of scores of young boys and girls. During most of his life, Chikatilo suffered from impotency and was only able to achieve sexual gratification from the torture and murder of other people. He would often mutilate and then consume the flesh of his victims, including the breasts, genitalia and internal sex organs, as well as other body parts. It is possible that he also achieved sexual gratification when cannibalizing. Chikatilo claimed that he was disgusted by the "loose morals" of many of his victims, who served as painful reminders of his own sexual incompetence. Moira Martingale writes in Cannibal Killers that many of the murders Chikatilo committed came after viewing sexually explicit or violent videos.
Edward Gein, a farmer from Plainfield, Wisconsin, was believed to have killed at least three people, including his brother, a barkeeper named Mary Hogan and the owner of the local hardware store, Bernice Worden. In 1957, police searched Gein's home and found the body of Worden along with the remains of over fifteen other women. A majority of the remains found at the crime scene were robbed from a nearby cemetery. Gein was believed to have had sexual contact with the corpses. He was also an admitted transvestite, who found delight in dismembering the bodies and peeling away the skin of the corpses so that he could wear them around the house. Gein was known to have cannibalized some of the bodies, including Worden's; her heart was in a pan on the stove at the time police conducted their search of the house. Whether Gein sexualized the consumption of his victims was unclear. However, there was a strong relationship between his necrophilia and cannibalistic behavior. Intriguingly, some people who claim to be cannibals have admitted to feeling a sense of euphoria and/or intense sexual stimulation when consuming human flesh. In an article written by Clara Bruce titled Chew On This: You're What's for Dinner, anthropophagists compared eating human flesh with having an orgasm. The experience was further believed to cause an out-of-body experience, with effects comparable to taking mescaline. According to Lesley Hensel, author of Cannibalism as a Sexual Disorder, eating human flesh can cause an increase in levels of vitamin A and amino acids, which can cause a chemical effect on the blood and in the brain. This chemical reaction could possibly lead to the altered states that some cannibals have claimed to have experienced. However, this theory has not been substantiated by scientific evidence.
In Fascination with Cannibalism has Sexual Roots, Josh Cannon writes about psychologist Steven Scher and his team who conducted one of the only known studies on sex and cannibalism at Eastern Illinois University in 2002. The study surveyed several groups of people who were asked questions pertaining to cannibalism and sexual interests. The results of the study found that people were more likely to eat someone that they were sexually attracted to than not. This suggests that there might be a significant sexual component in the practice of cannibalism.
When someone tells you that there was no successful French tank, especially in WWI, don't you believe him! The Renault FT or Automitrailleuse à chenilles Renault FT modèle 1917, inexactly known as the FT-17 or FT17, was a French light tank; it is among the most revolutionary and influential tank designs in history. The FT was the first operational tank with its armament in a fully rotating turret, and its configuration with the turret on top, engine in the back and the driver in front became the conventional one, repeated in most tanks to this day; at the time it was a revolutionary innovation, causing armour historian Steven Zaloga to describe the type as "the world's first modern tank". Studies on the production of a new light tank were started in May 1916 by the famous car producer Louis Renault. The evidence strongly suggests that Renault himself drew up the preliminary design, being unconvinced that a sufficient power-to-weight ratio could be achieved for the medium tanks requested by the military. One of his most talented designers, Rodolphe Ernst-Metzmaier, prepared the final drawings. Though the project was far more advanced than the first two French tanks about to enter production, the Schneider CA1 and the heavy St. Chamond, Renault at first had great trouble getting it accepted. Even after the first British use of tanks, on 15 September 1916, when the French public called for the deployment of their own chars, the production of the light tank was almost cancelled in favour of a superheavy tank (the later Char 2C). However, with the unwavering support of Brigadier General Jean-Baptiste Eugène Estienne (1860–1936), the "Father of the Tanks", and the successive French Commanders in Chief, who saw light tanks as a more feasible and realistic option, Renault was at last able to proceed with the design. However, competition with the Char 2C was to last until the very end of the war. The prototype was slowly refined during the first half of 1917.
Early production FTs were often plagued by radiator fan belt and cooling system problems, a characteristic that persisted throughout World War I. Only 84 were produced in 1917, but 2,697 were delivered before the end of the war. At least 3,177 were produced in total, perhaps more; some estimates go as high as 4,000 for all versions combined. However, 3,177 is the delivery total to the French Army; 514 were perhaps directly delivered to the U.S. Army and three to Italy, giving a probable total production number of 3,694. The tanks at first had a round cast turret; later came either an octagonal turret or a still later rounded turret of bent steel plate (called the Berliet turret after one of the many co-producing factories). The latter two could carry a Puteaux SA 18 gun or a 7.92 mm Hotchkiss machine gun. In the U.S., this tank was built under licence as the Six Ton Tank Model 1917 (950 built, 64 before the end of the war). There is a most persistent myth about the name of the tank: "FT" is often supposed to have meant Faible Tonnage or, even more fancifully, Franchisseur de Tranchées (trench crosser). In reality, every Renault prototype was given a combination code; it just so happened that it was the turn of "FT". Another mythical name is "FT-18" for the gun tank. A 1918 maintenance manual describes the FT as the Char d'Assault 18HP, a reference to the horsepower of the engine. FTs captured and re-used by the Germans in World War II were re-designated Panzerkampfwagen FT 18. Either of these might have led to the confusion. Also, in "FT 75 BS" the "BS" does not mean Batterie de Support but "Blockhaus Schneider", a reference to the short 75 mm Schneider gun with which it was fitted. The FT was widely used by the French and the US in the later stages of World War I, after 31 May 1918. It was cheap and well-suited for mass production.
It reflected an emphasis on quantity, both on a tactical level (Estienne proposed to overwhelm the enemy defences with a "swarm" of light tanks) and on a geostrategic level (the Entente was thought to be able to gain the upper hand by outproducing the Central Powers). A goal was set of 12,260 to be manufactured (4,440 of them in the USA) before the end of 1919. After the war, FTs were exported to many countries (Poland, Finland, Estonia, Lithuania, Romania, Yugoslavia, Czechoslovakia, Switzerland, Belgium, the Netherlands, Spain, Brazil, Turkey, Iran, Afghanistan and Japan). As a result, FT tanks were used by most nations with armoured forces, invariably as their first tank type, including the United States. They took part in many later conflicts, such as the Russian Civil War, the Polish-Soviet War, the Chinese Civil War, the Rif War and the Spanish Civil War. FT tanks were also used in the Second World War, among other places in Poland, Finland, France and the Kingdom of Yugoslavia, although they were completely obsolete by then. In 1940 the French army still had eight battalions equipped with 63 FTs each and three independent companies with ten each, for a total organic strength of 534, all with machine guns. Many smaller units, partially raised after the invasion, also used the tank. This has given rise to the popular myth that the French had no modern equipment at all; in fact they had more modern tanks than the Germans; the French suffered from tactical and strategic weaknesses rather than from equipment deficiencies. When the German drive to the Channel cut off the best French units, as an expedient measure the complete French materiel reserve was sent to the front; this included 575 FTs. Earlier, 115 sections of FTs had been formed for airbase defence. The Wehrmacht captured 1,704 FTs. A hundred were again used for airfield defence, and about 650 for patrolling occupied Europe. Some of the tanks were also used by the Germans in 1944 for street-fighting in Paris.
By this time they were hopelessly out of date. The FT was the ancestor of a long line of French tanks: the FT Kégresse, the NC1, the NC2, the Char D1 and the Char D2. The Italians produced as their standard tank the FIAT 3000, a moderately close copy of the FT. The Soviet Red Army captured fourteen burnt-out Renaults from White Russian forces and rebuilt them at the Krasnoye Sormovo Factory in 1920. The Soviets claimed to have originally manufactured these Russkiy Reno tanks, but they actually produced only one exact copy, named 'Freedom Fighter Comrade Lenin'. When Stalin began the arms race of the Thirties, the first completely Soviet-designed tank was the T-18, a derivation of the Renault with sprung suspension. In all, the FT was used by Afghanistan, Belgium, Brazil, the Republic of China, Czechoslovakia, Estonia, Finland, France, Nazi Germany, Iran, Japan, Lithuania, the Netherlands, Poland, Romania, the Russian White movement, the Soviet Union, Spain, Sweden, Switzerland, Turkey, Norway, the United Kingdom, the United States and the Kingdom of Yugoslavia.
Cardio and weight training are two important components of well-balanced fitness plans. Knowing which types of exercises fall into which category can help you construct your workouts more effectively, and can give you more options so you don’t get bored with your workouts. When designing your fitness plan, consider adding some stretching and flexibility training. Simply add a few minutes of stretching at the end of your cardio and weight training workouts. Cardio, or aerobic, exercises require you to move the large muscle groups in your hips, legs and arms. The movement must be continuous and rhythmic, and you must move for a sustained period. The Centers for Disease Control and Prevention recommends that adults get at least 2 1/2 hours of moderate-intensity cardio exercise each week. Alternately, you can perform an hour and 15 minutes of vigorous cardio exercise. By regularly performing cardio exercise, you can improve your cardiovascular health, manage your weight more easily and enjoy a sense of well-being. Examples of Cardio Exercises Examples of cardio exercise include basketball, biking, brisk walking, dancing, jogging, jumping rope, rowing, running, swimming, tennis and water aerobics. Even some household chores -- such as mowing the lawn -- can count as cardio. The intensity with which you perform these activities determines whether they are moderate or vigorous. Moderate-intensity cardio exercises will raise your heart rate and make you breathe faster, while still allowing you to talk. When performing vigorous cardio exercises, you will find it difficult to say more than just a few words before pausing to breathe. Weight Training Exercises In weight-training exercises, you generally focus on one muscular group at a time, doing enough repetitions of the exercise to tire the muscles. The Centers for Disease Control and Prevention recommends that adults do weight-training exercises at least twice a week. 
Ideally, target all your major muscle groups: chest, shoulders, arms, abdomen, back, hips and legs. By performing weight-training exercises, you can build strength in your muscles, increase the amount of lean muscle in your body, manage your weight more easily and lower your risk of injury. Examples of Weight-Training Exercises You can perform weight-training exercises with free weights -- dumbbells and barbells. Examples of these exercises include biceps curls and squats. Weight machines offer other options for weight training. You can perform leg presses, standing calf raises, chest flyes and pull-down exercises on these machines. Other types of weight-training exercises require you to move your own body weight. These include triceps dips, situps and pushups. Although technically not weight-training exercises, resistance band exercises offer more options for strength training. You can perform variations on traditional weight-training exercises -- such as biceps curls, lunges, chest flyes and overhead presses -- with these resistance bands.
Potatoes are one of the easiest plants to grow because they are not extremely picky about the soil they are grown in. Growing the best potatoes, however, requires preparation and diligence during the growing stages. As potato plants spread and the potatoes start to grow, the gardener needs to make sure the tubers remain covered with soil. Chit (pre-sprout) the seed potatoes. This step is optional but recommended. Cut out the crown sprouts, or eyes, to allow the potato to grow sprouts from the shoulders and sides. To find the crown sprout, look closely at the potato for a cluster of more than four or five eyes. Cut out these eyes with a potato peeler and set the potatoes in an egg carton, crown side up. Check the potatoes every few days and mist them lightly with a water/fertilizer mix. When the sprouts have grown to about an inch, they are ready to plant. Clear the planting area of all weeds and debris, using a shovel and garden rake. Remove any rocks and roots, as they can interfere with the growth of the potatoes. Loosen the soil to a depth of approximately 1 foot. Spread a 2-inch layer of compost or planting mix onto the cleared soil and mix it in with a shovel. Dig into the soil with the shovel and turn the dirt several times until the compost and soil are mixed thoroughly. Plant the sprouted (or unsprouted) seed potatoes at a depth of about 1 inch and about 1 foot apart. If the seed potatoes are small, less than 2 oz. each, they do not need to be cut; if they are larger than 2 oz., cut the potatoes into chunks that include at least one eye each. Each chunk should be no smaller than 1 square inch. Water the potatoes thoroughly at least once each week. Over-watering can cause potatoes with black centers, but potatoes need to be watered consistently to avoid misshapen tubers. Check the leaves and stems daily for insect damage and treat as necessary.
Bring in dirt from other areas of the garden to bury the tubers as they grow in size; do not let the potatoes become uncovered while growing. Harvest the potatoes at any desired stage of growth. New potatoes are small, tender potatoes and are used in many recipes. Allow baking potatoes to grow larger.
Understanding SQL's underlying theory is the best way to guarantee that your SQL code is correct and your database schema is robust and maintainable. On the other hand, if you're not well versed in the theory, you can fall into several traps. In SQL and Relational Theory, author C.J. Date demonstrates how you can apply relational theory directly to your use of SQL. With numerous examples and clear explanations of the reasoning behind them, you'll learn how to deal with common SQL dilemmas, such as:
- Should database access be granted through views instead of base tables?
- Nulls in your database are causing you to get wrong answers. Why? What can you do about it?
- Could you write an SQL query to find employees who have never been in the same department for more than six months at a time?
- SQL supports "quantified comparisons," but they're better avoided. Why? How do you avoid them?
- Constraints are crucially important, but most SQL products don't support them properly. What can you do to resolve this situation?
Database theory and practice have evolved since Edgar Codd originally defined the relational model back in 1969. Independent of any SQL products, SQL and Relational Theory draws on decades of research to present the most up-to-date treatment of the material available anywhere. Anyone with a modest to advanced background in SQL will benefit from the many insights in this book.
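The null dilemma mentioned above is easy to demonstrate. Here is a minimal sketch using Python's built-in sqlite3 module; the `parts` table and its values are invented for illustration, not taken from the book. Intuitively, every part weighs either 10 or something other than 10, yet the query drops the row whose weight is NULL, because under SQL's three-valued logic both comparisons evaluate to UNKNOWN rather than TRUE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (pno TEXT, weight INTEGER)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [("P1", 10), ("P2", None), ("P3", 20)])

# "Every part weighs 10 or it doesn't" -- yet P2 is missing from the result,
# because NULL = 10 and NULL <> 10 both evaluate to UNKNOWN, not TRUE,
# and rows whose WHERE condition is UNKNOWN are filtered out.
rows = conn.execute(
    "SELECT pno FROM parts WHERE weight = 10 OR weight <> 10"
).fetchall()
print(rows)  # [('P1',), ('P3',)] -- P2 silently dropped
```

Adding an explicit `weight IS NOT NULL` test, or declaring the column NOT NULL in the first place, makes the intent visible; that kind of discipline is what the book's treatment of nulls argues for.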
Use in Native American music: Native Americans developed lingua francas in order to facilitate trade and social interaction; in these areas, song texts may feature words from a lingua franca. Many Native American songs employ vocables, syllables that do not have referential meaning. These may be used to frame words or may be inserted among them; in some cases, they constitute the entire song text. Vocables are a fixed...
Bipolar Disorder: Hypomanic Episodes

Hypomanic episodes can occur in people who have mood disorders. Hypomanic episodes are less severe than manic episodes, although a hypomanic episode can still interfere with your ability to function properly. Hypomania may be diagnosed if:
- A distinct period of elevated or irritable mood occurs in which the mood is clearly different from a regular nondepressed mood.
- Three or more of the following symptoms last for a significant period of time:
  - Inflated self-esteem or unrealistic feelings of importance
  - Decreased need for sleep (feels rested after only a few hours of sleep)
  - Racing thoughts or flight of ideas
  - Being easily distracted
  - An increase in goal-directed activity (work or personal)
  - Irresponsible behaviors that may have serious consequences, such as going on shopping sprees, engaging in increased sexual activity, or making foolish business investments
- The mood or behavior change is noticeable to others.
- The episode is not severe enough to cause impairment in social or job functioning and does not require hospitalization.
- The symptoms are not caused by substance abuse.
If you feel that you or someone you care about may be experiencing a hypomanic episode, contact your doctor to discuss the possible causes and the treatment options.

By: Healthwise Staff | Last Revised: March 1, 2012 | Medical Review: Patrice Burgess, MD - Family Medicine; Lisa S. Weinstock, MD - Psychiatry
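Structurally, the "three or more of the following symptoms" criterion is a simple counting rule. Below is a purely illustrative sketch of that counting logic only; it is not a diagnostic tool, and the symptom labels are invented shorthand for the items listed above:

```python
# Illustrative only -- real diagnosis involves all the criteria above,
# clinical judgment, and a clinician. This models just the count rule.
SYMPTOMS = {
    "inflated_self_esteem", "decreased_need_for_sleep", "racing_thoughts",
    "easily_distracted", "increased_goal_directed_activity",
    "irresponsible_behaviors",
}

def meets_symptom_threshold(reported):
    """True if three or more of the listed symptoms are reported."""
    return len(SYMPTOMS & set(reported)) >= 3

print(meets_symptom_threshold({"racing_thoughts", "easily_distracted"}))
# two symptoms: below the threshold
print(meets_symptom_threshold({"racing_thoughts", "easily_distracted",
                               "decreased_need_for_sleep"}))
# three symptoms: meets the threshold
```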
Are Black & White Colors? Is Black a Color? Is White a Color?

The answer to the question - "Are black and white colors?" - is one of the most debated issues about color. Ask a scientist and you'll get a reply based on physics: “Black is not a color, white is a color.” Ask an artist or a child with crayons and you'll get another: “Black is a color, white is not a color.” (Maybe!) There are four sections on this page that present the best answers.

# 1 - The First Answer: Color Theory #1 - Color as Light: Black is not a color. White is a color.
# 2 - The Second Answer: Color Theory #2 - Color as Pigment or Molecular Coloring Agents: Black is a color. White is not a color.

How Colors Exist

A basic understanding of how colors are created is the first step in providing correct answers. Here are two examples: The color of a tangible object is the result of pigments or molecular coloring agents. For example, the color of a red apple (in the illustration at the left) is the result of molecular coloring agents on the surface of the apple. Also, a painting of a red apple is the result of red pigments used to create the image. The colors of objects viewed on a television set or on a computer monitor are the result of colored light (in the illustration at the right). If you're not familiar with how colors are created by light, look at your monitor or television screen close up. Put your eye right up against the screen; a small magnifying glass might help. A simplified way to explain it is that the color of a red apple on a computer or television is created by photons of red light that are transmitted within the electronic system. It's also important to understand the concept of "primary" colors. The fundamental rule is that there are three colors that cannot be made by mixing other colors together. These three, red, blue, and yellow, are known as the primary colors.
Now that we've described two different categories of colors (pigment and light-generated) and have a definition of primary colors, the answer to whether black and white are colors can be answered.

Color Theory 1 - Color as Light (Additive Color Theory): Red, Green, and Blue (the primary colors of light). Are black and white colors when generated as light? (Illustration: black and white cats generated on a television - these colors are created by light.)

1. Black is the absence of color (and is therefore not a color). When there is no light, everything is black. Test this out by going into a photographic darkroom. There are no photons of light; in other words, there are no photons of colors.

2. White is the blending of all colors and is a color. Light appears colorless or white. Sunlight is white light that is composed of all the colors of the spectrum. A rainbow is proof: you can't see the colors of sunlight except when atmospheric conditions bend the light rays and create a rainbow. You can also use a prism to demonstrate this. Fact: The sum of all the colors of light adds up to white. This is additive color theory.

Color Theory 2 - Color as Pigment or Molecular Coloring Agents (Subtractive Color Theory): Red, Yellow, and Blue (the primary colors of pigments in the art world); Cyan, Magenta, and Yellow (the primary colors of inks in the printing industry). Are black and white colors when they exist as pigments or as molecular coloring agents? (Illustrations: black and white cats created by colored crayons - color generated by pigments; black and white cats whose fur color is the result of molecules.)

1. Black is a color. (Chemists will confirm this!) Here's a simple way to show how black is made: Combine all three primary colors (red, yellow, and blue) using liquid paint or even food coloring. You won't get a jet black, but the point will be clear.
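Both halves of the argument reduce to simple channel arithmetic: light adds (capped at full intensity), while ideal pigments subtract (only the light that every pigment reflects survives). A rough numeric sketch, using 0-255 RGB channels; real paints are messier than this, which is why mixing primaries gives a muddy dark rather than jet black:

```python
def add_light(*colors):
    """Additive mixing: sum each RGB channel, capped at 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

def mix_pigments(*colors):
    """Idealized subtractive mixing: keep only what all pigments reflect."""
    return tuple(min(c[i] for c in colors) for i in range(3))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_light(red, green, blue))  # (255, 255, 255): white light
print(add_light())                  # (0, 0, 0): no light at all is black

cyan, magenta, yellow = (0, 255, 255), (255, 0, 255), (255, 255, 0)
print(mix_pigments(cyan, magenta, yellow))  # (0, 0, 0): black, in theory
```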
The history of black pigments includes charcoal, iron metals, and other chemicals as the source of black paints. (Resource: History of Pigments.) Therefore, if someone argues that black is the absence of color, you can reply, “What is in a tube of black paint?” However, you must add the fact that black is a color when you are referring to the color of pigments and the coloring agents of tangible objects.

2. White is not a color... but... in some cases you could say that white is a color. The grey area: when you examine the pigment chemistry of white, ground-up substances (such as chalk and bone) or chemicals (such as titanium and zinc) are used to create the many nuances of white in paint, chalk, crayons - and even products such as Noxzema. It's worth noting that white paper is made by bleaching wood pulp. Therefore, you could say that white is a color in the context of pigment chemistry.

More information about CMYK primary colors: In theory, mixing equal amounts of three fully saturated primary colors should produce shades of grey or black. In the print industry, however, cyan, magenta and yellow tend to produce muddy brown colors. For this reason, a fourth "primary" pigment, black, is often used in addition to the cyan, magenta, and yellow inks.

Vision and Reflection

The final answer to whether black and white are colors takes other factors into consideration. Colors exist in the larger context of human vision. Consider the fact that there are three parts to the process of the perception of color: 1. The medium - the color as it exists as a pigment/colorant (such as the color of a tangible object) or as light (such as the color of an image on a television screen). 2. The sender - how the color is transmitted. 3. The receiver - how humans see color.
In other words, how we receive information about color. (If a tree falls in the forest and there is nobody around, does it make a sound? Does a color exist if there is no one to see it?) Are black and white colors? The best answer combines both of the theories described in Part 1 and Part 2. Pigments and coloring agents (as described in Part 1) are only half of the answer. Here's how we see color: The color of a tangible object originates as a molecular coloring agent on the surface of the apple. We see the color of an object because that object reflects “a color” to the eye. Every color is the effect of a specific wavelength (see Electromagnetic Color at Color Matters). In the case of the apple, we see the color red because the red apple reflects the specific wavelength of red (640 nm is red). The same theory applies to black and white. Are black and white colors?

1. Black is not a color; a black object absorbs all the colors of the visible spectrum and reflects none of them to the eyes. The grey area about black:
- A black object may look black, but, technically, it may still be reflecting some light. For example, a black pigment results from a combination of several pigments that collectively absorb most colors. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called "black." In reality, what appears to be black may be reflecting some light.
- In physics, a black body is a perfect absorber of light.

2. White is a color. White reflects all the colors of the visible light spectrum to the eyes.

The colors we see are simply a matter of how much of the color present in the light is reflected. To be completely accurate, a color reflects the wavelengths, in the nm range, that our retinal cones respond to. The medium is the process of reflection of the wavelength of the color. The receiver is our eyes, which receive the wavelength of the color.
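The reflection account above can be sketched numerically: the color reaching the eye is the light source filtered by the object's reflectance, with an ideal white surface reflecting everything and an ideal black one reflecting nothing. The reflectance values below are invented for illustration:

```python
def seen_color(light, reflectance):
    """Light reaching the eye: each channel of the source scaled by the
    object's per-channel reflectance (a 0-1 factor)."""
    return tuple(round(l * r) for l, r in zip(light, reflectance))

white_light = (255, 255, 255)
red_apple   = (0.9, 0.1, 0.1)   # reflects mostly long (red) wavelengths
white_paper = (1.0, 1.0, 1.0)   # reflects (nearly) everything
black_cloth = (0.0, 0.0, 0.0)   # absorbs (nearly) everything

print(seen_color(white_light, red_apple))   # (230, 26, 26): looks red
print(seen_color(white_light, white_paper)) # (255, 255, 255): white
print(seen_color(white_light, black_cloth)) # (0, 0, 0): black
```

As the article's "grey area" notes, real black objects have small nonzero reflectances, which this idealized model rounds down to zero.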
Insomnia Might Boost Heart Failure Risk WEDNESDAY, March 6 (HealthDay News) -- Insomnia may triple the risk of developing heart failure, a large new study from Norway suggests. Heart problems definitely lead to sleep problems, said lead researcher Dr. Lars Laugsand, but his team tried to determine whether the reverse might also be true. "Insomnia is a frequent and easily recognized, potentially manageable and treatable condition," said Laugsand, a postdoctoral fellow in the department of public health at the Norwegian University of Science and Technology, in Trondheim. Laugsand added that the researchers found an association between insomnia and heart failure, not that insomnia actually causes heart failure. "We still do not know whether heart failure is really caused by insomnia, and it is still unclear why insomnia is linked to higher heart failure risk," he said. Heart failure is a chronic condition in which the heart does not pump blood efficiently enough to meet the body's needs. There are some indications that a biological cause might explain an insomnia-heart failure connection, Laugsand said. "One possible mechanism could be that insomnia activates stress responses in the body that might negatively affect heart function," he explained. "If our results are confirmed by others and there is a real causal association, evaluation of insomnia symptoms might have consequences for cardiovascular prevention," Laugsand added. The report was published March 6 in the online edition of the European Heart Journal. To measure the effect of insomnia on the risk of heart failure, Laugsand's team collected data on more than 54,000 men and women who took part in a Norwegian study on public health factors between 1995 and 1997. None of the participants had heart failure at the start of the study. As part of the study, researchers asked about the quality of the participants' sleep and if they had difficulty going to sleep and staying asleep. 
After 11 years of follow-up, more than 1,400 participants had developed heart failure, Laugsand's group found. People who had multiple insomnia symptoms had a threefold increased risk of developing heart failure, compared to people who slept well. When depression and anxiety were accounted for, the risk was slightly more than fourfold. Specifically, having difficulties going to sleep and staying asleep almost every night, and feeling tired in the morning more than once a week, were associated with an increased risk of heart failure, compared to people who never or rarely suffered from these symptoms. These findings remained even after the researchers took age, sex, marital status, education, shift work, blood pressure, cholesterol, diabetes, weight, physical activity, smoking, alcohol use and previous heart attacks into account. Dr. Gregg Fonarow, professor of cardiology at the University of California, Los Angeles, said, "Heart failure results in substantial [illness], mortality and health care expenditures." Insomnia has been associated with an increased risk for cardiovascular events and death, and two earlier studies have suggested that insomnia may also be associated with the risk of heart failure, he noted. Insomnia can increase the body's inflammatory and stress responses, said Fonarow, who's also a spokesman for the American Heart Association. "Activation of these systems, as well as other mechanisms, may link insomnia to an increased risk of developing heart failure and other cardiovascular disease," he said. "However, whether preventing or treating insomnia would lower the risk of developing heart failure requires further study." To learn more about insomnia, visit the National Sleep Foundation.
SOURCES: Lars Laugsand, M.D., postdoctoral fellow, department of public health, Norwegian University of Science and Technology, Trondheim; Gregg Fonarow, M.D., spokesman, American Heart Association, and professor, cardiology, University of California, Los Angeles; March 6, 2013, European Heart Journal, online.
This article, published by Yongyut Trisurat, Anak Pattanavibool, George A. Gale and David H. Reed in Wildlife Research (CSIRO Publishing), 2010, 37, 401-412, demonstrates how the CAP principles helped assess wildlife population viability for multiple species in the Western Forest Complex in Thailand. If you wish to request a copy of the article, please contact Yongyut Trisurat (email@example.com).

Context. Assessing the viability of animal populations in the wild is difficult or impossible, primarily because of limited data. However, there is an urgent need to develop methods for estimating population sizes and improving the viability of target species.

Aims. To define suitable habitat for sambar (Cervus unicolor), banteng (Bos javanicus), gaur (Bos gaurus), Asian elephant (Elephas maximus) and tiger (Panthera tigris) in the Western Forest Complex, Thailand, and to assess their current status as well as estimate how the landscape needs to be managed to maintain viable populations.

Methods. The present paper demonstrates a method for combining a rapid ecological assessment, landscape indices, GIS-based wildlife-habitat models, and knowledge of minimum viable population sizes to guide landscape-management decisions and improve conservation outcomes through habitat restoration.

Key results. The current viabilities for gaur and elephant are fair, whereas they are poor for tiger and banteng. However, landscape quality outside the current distributions was relatively intact for all species, ranging from moderate to high levels of connectivity. In addition, the population viability for sambar is very good under the current and desired conditions.

Conclusions. If managers in this complex wish to upgrade the viabilities of gaur, elephant, tiger and banteng within the next 10 years, park rangers and stakeholders should aim to increase the amount of usable habitat by ~2170 km2, or 17% of existing suitable habitats.
The key strategies are to reduce human pressures, enhance ungulate habitats and increase connectivity of suitable habitats outside the current distributions. Implications. The present paper provides a particularly useful method for managers and forest-policy planners for assessing and managing habitat suitability for target wildlife and their population viability in protected-area networks where knowledge of the demographic attributes (e.g. birth and death rates) of wildlife populations is too limited to perform population viability analysis.
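The habitat figure in the conclusions implies a baseline that is easy to back out: if the proposed ~2170 km2 increase is 17% of existing suitable habitat, the existing habitat is roughly 12,765 km2. A quick arithmetic check:

```python
# Back-of-the-envelope check of the abstract's figure: the proposed
# increase (~2170 km^2) is stated to be 17% of existing suitable habitat.
increase_km2 = 2170
fraction = 0.17

existing_km2 = increase_km2 / fraction
print(round(existing_km2))  # ~12765 km^2 of existing suitable habitat
```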
Early Detection is Key in the Fight Against Ovarian Cancer Northwestern Memorial experts urge women to recognize warning signs; receive appropriate screenings Ovarian cancer is a rare but often deadly disease that can strike at any time in a woman’s life. It affects one in 70 women and in the past was referred to as a silent killer, but researchers have found that there are symptoms associated with ovarian cancer that can assist in early detection. Experts at Northwestern Memorial say the best defense is to make use of preventive methods, understand the risks and recognize potential warning signs of ovarian cancer. “Currently, there is no reliable screening test to identify early ovarian cancer. Women need to focus on good health habits, listen to their bodies and tell their doctor if a change occurs,” said Diljeet Singh, MD, gynecological oncologist and co-director of the Ovarian Cancer Early Detection and Prevention Program at Northwestern Memorial Hospital. Catching ovarian cancer early increases five-year survival odds from 30 percent to more than 90 percent. But the symptoms of ovarian cancer often mimic other less dangerous conditions making it difficult to recognize. Singh says women should be aware of possible early warning signs which include: • Pelvic or abdominal pain • Difficulty eating or feeling full quickly • Urinary symptoms (urgency or frequency) • Increased abdominal size (pants getting tighter around waist) Singh comments that the frequency and number of symptoms is important and women who experience a combination of these symptoms almost daily for two to three weeks should see their doctor. Doctors say it is not clear what causes ovarian cancer but there are factors that increase the odds of developing the disease including carrying a mutation of the BRCA gene, having a personal history of breast cancer or a family history of ovarian cancer, being over the age of 45 or if a woman is obese. 
If a woman is high-risk, doctors recommend screening begin at age 20 to 25, or five to 10 years earlier than the youngest age of diagnosis in the family. In addition, there are genetic tests available that can identify women who are at a substantially increased risk. While ovarian cancer is difficult to detect, specialized centers such as the Northwestern Ovarian Cancer Early Detection and Prevention Program, a collaborative effort between the hospital and the Robert H. Lurie Comprehensive Cancer Center of Northwestern University, have strategies for monitoring women at risk. Patients are monitored with physical examinations, ultrasound and blood tests every six months. “The goals of the program are to help women understand their personal risks and what they can do to decrease their risk, to help develop methods of early detection and prevention and to identify women who would benefit from preventive surgery,” said Singh, also an associate professor at the department of obstetrics and gynecology at Northwestern University Feinberg School of Medicine and member of the Lurie Cancer Center. Studies have shown there are ways to reduce the risk of developing the disease. Women who use birth control pills for at least five years are three times less likely to develop ovarian cancer. In addition, permanent forms of birth control such as tubal ligation have been found to reduce the risk of ovarian cancer by 50 percent. Women who have an extensive family history of breast or ovarian cancer, or who carry altered versions of the BRCA genes, may receive a recommendation to remove the ovaries and fallopian tubes, which lowers the risk of ovarian cancer by more than 95 percent. “Eating a diet rich in fruits and vegetables, getting regular exercise, maintaining a normal body weight and managing stresses are all ways women can help decrease their risk of ovarian cancer,” added Singh.
Treatment for ovarian cancer usually begins with surgery to determine if the cancer has spread. Doctors at Northwestern Memorial also use a form of chemotherapy called intraperitoneal chemotherapy, which is injected directly into the abdominal cavity and has been linked to a 15-month improvement in survival. “The best scenario would be to prevent this cancer entirely, but until that day comes women need to focus on good health behaviors, listen to their bodies and know their family history,” stated Singh. For more information please call 312-926-0779.
Grassland Plants of South Dakota and the Northern Great Plains. "Learning to identify and understand the plants that produce the forage, provide the cover, protect the soil, and enrich our lives in many ways is an essential first step to conserving our native grasslands, whether they are the grasslands we own and manage, or the grasslands we hunt and hike over, or simply the grasslands we view from our vehicles. This photographic guide can help you discover and learn about the plants inhabiting our northern prairies and plains." - David J. Ode, Botanist/Ecologist, South Dakota Game, Fish and Parks Department. Authors are James R. Johnson and Gary E. Larson. $19.95 and free shipping, from AgBio Communications.
I do a lesson with tints and shades that requires no drawing skills. The kids make a large triangular banner using their first or last initial, usually 24x18 or 24x36 if you have that sized paper. They make the letter on a piece of 9x12 paper. They can embellish a simple block or puffy letter with a few swirls or whatever, but keep it simple enough to paint in the shape with whatever their color choice is. Then transfer the letter onto the triangle, near the center of the upper part of the wide end. I have them tape the letter in place behind the triangle and trace through on the light box or at the window.

This is a monochromatic painting the way I do it with my 3rd graders. They choose one color and use it straight from the bottle to paint the letter shape. This is the only shape with unmixed color. Then they divide the background behind the letter into large shapes using straight or curved lines (nothing too nervous, if you know what I mean - just simple lines). I usually limit the number of extra shapes to about 10-12 and remind the kids that the letter is the star of this show, not the background. Then they start with tints (color plus white) and paint in half the shapes. Do the same with shades (color plus black) to finish the background area. When dry, we outline with a wide sharpie marker to neaten up the painted edges between shapes and around the letter. Sometimes we edge the whole thing with colored construction paper strips to make the banner a bit more finished.

Concepts used are always mixing from light to dark to avoid needing gallons of white paint to lighten up a dark blue, for example. Start with the lightest tint and paint a shape, progressing darker by adding more of the prime color into that bowl. Same with shades, starting with a drop of black into the prime color, painting a shape, and progressing with more black until all the shapes are done. Saves a lot of paint that way! The kids usually pick a color that will match their bedroom at home.
My third graders love this project and look forward to it each year. If you need a photo, I can take one and send it tomorrow. I am off Mondays.
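The tint/shade idea in the lesson (tint = color plus white, shade = color plus black) can be sketched numerically. This is a rough linear blend in RGB; real paint does not mix this linearly, and the color values are illustrative:

```python
def tint(color, t):
    """Blend a color toward white by fraction t (0 = unchanged, 1 = white)."""
    return tuple(round(c + t * (255 - c)) for c in color)

def shade(color, s):
    """Blend a color toward black by fraction s (0 = unchanged, 1 = black)."""
    return tuple(round(c * (1 - s)) for c in color)

blue = (0, 0, 200)
# A light-to-dark ramp like the one the kids paint, lightest tint first:
ramp = [tint(blue, 0.6), tint(blue, 0.3), blue, shade(blue, 0.3), shade(blue, 0.6)]
print(ramp)
print(tint(blue, 0.5))   # (128, 128, 228)
print(shade(blue, 0.5))  # (0, 0, 100)
```

Mixing "from light to dark" in the lesson matches how these ramps work: it is much cheaper to darken a light mix step by step than to lighten a dark one.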
Maui’s death in set net takes species one step closer to extinction WWF-New Zealand’s Executive Director Chris Howe says: “This death of a Maui’s dolphin is a tragedy for a species that is down to only about 100 individuals. Set nets in Maui’s habitat continue to pose an unacceptable risk to these dolphins. Until we get set nets out of the shallow coastal waters where they live, more Maui’s will needlessly get entangled and drown. The species could be extinct within our generation without urgent action.” Maui’s dolphins, a subspecies of the South Island’s Hector’s dolphins, are found only off the west coast of the North Island. They are the world’s rarest marine dolphin, classified internationally as critically endangered. The Ministry of Agriculture and Forestry (MAF) yesterday released a statement saying they believe that the dead animal was a Maui’s, not a Hector’s dolphin as originally reported, because of the location of its death. The dead dolphin was returned to the sea by the fisher. MAF claimed the death “occurred outside of the current known range of Maui’s dolphins, as well as outside the current restrictions.” However there have been independent verified sightings of Maui’s dolphins in the coastal waters off Taranaki in recent years, and WWF-New Zealand is urging MAF and the government to extend protection measures throughout the Maui’s historical range to give the species the best chance of survival and recovery. Despite fishing restrictions announced in 2008, Maui’s are not currently protected throughout their entire range. WWF is calling on the government to extend protection measures into harbours and the southern extent of their current range, along with better monitoring and policing of regulations. WWF-New Zealand is urging all members of the public who see a Maui’s dolphin - noted for their rounded dorsal fin - to report it to a special sightings hotline, 0800 4 MAUIS.
Mr Howe says: “Every sighting of one of these rare and precious dolphins matters. The more we know about where Maui’s range and their movements, the better we can protect them. “WWF will continue to speak out on behalf of all those New Zealanders who want to stop the extinction of Maui’s dolphins, and urge the government to extend the current protection measures before it is too late.”
December 19, 2012. Kate Wong is an editor and writer at Scientific American covering paleontology, archaeology and life sciences.

PALEO DIET: Analyses of tartar on the teeth of Australopithecus sediba show that this early human species ate bark and other unexpected foods. Image: Kate Wong

Recent years have brought considerable riches for those of us interested in human evolution and 2012 proved no exception. New fossils, archaeological finds and genetic analyses yielded thrilling insights into the shape of the family tree, the diets of our ancient predecessors, the origins of art and advanced weaponry, the interactions between early Homo sapiens and other human species, and other facets of our ancestors’ lives. The list below highlights the discoveries that most captivated me in a year of revelations about the way we were. Did I miss your favorite? Let me know in the comments.

- A 3.4-million-year-old fossil foot suggests a second lineage of hominins (creatures more closely related to us than to our closest living relatives, chimpanzees) may have lived alongside Lucy’s kind and spent more time in the trees than on the ground.
- Fossils from Kenya dating to between 1.87 million and 1.95 million years ago rekindle debate over whether our own genus, Homo, split into multiple lineages early on.
- Analysis of tartar, molar wear and tooth chemistry in the nearly two-million-year-old hominin known as Australopithecus sediba shows that it had an unexpected diet, including tree bark.
- A shift in the technology and diet of early Homo around two million years ago may have doomed large carnivores.
- Tiny bits of burned plants and bone from a South African cave show that humans had tamed fire by 1 million years ago, some 600,000 years earlier than had previously been documented.
- Our ancestors began making multicomponent tools in the form of deadly stone-tipped spears 500,000 years ago, 200,000 years earlier than previously thought.
- Cave paintings in Spain are the oldest in the world and are sufficiently ancient to be the creations of Neandertals.
- Neandertals hunted birds for their fashionable feathers for thousands of years and may have exploited certain plants for their medicinal properties, compelling evidence that our hominin cousins were cognitively sophisticated.
- The reconstructed genome of the Denisovans, an enigmatic group of archaic hominins, confirms that early Homo sapiens interbred with them and reveals new details of their genetic legacy.
- Whole-genome sequencing of modern hunter-gatherers from Africa turns up loads of previously unknown genetic variants and indicates that early Homo sapiens interbred with another hominin species long ago in Africa.
- Paleoanthropology’s hobbit, a tiny hominin species called Homo floresiensis, gets a new face thanks to forensic reconstruction, and the result is startlingly familiar.
- Stone tools and preserved poop from Oregon add to mounting evidence that the early human colonization of the Americas was more complex than scholars once envisioned.
- A study finds that mom’s metabolism, not the size of the pelvis, limits gestation length to nine months, providing a new explanation for why humans give birth to helpless babies.
Australian Bureau of Statistics, 6202.0 - Labour Force, Australia, Jun 2012. Released at 11:30 AM (Canberra time) 12/07/2012.

UNDERSTANDING THE AUSTRALIAN LABOUR FORCE USING ABS STATISTICS

In order to understand what is happening in Australian society, or our economy, it is helpful to understand people’s patterns of work, unemployment and retirement. ABS statistics can help to build this picture. Fifty years ago, the majority of Australians who worked were men working full-time. Most worked well into their 60s, sometimes beyond, and if they were not working most were out looking for work until that age. The picture now is very different. Far more people work part-time, or in temporary or casual jobs. Retirement ages vary much more, with a greater proportion of men not participating in the labour force once they are older than 55. Nowadays, 45% of working Australians are women, compared with just 30% fifty years ago. These are profound changes that have helped shape 21st Century Australia.

This note explains some of the key labour force figures the ABS produces that can be used to obtain a better picture of the labour market. Every month, the ABS runs a Labour Force Survey across Australia covering almost 30,000 homes as well as a selection of hotels, hospitals, boarding schools, colleges, prisons and Indigenous communities. Apart from the Census, the Labour Force Survey is the largest household collection undertaken by the ABS. Data are collected for about 60,000 people, and these people live in a broad range of areas and have diverse backgrounds - they are a very good representation of the Australian population. From this information, the ABS produces a wide variety of statistics that paint a picture of the labour market. Most statistics are produced using established international standards, to ensure they can be easily compared with the rest of the world.
The ABS has also introduced new statistics in recent years that bring to light further aspects of the labour market. It can be informative to look at all of these indicators to get a grasp of what is happening, particularly when the economy is changing quickly.

One thing to remember about the ABS labour force figures is that when a publication states that, for example, 11.4 million Australians are employed, the ABS has not actually checked with each and every one of these people. In common with most statistics produced, the ABS surveys a sample of people across Australia and then scales up the results - based on the latest population figures - to give a total for the whole country. Because the figures are from a sample, they are subject to possible error. The Labour Force Survey is a large one, so the error is minimised, and the ABS provides information about the possible size of the error to help users understand how reliable the estimates are.

The above diagram shows the breakdown of the civilian population into the different groups of labour force participation. Each pixel represents about 1,000 people as at September 2011.

According to established international standards, everyone who works for at least one hour for pay or profit is considered to be employed. This includes everyone from teenagers who work part-time after school to a partially retired grandparent helping out at the school canteen. While it is unreasonable to expect a family to survive on the income of an hour of work per week, one could also argue that all work, no matter how small, contributes to the economy. This definition of 'one hour or more' - an international standard - means that the ABS' employment figures can be compared with the rest of the world.

It is, of course, easy to argue that someone who works 2 or 3 hours per week is not really "employed". But a definition is required, and any cut-off point is open to debate.
Imagine if the ABS defined being 'employed' as working 15 hours a week. Would it be reasonable to argue that someone who works 14.5 hours is unemployed, while someone who works 15 hours is not?

It is also a mistake to assume that all persons who work low hours would prefer to work longer hours, and therefore represent 'hidden' unemployment. Most people who work less than 15 hours a week are not seeking additional hours, although of course there are some who are. The issue of underemployment is discussed further below.

Rather than open up such discussions, the ABS prefers to use the international standard, and it encourages people to consider other indicators to form a better picture of what is happening. Alongside the total employed figures, full-time and part-time estimates are provided to better inform on the different kinds of employment, and a detailed breakdown by the number of hours worked is also provided to allow for customised definitions of 'employment'.

Commentators often refer to the rise in employment as the number of new jobs created each month. This can be misleading, because the ABS doesn't actually measure the number of jobs. This might sound like semantics, but if a person in the Labour Force Survey who is employed gains a second part-time job alongside their main job, this has no impact on the employment estimate - the Labour Force Survey does not count jobs, it counts people.

It is also important to bear in mind that if population growth outpaces the number of new people in employment, there might be an increase in the employment figure but a lower percentage of people with jobs. It is often informative to look at the proportion of people in employment. This measure, called the employment to population ratio, is the number of employed people expressed as a percentage of the civilian population aged 15 and over. This removes the impact of population growth to give a better picture of labour market dynamics over time.
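The employment to population ratio is simple enough to compute directly. Here is a minimal sketch in Python; the 11.4 million employed figure echoes the example above, but the population figure is an assumption for illustration, not an actual ABS estimate:

```python
# Illustrative sketch of the employment to population ratio.
# Figures are invented for illustration, not actual ABS estimates.

def employment_to_population_ratio(employed: float, civilian_pop_15_plus: float) -> float:
    """Employed persons as a percentage of the civilian population aged 15 and over."""
    return 100.0 * employed / civilian_pop_15_plus

# e.g. 11.4 million employed out of an assumed civilian population of 18.0 million
ratio = employment_to_population_ratio(11.4e6, 18.0e6)
print(f"employment to population ratio: {ratio:.1f}%")  # prints 63.3%
```

Because the denominator is the whole civilian population, the ratio falls when population growth outpaces employment growth, even if the raw employment count rises.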
AGGREGATE MONTHLY HOURS WORKED

Instead of counting how many people are working, another way of looking at how much Australians are working is to count the total number of hours worked by everyone. This is measured by a statistic produced by the ABS called aggregate monthly hours worked, and it is measured in millions of hours. This can sometimes be more revealing of what is happening in the labour market, particularly in a weakening economy, where a fall in hours worked can usually be seen before any fall in the number of people employed.

PEOPLE WHO ARE NOT WORKING: THE UNEMPLOYED AND OTHERS

There are many reasons why Australians do not work. Some have retired and are not interested in going back to work. Some are staying home to look after children and plan on going back to work once the kids have grown older. Some are out canvassing for work every day, while others have given up looking. The ABS separates all of these people into those who are unemployed and those who are not by asking two simple questions: If you were given a job today, could you start straight away? And have you taken active steps to look for work?

Only those who are ready to get back into work, and are taking active steps to find a job, are classed as unemployed. Some people might like to work but are not currently available to work - such as a parent who is busy looking after small children. Other people might want to work but have given up actively looking for work - such as a discouraged job seeker who only half-heartedly glances at the job ads in the newspaper but doesn't call or submit any applications. These people are not considered to be unemployed, but are regarded as being marginally attached to the labour force. They can be thought of as 'potentially unemployed' when, or if, their circumstances change, but are regarded as being on the fringe of labour force participation until then.
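The two-question test described above amounts to a small decision rule. The sketch below is a minimal illustration of that rule; the function and category names are hypothetical, and this is not the ABS's actual processing logic:

```python
# Minimal sketch of the two-question classification described above.
# Function and category names are illustrative, not the ABS's actual logic.

def classify_non_working_person(could_start_today: bool,
                                actively_looked_for_work: bool,
                                wants_to_work: bool) -> str:
    """Classify a person who is not currently working."""
    if could_start_today and actively_looked_for_work:
        return "unemployed"
    if wants_to_work:
        # e.g. a discouraged job seeker, or a parent not currently available
        return "marginally attached"
    return "not in the labour force"

# A discouraged job seeker: available and wants work, but not actively applying.
print(classify_non_working_person(True, False, True))  # prints: marginally attached
```

The rule captures the key point: wanting a job is not enough; a person must be both available to start and actively searching to be counted as unemployed.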
It is important to note that the ABS unemployment figures are not the same as the data that Centrelink collects on the number of people receiving unemployment benefits. The ABS bases its figures on asking people directly about their availability and steps to find work. In this way, policy decisions about, for example, the criteria for the receipt of unemployment benefits have no impact on the way that the unemployment figures are measured.

LABOUR FORCE AND PARTICIPATION RATE

The size of the labour force is a measure of the total number of people in Australia who are willing and able to work. It includes everyone who is working or actively looking for work - that is, the number of employed and unemployed together as one group. The percentage of the total population who are in the labour force is known as the participation rate.

The unemployment rate is the percentage of people in the labour force who are unemployed. This is a popular measure around the world for tracking a country's economic health, as it removes all the people who are not participating (such as those who are retired). Because the unemployment rate is expressed as a percentage, it is not directly influenced by population growth.

The underemployment rate is a useful companion to the unemployment rate. Instead of looking at the people who are unemployed, the underemployment rate captures those who are currently employed but are willing and able to work more hours. It highlights the proportion of the labour force who work part-time but would prefer to work full-time. This is sometimes referred to as the 'hidden' potential in the labour force.

The underemployment rate can be an important indicator of changes in the economic cycle. During an economic slowdown, some people lose their jobs, become unemployed and contribute to a rising unemployment rate. But while this is happening, there might well be others who remain working but have their hours reduced, for example from full-time to part-time.
As long as they want to work more hours, they are classed as underemployed, and contribute to the underemployment rate.

LABOUR FORCE UNDERUTILISATION RATE

The labour force underutilisation rate combines the unemployment rate and the underemployment rate into a single figure that represents the percentage of the labour force that is willing and able to do more work. It includes people who are not currently working and want to start, and those who are currently working but want to - and can - work more hours. It provides an alternative, and more complete, picture of labour market supply than the unemployment rate, as changes in the underutilisation rate capture both changes in unemployment and underemployment, indicating the spare capacity in the Australian labour force.

For any queries regarding these measures, or any other queries regarding the Labour Force Survey estimates, contact Labour Force in Canberra on 02 6252 6525, or via email at email@example.com.

This page last updated 8 August 2012.
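The relationships among the rates defined in the last two sections can be sketched as follows. All counts are invented for illustration. Note that the unemployment and underemployment rates share the labour force as their denominator, so the underutilisation rate is simply their sum:

```python
# Sketch of the headline rates defined above; all counts are invented.

def labour_force_rates(employed, unemployed, underemployed, civilian_pop_15_plus):
    """Return the rates as percentages of their respective denominators."""
    labour_force = employed + unemployed
    return {
        "participation_rate": 100.0 * labour_force / civilian_pop_15_plus,
        "unemployment_rate": 100.0 * unemployed / labour_force,
        "underemployment_rate": 100.0 * underemployed / labour_force,
        # Underutilisation combines the two, over the same denominator.
        "underutilisation_rate": 100.0 * (unemployed + underemployed) / labour_force,
    }

rates = labour_force_rates(employed=11.4e6, unemployed=0.63e6,
                           underemployed=0.9e6, civilian_pop_15_plus=18.0e6)
for name, value in rates.items():
    print(f"{name}: {value:.1f}%")
```

By construction, the underutilisation rate equals the sum of the unemployment and underemployment rates, which is exactly what the description above says it should be.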
UNESCO describes the site Sacred Mounts of Piedmont and Lombardy as follows: the nine Sacred Mounts of northern Italy are groups of chapels and other architectural elements, built between the end of the 15th and the end of the 17th century and dedicated to different aspects of the Christian faith. In addition to their symbolic religious meanings, these places are characterized by remarkable beauty, thanks to the careful integration of architectural elements into the surrounding natural landscapes of hills, woods and lakes. They also house very important works of art, such as frescoes and statues.

The UNESCO World Heritage Committee inscribed the site on its List for the following reasons: the realization of architectural and sacred works of art in a natural landscape, for didactic and religious aims, finds its highest expression in the Sacred Mounts of northern Italy, and it deeply influenced the later development of this phenomenon in the rest of Europe. The Sacred Mounts of northern Italy represent the successful integration of architecture and fine arts, created for religious aims, into a landscape of remarkable beauty, during a critical period in the history of the Catholic Church.

The Sacred Mounts of the UNESCO site are mainly located in the Alpine arc, close to lakes or at the bottom of valleys crossed by the river Po's tributaries and by important old routes. The complexes' panoramic and prominent locations on hills or mountains, and their subdivision into chapels, made them territorial reference points that were easy to identify. Indeed, creating such reference points was probably one of the aims of these locations: the strategic position of the sacred mountains marks the northern limit of the Po valley, which was, at the time, symbolically protected by them - that is, by Christianity itself.
An important thread ideally binds these nine Sacred Mounts: the first crib of Greccio, made by Saint Francis of Assisi; the Holy Representations; the protection of the Holy Land; the foundation of the Sacred Mounts (first by the Gerosolimitani, then by counter-reformists); and the Viae Crucis. Almost all of these stories tell of Franciscan fathers, who were fundamental to the building of all the complexes. Some were the inventors, like Bernardino Caimi in Varallo, Tommaso of Firenze in Montaione and Michelangelo of Montiglio in Belmonte; some the designers, like Cleto of Castelletto Ticino in Orta; and some the preachers, like Giovan Battista Aguggiari in Varese, Fedele of San Germano in Oropa, and Gioacchino of Cassano and Andrea of Rho in Domodossola. It is also worth remembering that between 1731 and 1751 it was again a Franciscan, Leonardo of Porto Maurizio, who built five hundred and seventy-two Viae Crucis throughout Italy, and that we owe the nineteenth-century restoration and renewal of Crea's Sacred Mount to the Franciscan fathers Costantino Cerri and Giuseppe Latini. The Observants and the Capuchins, both disciples of the poor man of Assisi, were the most sensitive and active champions of these theatrical, sculptural and painted representations.

The dedications and histories told in each complex recall both pre-existing local devotions and the times of their religious and cultural foundation. Thus in Varallo the representation of Jesus Christ's life prevails; in Orta, that of Saint Francis of Assisi; in Oropa, that of the Virgin Mary; while in Varese and in Ossuccio they represent the Marian Rosary prayer with its fifteen Mysteries. Wishing to retrace the steps of Christ's Passion, like Jesus's Via Dolorosa ('Way of Sorrows') in Jerusalem, they modified the Sacred Mount of Crea and built the Calvary Sacred Mounts of Domodossola and Belmonte.
In Ghiffa, by contrast, the devotion to the Holy Trinity proved so difficult and abstract to represent that, in the end, they returned in part to a much more usual and immediate subject: the Via Crucis.
Weevils are defined as any beetle from the superfamily Curculionoidea. There are over 60,000 species of weevils; however, keep in mind that although some types of beetles include "weevil" in their common name, they may not actually be part of the weevil superfamily. Weevils, also commonly known as snout beetles, are usually best characterized by the shape of their heads, which elongate into a snout. This characteristic is common among nearly all weevil species.

Weevils are so varied that they can infest many different areas. Some species are considered pantry pests, while others are wood-attacking insects. Furthermore, some may just be occasional invaders and a nuisance inside the home when they enter to overwinter. Weevils are found throughout the United States and even in Canada.

Depending on the species, weevils range from about 3 to 11 mm long. Their bodies have an elongated or almost oval shape. Their heads protrude into a snout, with mandibles for feeding at the end. Color also varies widely by species and can range across almost all colors; some species have a uniform color, whereas others have a significant design or pattern on their shells.

Several common weevil species will invade your home, and in the United States there are several different representative species. By identifying the species, a homeowner can better adjust the control method according to where the weevils are infesting and how they are invading the home.

Since weevil larvae burrow underground and remain there until they reach maturity, they are rarely seen. Adult weevils are nocturnal, and since most of their activity occurs at night, they are also not often spotted. During the daytime they remain hidden on the plant that they are infesting, or in the leaf litter and soil around it.
If the plant is disturbed, they can be noticed dropping to the soil and scurrying off into the litter or debris around the plant. Most gardeners will notice the presence of adult weevils from the plant's appearance: the tips of the leaves will seem chewed off and the surface looks spotted. All species of weevils will enter structures when outdoor conditions have become too adverse or unfavorable, usually towards the end of the season, when plants die off.

The key to proper removal is identification. Because different species will attack or infest different plants or locations around your home, it is most important to identify the weevil before control methods are applied.

Exclusion is also an important method to get rid of weevils in your house. Exclusion involves sealing cracks and crevices around the structure; screens on windows and doors should be tight-fitting and in good condition. Exclusion methods should be applied either before or after they enter the home.

Treatment of outdoor ornamental gardens with an appropriately labeled pesticide will also discourage infestation. Use appropriately labeled insecticides when notching of plant leaves first begins to appear, usually in late June to early July. For indoor control, apply an appropriately labeled insecticide in areas like the attic, wall voids, near baseboards, window sills, and door frames. Keep in mind that all product labels must be thoroughly reviewed before any application, and treated areas must be left alone to dry before you come in contact with them again.
Other Dairy Products
(Originally Published 1939)

It has been known for a long time that babies fed with boiled milk thrived better than those fed with regular milk. It was supposed that this improved quality lay in the destruction of pathogenic organisms. Gradually a wealth of data has been accumulating which shows that boiling or other treatment of milk increases its digestibility.

After Buckley had shown that the physical nature of the curd of milk is important in determining the food value of milk, and Ladd had published some chemical data showing that homogenized milk produced a soft curd in the infant's stomach and was similar to breast milk in this respect, Washburn and Jones showed that homogenization of milk produced curds which were much more flocculent and friable than those of regular milk, although this property was not reflected in any improved nutrition of their experimental animals, young pigs. The recent work of Hill is credited with giving this subject of the digestive quality of milk an emphasis which has found important application in the commercial production of soft-curd milk.

Interest has been further stimulated competitively by reason of the inroads that the evaporated-milk industry has made into the bottled trade, largely by reason of the superior properties of the canned product in infant feeding. This has led to laboratory activity directed toward devising processes for imparting soft-curd properties and for measuring curd hardness, rather than toward ascertaining to what extent, if any, these treatments actually improve the digestibility of the milk. The scientific literature leaves the subject in a very confused state. Whatever improved digestibility there is seems to result entirely from the speeding of the passage of the milk from the stomach, and not from any increased food value or degree of assimilation. The whole subject is excellently reviewed by Doan in the Journal of Dairy Science, 21, 739-756 (1938).
NATURAL SOFT-CURD MILK

Hill found that the milk of different cows possessed unequal digestibility, and that many infants could tolerate milk from certain cows but not that from others. In general, this improved tolerance was associated with milk of relatively low total-solids content, although this relationship did not seem to be exclusively specific. Soft-curd milk was produced by cows of different breeds and was fairly uniform over the lactation period of a given cow. This property enabled herdsmen to select cows for the regular production of this kind of milk. Soft-curd milk is more rapidly digested by humans, calves, and rats, and leaves the stomach more quickly than regular milks. At the same time, soft-curd milk has a lower content of total solids and a smaller calorific value.

It has been observed that cows suffering with mastitis produce a soft-curd milk. This has led many persons to think that all soft-curd milk is pathologic. Such a belief is erroneous. Soft-curd milk is actually under more stringent control than regular milk because its production is mostly, if not entirely, limited to Grade A and certified herds. However, on account of the widespread prevalence of subclinical mastitis, it is recommended that the presence of udder infections be tested for when the curd tension is determined.

Elias showed 10 that soft-curd milk gave curds in the stomach similar to those of boiled milk. Espe and Dye reported that doubling the curd tension increased the length of the digestive period from 30 to 65 percent, and that boiling markedly lowered the curd tension. Welch and Doan showed that curd tension was greater in milk of high casein content, and that equalization of casein content by dilution with water caused both the curd tension and the differences in rates of digestion largely to disappear, although the casein content might exercise only a minor role in the rate of digestion if the curd is artificially softened by heating, homogenization, and other means.
ARTIFICIAL SOFT-CURD MILK

Soft curd by homogenization. Softness of curd can be imparted to a milk by homogenizing it. This procedure consists in pumping milk under very great pressure through a special valve with small clearance so that the butterfat globules are broken up and uniformly distributed. The homogenization of skimmed milk does not impart soft-curd properties; at least about 1 percent of butterfat or other oil must be present. Chocolate milk is a soft-curd milk. Therefore, it seems that the imparting of soft-curd properties by mechanical means is a function of the degree of dispersion of discrete particles, whereby the curd is mechanically prevented from setting into a solid homogeneous mass. Feeding experiments on rats showed that this homogenized soft-curd milk was digested just about as quickly as boiled milk or natural soft-curd milk.

Letters patent 12 have been issued to cover the production of soft-curd milk by homogenization, although the process seems to have been practiced by milk companies for many years prior to the granting of the patent. The difficulty of controlling exactly the effectiveness of the homogenizing machine, together with the variability in the composition or physical nature of the milk, particularly the butterfat, precludes the determination of the most efficient temperatures and pressures. Experience has taught that the curd of a given milk cannot be softened beyond a certain point, regardless of the pressure used; on the other hand, too light a pressure does not insure permanency of the imparted curd softness. In industrial practice, consistent results can be obtained when milk is homogenized at pressures of about 2500 to 3000 pounds per square inch at a temperature of about 145° F. This softens the curd to a tension of about 30 grams, or reduces the curd tension of average market milk by about 50 percent. The homogenization of milk must be carefully conducted if a satisfactory product is to be obtained.
Trout and his associates found 13 that some milk upon homogenization developed rancidity within 15 minutes after treatment. This effect seemed to be caused by a lipolytic enzyme which could be inactivated at temperatures of pasteurization. Accordingly, this off-flavor can be prevented by pasteurizing the milk before or immediately after homogenization. The flavor of the finished product is generally considered to be slightly better if pasteurization precedes homogenization, but health officers are inclined to require pasteurization to come last.

Homogenized milk, unless the milk was initially of high quality, may exhibit a smudgy yellow or gray sediment in the bottom of the bottle. It is too finely divided to be revealed on a sediment disc. Babcock 14 reported that it consists largely of leucocytes, epithelial cells, and some finely divided dirt. Charles and Sommer 15 state that sediment may occur in milk of the highest sanitary quality and may come from a healthy udder. It is not seen in unhomogenized milk because the rising of the fat globules into the cream layer sweeps this light material upward. Clarification by centrifugal clarifying machines will remove it.

Soft-curd properties, artificially imparted to milk by homogenization, were studied by Anthony on two adult males who possessed the unusual ability to regurgitate at will without distress. This enabled them to drink the milk, hold it in their stomachs for 30 minutes, and then return it without the aid of a stomach pump or an emetic. These experiments showed that the tests on curd strength made in vitro and determined with the curd knife reasonably evaluated the nature of the curd in the human stomach (except in the case of mineral-modified milk). The curd particles of breast milk were minute and soft, and were so finely divided that they could not be separated from the accompanying juices with a 20-mesh screen.
On cows' milk, when the curd tension (by laboratory curd-knife technic) was high, the regurgitated specimens of curd were in every case large and leathery. When the readings were low, the curd particles were small and soft. Breast milk registered 0 curd tension, natural cows' milk 50-100 grams, and homogenized milk (processed at 3500 pounds) 15 grams. The patients reported that the milk tasted better (because of the minute division of the milkfat globules) and gave less distress. However, no digestive advantage is reported by some other investigators who worked on samples in vitro and on experimental animals. The latter work is not so impressive as clinical studies, but may be better controlled. Much more fundamental and clinical research is necessary before the value of this processing is substantiated.

Soft curd by sonic vibration. A modification of the homogenizing process for the production of soft-curd milk has been developed by subjecting milk to intense sonic vibration. Electromagnetic oscillators, somewhat similar to those used in submarine communications, are constructed to allow the passage of milk in a thin film between the "anvil" and the vibrating diaphragm. Sonic vibration acts directly on the butterfat of the milk to cause a more complete dispersion. The reduction of curd tension is a function of the number of fat particles, and not of the actual fat concentration. A direct relationship seems to exist between the degree of fat dispersion and the degree of curd-tension reduction. Inasmuch as only a small proportion of the total fat in milk need be finely subdivided to reduce the curd tension, it is possible to produce soft curd by vibration without destroying the cream volume (cream line).

Commercial homogenization. The practice of homogenizing market milk is gradually extending. It is quite general in parts of Canada, and is increasing irregularly in the United States.
Fifteen states have no regulations for the control of homogenized milk; 19 states and the District of Columbia permit its sale if properly labeled; 2 states have taken no action but look upon it with disfavor; and 4 states prohibit its sale. It is a useful practice for the treatment of milk which is to be consumed in restaurants, institutions, or wherever the sale of bulk milk introduces the likelihood that the consumer may be served a portion from which a substantial part of the butterfat has separated. Tracy states that the unpopularity of homogenized milk in the past has been due largely to the emphasis placed on the cream line as a measure of the value of a milk, and to the unfriendly attitude of some regulatory officials who felt that homogenization might encourage fraudulent practices.

About one-third of the milk-route customers of the University of Illinois changed to this milk for the following reasons: it looked and tasted better; no cream adhered to the bottle cap; no mixing was required; it tasted better with breakfast foods; it removed the temptation to abstract cream; it was easier to prepare for infant feeding; it did not allow rising of cream to the top of the glass in the refrigerator; it made better milk drinks; it tasted better to children; it was more easily digested by infants; and it did not churn out on freezing.

Soft curd by base exchange. Hard waters are softened by the zeolite or base-exchange method, whereby the percolation of water through a bed of zeolite (a sodium-aluminum silicate) effects an exchange of sodium and calcium. As applied to milk, sodium from the zeolite replaces soluble calcium from the milk. The milk is first acidified to about 0.3 percent as lactic acid (with a dilute nitric acid solution) and then percolated at 64° F. over a granular column of zeolite. During the process, the pH is adjusted to that of ordinary cows' milk (about 6.50), and the acidity is reduced to about 0.15 percent as lactic acid.
This process is reported 22 to change the taste, appearance, and other qualities very little from those of regular milk. The cream line is practically the same as in pasteurized milk. Bacteria counts are said to be lowered by the filtering effect of the passage of the milk through the zeolite bed. The Hill method cannot be used to measure the curd tension by this process, because the Hill technic introduces about ten times as much soluble calcium into 100 milliliters of milk as is removed by the base-exchange treatment. Moreover, it is considered more desirable to use a method which more closely simulates gastric digestion. Such a method has been developed by Miller.

Hess, Poncher, and Woodward 24 studied the nutritional effects of such a milk on an infant on a metabolism frame. They report that, in spite of the decrease in the percentage content of total calcium and phosphorus, 100 milliliters of such milk per kilogram of body weight kept a normal growing infant in a positive calcium and phosphorus balance during the entire time of feeding.

Soft curd by enzymic action. Milk can be given soft-curd properties within a range of 20 to 30 grams by the addition of pancreatic extract, concentrated in the proportion of 1 part of the powder to 10,000 parts of milk. The milk containing the enzyme is heated at a temperature of 42° C. (108° F.) for 15 minutes, and then is pasteurized in the regular way. The preliminary heating brings about a partial digestion of the curd, and the pasteurization inactivates most of the enzyme. The mineral content, the protein, and the formol titration values remain substantially unchanged.

Standards of quality. The quality of curd is usually determined by the Hill test, or some modification of it. Although natural milks may give a range of readings on the scale from 15 to 200 grams of tension, the average of numerous milk supplies has been found to be about 60-70.
The American Association of Medical Milk Commissions 27 specifies that a soft-curd milk must show a curd tension below 30 grams, determined at least twice at an interval of 1 to 5 days, before it can be claimed to be a soft-curd milk, and that the test must be repeated at monthly intervals thereafter.

Determination of curd tension. Hill's method for determining the characteristics of milk curd is based on the measurement of the degree of toughness of the curd which is coagulated by pepsin in calcium chloride solution. The measurement is the indicated pull, in grams, necessary for a special knife to cut through the coagulated curd. The knife consists of several radial horizontal blades soldered at right angles to an upright slender rod. This knife is placed in a jar containing 100 milliliters of the milk to be tested. A coagulating solution of scale pepsin and calcium chloride is then added; this sets the curd around the knife. The knife is then hooked to a spring balance, and its pull as it cuts upward through the curd is read directly from the dial. Caulfield and Riddell have shown that it is expedient to make each determination in triplicate, and that the temperature of reaction and the time interval between the addition of coagulant and the cutting of the curd must be kept constant.

Miller 23 has modified this method by substituting an acid pepsin solution for the pepsin-calcium chloride solution. The measurement of toughness of curd by this method substantially parallels the digestibility of the milk by animals. See also the method of the U. S. Department of Agriculture, and that of the American Dairy Science Association reported supra by Doan.

Determination of butterfat. Authorities are not in agreement as to the effect of homogenization on the accuracy of the butterfat determination by the Babcock method. Babcock found that in every case the homogenized milk averaged about 0.1 percent lower in fat than the same milk before it was homogenized.
On the other hand, Tracy states that homogenized milk can be tested satisfactorily by the Babcock method if both the acid and the milk are at about 70° F., if the acid is added in small portions, if slightly less acid (1.5 milliliters) is used, and if the solution is shaken well after each addition of acid.

Microbiological examination. Inasmuch as natural soft curd has been associated with mastitis, it is advisable in the interest of sanitation and wholesomeness to examine samples of natural soft-curd milk for the presence of mastitis organisms.

1. S. S. BUCKLEY, Maryland Agr. Exp. Sta. Bul. 184, 1914.
2. M. LADD, Boston Med. and Surg. J., 173, 13 (1915).
3. R. M. WASHBURN and C. H. JONES, Vermont Agr. Exp. Sta. Bul. 195, 1916.
4. R. L. HILL, Utah Agr. Exp. Sta. Bul. 207, 1928; Circular 101, 1933.
5. Council on Foods, J. Am. Med. Assoc., 108, 2040, 2122 (1937).
6. F. J. DOAN and R. C. WELCH, Pennsylvania State College Agr. Exp. Sta. Bul. 312, 1934. See also F. J. DOAN and C. C. FLORA, ibid., 380, 1939.
7. H. C. HANSEN, D. R. THEOPHILUS, F. W. ATKESON, and E. M. GILDOW, J. Dairy Sci., 17, 257 (1934).
8. R. C. WELCH and F. J. DOAN, Milk Plant Monthly, 22 (11), 30 (1933).
9. W. V. HALVERSEN, V. A. CHERRINGTON, and H. C. HANSEN, J. Dairy Sci., 17, 281 (1934).
10. H. L. ELIAS, Am. J. Diseases Children, 44, 296 (1932).
11. D. L. ESPE and J. A. DYE, ibid., 43, 62 (1932).
12. R. FLUCKIGER, U. S. Patent 1,973,145, Sept. 11, 1934.
13. G. M. TROUT, C. P. HALLORAN, and I. GOULD, Mich. Agr. Exp. Sta. Tech. Bul. 145, 1935.
14. C. J. BABCOCK, U. S. Dept. Agr. Tech. Bul. 438, 1934.
15. D. A. CHARLES and H. H. SOMMER, Milk Plant Monthly, 24, 26, 32 (1935).
16. G. E. ANTHONY, The Bulletin (official publication of the Genesee County Medical Society), 9, March 4 (1936).
17. L. A. CHAMBERS, J. Dairy Sci., 19, 29 (1936).
18. Milk Dealer, 25, 36 (1936).
19. R. H. TRACY, Milk Plant Monthly, 24, 28 (1935).
20. U. S. Patent 1,954,769, assigned to M. & R. Dietetic Laboratories, Inc.
21. J. F. LYMAN, E. H. BROWNE, and H. E. OTTING, Ind. Eng. Chem., 25, 1297 (1933). Also see Milk Plant Monthly, January, 1934, p. 37.
22. H. E. OTTING and J. J. QUILLIGAN, Milk Dealer, 23, 36 (1934).
23. D. MILLER, J. Dairy Sci., 18, 259 (1935).
24. J. H. HESS, H. G. PONCHER, and H. WOODWARD, Am. J. Diseases Children, 48, 1058 (1934).
25. V. CONQUEST, A. W. TURNER, and H. J. REYNOLDS, J. Dairy Sci., 21, 361 (1938).
26. R. L. HILL, ibid., 6, 509 (1923).
27. Methods and Standards for the Production of Certified Milk, Am. Assoc. Med. Milk Commissions, New York, 1936.
28. W. J. CAULFIELD and W. H. RIDDELL, J. Dairy Sci., 17, 791 (1934).
29. Chief of Bureau of Dairy Industry, 1938, J. Milk Technol., 2, 48 (1939).
30. Curd Tension Committee, Rept. Annual Meeting Amer. Dairy Sci. Assoc.
31. P. H. TRACY, Milk Dealer, 25, 30, 60 (1936).
Animal bites and scratches, even when they are minor, can become infected and spread bacteria to other parts of the body. Whether the bite is from a family pet or an animal in the wild, scratches and bites can carry disease. Cat scratches, for example, even from a kitten, can carry "cat scratch disease," a bacterial infection. Other animals can transmit rabies and tetanus. Bites that break the skin are even more likely to become infected.

For superficial bites from a familiar household pet that is immunized and in good health:

For deeper bites or puncture wounds from any animal, or for any bite from a strange animal:

Call your doctor or other health care provider for any flu-like symptoms, such as a fever, headache, malaise, decreased appetite, or swollen glands following an animal bite.

Rabies is a viral infection of certain warm-blooded animals caused by a virus in the Rhabdoviridae family. It attacks the nervous system and, once symptoms develop, it is nearly 100 percent fatal in animals if left untreated. In North America, rabies occurs primarily in skunks, raccoons, foxes, coyotes, and bats. In some areas, these wild animals infect domestic cats, dogs, and livestock. In the U.S., cats are more likely than dogs to be rabid. Individual states maintain information about animals that may carry rabies. It is best to check for region-specific information if you are unsure about a specific animal and have been bitten. Travelers to developing countries, where vaccination of domestic animals is not routine, should talk with their health care provider about getting the rabies vaccine before traveling. The rabies virus enters the body through a cut or scratch, or through mucous membranes (such as the lining of the mouth and eyes), and travels to the central nervous system. Once the infection is established in the brain, the virus travels down the nerves from the brain and multiplies in different organs.
The salivary glands are most important in the spread of rabies from one animal to another. When an infected animal bites another animal, the rabies virus is transmitted through the infected animal's saliva. Scratches by claws of rabid animals are also dangerous because these animals lick their claws. The incubation period in humans, from the time of exposure to the onset of illness, can range anywhere from five days to more than a year, although the average is about two months.

The following are the most common symptoms of rabies. However, each individual may experience symptoms differently. Symptoms may include:

Rabies: Stage 1

Rabies: Stage 2

The symptoms of rabies may resemble other conditions or medical problems. Always consult your doctor for a diagnosis. In animals, the direct fluorescent antibody test (dFA) performed on brain tissue is most frequently used to detect rabies. Within a few hours, diagnostic laboratories can determine whether an animal is rabid and provide this information to medical professionals. These results may save a person from undergoing treatment if the animal is not rabid. In humans, a number of tests are necessary to confirm or rule out rabies, as no single test can be used to rule out the disease with certainty. Tests are performed on samples of serum, saliva, and spinal fluid. Skin biopsies may also be taken from the nape of the neck. Unfortunately, there is no known, effective treatment for rabies once symptoms of the disease occur. However, there are effective new vaccines (HDCV, PCEC) that provide immunity to rabies when administered soon after an exposure. The vaccine may also be used for protection before an exposure occurs, for persons such as veterinarians and animal handlers. Being safe around animals, even your own pets, can help reduce the risk of animal bites.
Some general guidelines for avoiding animal bites and rabies include the following:

If you or someone you know is bitten by an animal, remember these facts to report to your health care provider:
This book provides readers with an overview of how Americans have commemorated and remembered the Civil War. The Civil War was one of the most divisive, dramatic, and deadly events in the course of American history. It involved not just northerners and southerners, but whites and blacks, women and men, and the elite and the lower classes. Not surprisingly, the ways in which this conflict is remembered and commemorated vary widely. Most Americans are aware of statues or other outdoor art dedicated to the memory of the Civil War. Indeed, the erection of Civil War monuments permanently changed the landscape of U.S. public parks and cemeteries by the turn of the century. But monuments are only one way that the Civil War is memorialized. This book describes the different ways in which Americans have publicly remembered their Civil War, from the immediate postwar era to the early 21st century. Each chapter covers a specific historical period. Within each chapter, the author highlights important individuals, groups, and social factors, helping readers to understand the process of memory. The author further notes the conflicting tensions between disparate groups as they sought to commemorate "their" war. An epilogue examines the present-day memory of the war and current debates and controversies.
• Presents events related to the commemoration and public memory of the Civil War chronologically, from 1865 to the present
• Illustrated with photographs of monuments, individuals, and events related to commemoration activities, as well as selected political cartoons related to Civil War memory from popular publications
• Bibliography includes both primary and secondary sources on the subject of Civil War memory
• Provides readers with a broad overview of an extremely popular topic in Civil War history in an easy-to-read, narrative form
• Summarizes the most recent scholarship on the subject into one volume
• Provides both in-depth critical analyses and clear summaries of the key themes
• The role of memory in shaping historical consciousness is a timely scholarly topic