Middletown Regional Hospital Offers Sun-Safety Tips for Summer
Memorial Day — the unofficial beginning of summer — is fast approaching and, with it, the time is here for fun in the sun! But while the sun's rays may keep you warm and lift your spirits, exposure to them can also lead to skin cancer. "Damage to skin, the body's largest organ, is cumulative," says Jennifer Ridge, M.D., board-certified dermatologist and member of Middletown Regional Hospital's medical staff. "Years of harmful ultraviolet radiation from exposure to the sun increase the risk for skin cancer later in life. But the good news is, caught early, many skin cancers are highly treatable and largely preventable." Melanoma is the deadliest form of skin cancer because it can spread to internal organs. But it, too, is highly treatable when detected early. Here are the ABCDs of melanoma to help catch it early:
Asymmetry: One half of a mole or skin spot is different from the other half, and it grows in size.
Border: Edges are irregular, ragged, blurred or notched.
Color: Pigmentation varies — brown and black with red, white, or blue creating a mottled appearance.
Diameter: Width is greater than six millimeters — about the size of a pencil eraser.
Experts at Middletown Regional Hospital suggest everyone take precautions when spending time outdoors this summer. Regardless of age, it's always important to protect the skin from sun damage. Here are some useful tips from the American Academy of Dermatology:
Sunscreen is your friend. Use at least SPF 15 on exposed areas of the body. Infants under six months of age should be kept out of the sun entirely. Apply sunscreen 30 minutes before going outdoors and reapply it every two hours, or more frequently if swimming or sweating. People with fair complexions should consider zinc oxide on sensitive areas like the nose, lips, tops of ears or feet.
Avoid tanning oils or baby oil. With these products, the skin is more susceptible to burning.
Overcast outside? Don't be fooled. Damaging UV rays filter through clouds and haze, causing intense burning.
Consider clothing. A wide-brimmed hat is best to shade the face. Sunglasses are critical to protect eyes from harmful glare. Tightly woven fabric, like unbleached cotton or twill, shields the skin from UV rays. Wet T-shirts do not protect from the sun.
Sunlamps and tanning booths are unsafe long-term. These contribute to skin cancer as well.
And if you see changes in your skin that concern you, see your medical professional at once.
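The sunscreen timing in the tips above (a first coat 30 minutes before heading out, a fresh coat every two hours) can be turned into a simple schedule. A minimal Python sketch; the function name, the 10 a.m. start, and the six-hour outing are illustrative assumptions, not details from the article:

```python
from datetime import datetime, timedelta

def sunscreen_schedule(outdoor_start: datetime, hours_outside: float,
                       reapply_every_h: float = 2.0) -> list:
    """Application times per the AAD tips quoted above: first coat 30
    minutes before going out, then a fresh coat every two hours."""
    times = [outdoor_start - timedelta(minutes=30)]  # pre-application
    t = outdoor_start + timedelta(hours=reapply_every_h)
    end = outdoor_start + timedelta(hours=hours_outside)
    while t < end:
        times.append(t)
        t += timedelta(hours=reapply_every_h)
    return times

start = datetime(2024, 6, 1, 10, 0)  # hypothetical 10 a.m. outing
for t in sunscreen_schedule(start, 6):
    print(t.strftime("%H:%M"))  # 09:30, 12:00, 14:00
```

Swimming or sweating would shorten `reapply_every_h`, per the tips.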
Proper Pet Hygiene
Poor hygiene in dogs can result in severe consequences if not addressed appropriately
By College of Veterinary Medicine & Biomedical Sciences, Texas A&M University
Like humans, pets can experience skin conditions that may cause redness, itchiness, odor, and even wounds. Fortunately, many skin conditions can be prevented with routine bathing and grooming. Dr. Alison Diesel, clinical assistant professor at the Texas A&M College of Veterinary Medicine & Biomedical Sciences, explained the importance of pet hygiene for both dogs and cats. On one hand, most cats do not require a bath to maintain healthy skin and coat, Diesel said. However, she added that older or obese cats may benefit from bathing to help keep their coat and skin healthy. On the other hand, dogs require regular bathing and grooming, with a frequency that depends on their skin and coat health. “Dogs without dermatological abnormalities benefit from a bath a couple of times a year or when they get dirty,” Diesel said. “However, dogs with skin problems often require more frequent bathing and sometimes benefit from specific kinds of medicated shampoos. If your dog has a skin problem, you should discuss bathing recommendations with your veterinarian.” Though your veterinarian can examine your pet’s skin during a routine check-up, skin abnormalities can sometimes develop between appointments. If you notice any abnormalities while bathing or grooming your pet, you should have your pet examined by a veterinarian. Noticeable changes might include increased odor and dander, and may come with discomfort or itching in the animal. In addition, animals with long hair coats are prone to matting, which can irritate the skin and result in wounds when the mats are removed or clipped out, Diesel said. Furthermore, pets with long hair coats are at higher risk for fly strike and for maggots hidden within the mats and under the hair coat.
These creatures can further damage the skin, causing wounds, infections, sepsis and potentially death, Diesel explained. More severe or persistent skin conditions may benefit from examination by a boarded specialist in veterinary dermatology. Another important part of proper pet hygiene is keeping your pet’s ears clean. Most pet owners regularly bathe their pet to maintain their coat, but clean ears are just as important and should be part of your pet’s normal hygiene routine. When cleaning your pet’s ears, Diesel recommended saturating a cotton ball with a veterinarian-approved ear cleanser. “Gently place this in the dog or cat’s ear canal and massage to help deliver the solution along the length of the canal,” Diesel said. “An additional cotton ball can be used to wipe out excess fluid after the animal shakes their head. Q-tips should never be used to clean a dog or cat’s ears as this can lead to potential damage of the ear canal.” Diesel added that ear problems often manifest with scratching or rubbing at the ears, redness, discharge, and a foul odor. Some animals, such as dogs that swim a lot, are more prone to ear problems than others and should be monitored more closely. Additionally, Diesel said pet owners should discuss recommendations for appropriate ear cleaning with their veterinarian. Poor hygiene in dogs and cats can result in severe consequences if not addressed appropriately, Diesel said. Though it may seem like Fido hates his routine bath, proper hygiene for both dogs and cats is necessary to keep your furry friend healthy. If you have any concerns about your pet’s grooming and bathing routine, consult your veterinarian.
Enriched skimmed milk may curb frequency of gout flare-ups
A daily dose of skimmed milk, enriched with two components found in dairy products, may help to curb the frequency of painful gout flare-ups, indicates research published online in the Annals of the Rheumatic Diseases. Previous long-term research has shown that the risk of gout is greater among those whose diet is low in dairy products. And experimental studies indicate that certain components of dairy products, particularly glycomacropeptide (GMP) and G600 milk fat extract (G600), seem to dampen down the inflammatory response to gout crystals. The authors studied the frequency of gout flare-ups in 120 patients with the condition over a period of three months. All the patients had experienced at least two flare-ups in the preceding four months. The patients were divided into three treatment groups: lactose powder; skimmed milk powder; or skimmed milk powder enriched with GMP and G600. Each powder was mixed in 250 ml of water as a vanilla-flavoured shake and drunk daily. The patients attended a rheumatology clinic monthly to check on their requirement for medication and their symptoms, which they recorded using a daily flare diary and a validated pain scale. There were no significant differences among the three groups at the start of the study in terms of frequency of gout flare-ups, pain, or drugs used to treat the condition. In all, 102 patients completed the three-month study. The results showed that those on the enriched skimmed milk diet had a significantly greater reduction in gout flare-ups than the other two groups. They also had greater improvements in pain and in the amount of uric acid in their urine. This was matched by a trend towards a reduction in the number of tender joints. The enriched skimmed milk diet did not promote weight gain or increase levels of potentially harmful blood fats.
"This is the first reported randomised controlled trial of dietary intervention in gout management, and suggests that daily intake of skimmed milk powder enriched with GMP and G600 may reduce the frequency of gout flares," conclude the authors.
Journal reference: Annals of the Rheumatic Diseases. Provided by: British Medical Journal
UNL Institute of Agriculture and Natural Resources
IANR News
Nebraska researcher using sorghum for textile dye
Lincoln, Neb. — Nebraska researcher Yiqi Yang is using the byproducts of sorghum as a colorant for textiles. The Charles Bessey Professor of biological systems engineering and of textiles, merchandising and fashion design is using wool to start his research, but hopes that in the future sorghum can be used on other textiles and throughout many industries. Sorghum is a cereal grain used for food in some countries, but in the United States it is commonly used as livestock feed and turned into ethanol. Sorghum is also a major source for making liquor. The grain tends to be drought resistant, making it a popular crop to grow in dry climates. After the sorghum starch is used for industrial applications, the coproducts and byproducts that remain are called distillers grains. Distillers grains from corn digest more easily than those from sorghum, so Yang’s research looks to add a better value application for the sorghum industry in Nebraska. Yang and a group of Ph.D. students are currently working to see if the highly cross-linked proteins can have industrial applications, such as food-packaging materials or fibers for textiles. A unique part of the research involves the sorghum husks. The husks contain a dark natural color that can be used as a colorant. Some have tried to use it as a food additive; however, Yang is pursuing larger-scale applications. “If one could use it as a colorant for textiles, we’re talking about a huge demand,” Yang said. “This research is helping more than just the textile industry.
By finding additional uses for co-products and byproducts we’re adding value for farmers.” The team started out by dyeing wool because wool can carry positive charges, which creates better dye sorption since natural colorants generally carry negative charges, and because the value addition is high. Yang believes that silks and nylons also have the potential to dye well. While the hope is to eventually use the natural colorant from the sorghum husks to dye all types of textiles, cotton has been put on the back burner. Yang explains that there isn’t enough of a value increase for cotton to encourage consumers to purchase a naturally dyed textile, because cotton is already a cheap material and because natural cellulosics do not carry positive charges. “You add more value to these highly valuable materials than to cotton,” says Yang. “If you work on the more valuable materials, the value addition is higher. Later, if the technology matures and the cost decreases, then of course we can work with other materials.” Once available, this natural colorant could reach a large market of consumers and brands. Yang says that there is a place for this natural coproduct everywhere colorants are used, including plastics, food additives, packaging materials, cosmetics and even hair dyes. The project started with textiles, and the team has found some unique properties that make the product even more useful in other industries. For example, because the coproducts of sorghum are fluorescent under ultraviolet light, they could be used as a tracing material for biomedical applications, such as tracking the movement of particles in the body. Yang and his team are collaborating with researchers from Jiangnan University in China on this project.
Yiqi Yang, Charles Bessey Professor, Biological Systems Engineering and Textiles, Merchandising & Fashion Design (yyang2@unl.edu)
Writer: Gina Incontro, IANR Media
2014 was the hottest year ever recorded, Nasa and NOAA confirm
By Hannah Osborne
An art installation called Desert of Cantareira by Brazilian artist and activist Mundano is seen at Atibainha dam, part of the Cantareira reservoir, during a drought in Nazare Paulista, Sao Paulo (Nacho Doce/Reuters)
Last year was the hottest on record, with surface temperatures reaching their highest levels since 1880, when records began. Scientists at Nasa's Goddard Institute for Space Studies (GISS) and the National Oceanic and Atmospheric Administration (NOAA) both found 2014 to have had higher temperatures than 2010 – the previous record holder. Experts at Nasa's Earth Observatory say there are a number of reasons why 2014 was so warm despite the lack of El Niño, which can push up temperatures. "This is the latest in a series of warm years in a series of warm decades," said GISS director Gavin Schmidt. "While the ranking of individual years can be affected by chaotic weather patterns, the long-term trends are attributable to drivers of climate change that right now are dominated by human emissions of greenhouse gases." Temperatures are expected to continue along the planet's long-term warming trend, with yearly fluctuations from phenomena such as El Niño. "These phenomena warm or cool the tropical Pacific and are thought to have played a role in the flattening of the long-term warming trend over the past 15 years. However, 2014's record warmth occurred during an El Niño-neutral year," Nasa said in a statement. Richard Spinrad, NOAA chief scientist, said the US government agencies will continue to monitor temperature changes to best prepare for future climates. The 10 warmest years on record, except for 1998, have now taken place since 2000. "As we monitor changes in our climate, demand for the environmental intelligence NOAA provides is only growing.
It's critical that we continue to work with our partners, like Nasa, to observe these changes and to provide the information communities need to build resiliency."
Certain patients with prostate cancer may benefit from Provenge clinical trial
Select patients with advanced prostate cancer may benefit from a Georgia Health Sciences University Cancer Center clinical trial that aims to improve on the survival benefit of the FDA-approved prostate cancer drug Provenge. The trial, led by GHSU Cancer Center Director Samir N. Khleif, is the first in the country to investigate prostate cancer treatment combining Provenge with two other cancer-fighting drugs, CT-011 and cyclophosphamide. As the first FDA-approved immunotherapy treatment for prostate cancer, Provenge has been found to extend life expectancy of certain men with advanced prostate cancer by nearly 20 percent. "Although the increased overall survival seen with Provenge treatment is a welcome advance in the treatment of prostate cancer, the goal of cancer therapy must be the eradication of disease," said Khleif. "Therefore, improvements can be made, and this clinical trial is intended to improve the current standard of care." Provenge works by training the body's immune system to find and attack prostate cancer cells. Khleif's trial hopes to boost Provenge's effectiveness by combining it with two other drugs: CT-011, a type of antibody that reverses immune suppression caused by cancer, and cyclophosphamide, which in a low dose enhances the effect of Provenge and CT-011. Both have been safely used alone or in combination with other cancer therapies, but never for prostate cancer. Preclinical animal studies in Khleif's lab found that the combination of Provenge with these two other drugs led to a significant increase in survival and complete tumor regression in more than 50 percent of mice.
Based on these results, Dendreon Corporation, the makers of Provenge, and Khleif are collaborating on this first in human trial. Khleif joined the GHSU Cancer Center as its Director after more than 22 years in the Cancer Vaccine Section at the National Cancer Institute. His lab focuses on research into vaccines to help the immune system target and eradicate cancers. Prostate cancer is the most common cancer and the second leading cause of cancer deaths among men in most Western countries. Source: Georgia Health Sciences University
Breaking Breastfeeding Barriers
Women have the power to save the lives of over 820,000 young children each year and to help millions more thrive and reach their full intellectual potential. They are physically equipped to remove billions of dollars from global health care costs and turn it into billions in economic prosperity. This is because women possess the ability to make milk that is the most nutritionally and immunologically potent food for infants and toddlers. Food that can fuel brain development like nothing else. Food that can protect against disease and illness and save as many lives as some of the world’s best vaccines. Food that can set a child upon a path toward better health and a more prosperous future. Yet while women have the ability to improve the health and vitality of their children, their communities, and the world at large, they face innumerable barriers in doing so. A new scorecard published by the World Health Organization (WHO) this week shows just how big these barriers are. It shows that no country in the world is adequately supporting women to breastfeed in line with the global recommendations. This worldwide failure to support women to breastfeed successfully is having serious consequences. A new report by UNICEF, WHO, 1,000 Days and Alive & Thrive, “Nurturing the Health and Wealth of Nations: The Investment Case for Breastfeeding,” indicates that countries are losing billions of dollars each year because they fail to invest in programs and policies that help women breastfeed. For example, in China, a country where only 1 in 5 babies are breastfed in accordance with global recommendations, it is estimated that the economy loses $66 billion per year due to low breastfeeding rates. These staggering economic losses are driven by costs associated with lower cognitive capacity of Chinese children who are not optimally breastfed.
This is because breastfeeding has been shown to play a critical role in fostering a young child’s brain development. Numerous studies have shown that shorter durations of breastfeeding were associated with at least a 2.6-point loss in children’s IQ scores. Moreover, using cutting-edge neuroimaging techniques to measure babies’ brain development, researchers in the U.S. found that babies who had been breastfed exclusively for at least three months had enhanced development in key parts of the brain by age 2 compared to children who were exclusively formula-fed or who were fed a combination of formula and breastmilk. The research showed that the extra growth was most significant in parts of the brain associated with language, emotional function and cognition. But in addition to the cognitive losses in children, poor breastfeeding rates cost countries in two other critical ways: first, in higher health care costs to treat diseases that could have been prevented with better breastfeeding, and second, in the potential future income lost to maternal and child deaths attributable to low rates of breastfeeding. For example, in the U.S., researchers have estimated that over $17 billion could be saved each year in medical costs and in the costs associated with women and children dying prematurely if 90 percent of babies were breastfed exclusively for the first six months of life.
Clearing the path for breastfeeding success
The stunning loss of human potential and the enormous sums of money lost each year caused by the collective failure to support women to breastfeed is not inevitable. In fact, we know that rapid progress in increasing the number of children breastfed—and by extension saving lives, improving health and building prosperity—is possible with investment in the right policies and programs. We have a global target in place to take the proportion of babies exclusively breastfed from 40 percent to 50 percent over the next eight years.
Getting to that target will take an investment of $5.7 billion by 2025. This investment translates into roughly $4.70 per newborn and would finance improved access to skilled breastfeeding counseling, better practices in maternity facilities, national breastfeeding education efforts, the development of paid family leave policies, and the implementation of the International Code of Marketing of Breastmilk Substitutes to restrict the unethical promotion of infant and toddler formula. The payoffs for the investment in these programs and policies are massive. For about $5 per baby, we could save 520,000 lives by simply ensuring that half of the world’s children are breastfed for the first six months of life and add almost $300 billion to the economies of low- and middle-income countries. The ROI for breastfeeding is unbeatable. As the Investment Case for Breastfeeding report highlights, every $1 invested in breastfeeding programs yields $35 in economic benefits. It is clearly in a country’s economic self-interest to invest in breaking down the barriers so that women can breastfeed and every child can get the strongest start in life. Nonetheless, it will take a collective effort to clear the path for women to breastfeed and unlock massive gains in health, cognitive potential and economic productivity. That is why I am excited that my organization, 1,000 Days, is part of the new Global Breastfeeding Collective, a partnership of 20 prominent international agencies and organizations, led by UNICEF and WHO, committed to increasing investment in the policies and actions that help women to successfully breastfeed. Together we can help unleash a woman’s power to breastfeed and her power to transform the world.
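As a rough illustration of how the quoted figures relate, here is a back-of-the-envelope sketch in Python. Only the numbers stated above are used; the derived totals are indicative only, since the report's $300 billion estimate includes gains beyond the simple $35-per-dollar multiple:

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the text.
investment_total = 5.7e9   # dollars needed by 2025
cost_per_newborn = 4.70    # dollars per newborn, as stated
lives_saved = 520_000      # lives saved at the 50% exclusive-breastfeeding target
roi_per_dollar = 35        # $35 in economic benefit per $1 invested

# Newborns the stated budget would cover at the stated per-baby cost
newborns_reached = investment_total / cost_per_newborn

# Economic benefit implied by the stated ROI multiple alone
implied_benefit = investment_total * roi_per_dollar

# Dollars spent per life saved under the stated totals
cost_per_life = investment_total / lives_saved

print(f"Newborns reached:    {newborns_reached:,.0f}")
print(f"Implied benefit:     ${implied_benefit / 1e9:.1f} billion")
print(f"Cost per life saved: ${cost_per_life:,.0f}")
```

At roughly $11,000 per life saved, the stated figures put breastfeeding support among the cheaper child-survival interventions, which is the report's central point.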
Organism Can Cause Blindness In Extended-Wear Contact Lens Users If Left Untreated
TERRE HAUTE, Ind. -- Scientists at Indiana State University are working with a common bacterium that can quickly infect and cause blindness in extended-wear contact lens users. Worse than that, many popular cleaning solutions aren't strong enough to fight it. More than 33 million people in the United States wear contact lenses, according to 1998 figures from the Vision Council of America. Of those, more than 83 percent use soft (including extended-wear) contacts. It is this population of contact lens wearers that is at risk of coming in contact with the organism Pseudomonas aeruginosa. This common bacterium can cause blindness within 36 hours in cases where the individual already has corneal abrasions or other eye injuries. "The organism has been observed to initiate an infection of the cornea rather quickly," said Thomas Tsai, an optician and a senior life sciences and chemistry major at ISU. "It can double in number in less than an hour (from 1,000 to 2,000 cells), cause vision impairment within a few hours and total blindness within 36 hours when left untreated." Tsai and Dr. Kathleen Dannelly, assistant professor of life sciences, are currently testing the bacteria on a model contact lens and have noticed the organism's ability to burrow into the wall of a lens, making it harder to kill with recommended cleaning solutions. As a result, they are looking into the possibility of a vaccine to prevent infection from ever occurring. "Eye infections due to this strain of Pseudomonas were very rare before contact lenses became popular," Tsai explained.
"The use of extended wear lenses, in particular, has caused it to become even more common." The scientists also recently discovered that different lens solutions adhere to glass and to the different types of plastics commonly used in the manufacturing of contact lens cases. This, too, can interfere with a solution's ability to function at optimum levels, according to Tsai. Tsai and Dannelly have been studying the effects of five contact lens solutions against two strains of Pseudomonas aeruginosa. The testing included solutions from leading manufacturers: Bausch and Lomb's new ReNu formula, Alcon's Opti-Free Enhanced, Original Opti-Free and Opti-One, and Allergan's Complete. Current Food and Drug Administration (FDA) guidelines require that contact lens solutions kill bacteria three times over in order to be deemed effective. "The FDA uses a common and weaker form of this bacteria (found in soil) to test their solutions," explained Tsai. "We are using samples actually taken from patients who have gone blind as a result of this bacteria." Dannelly and Tsai first tested the solutions' effects on the Pseudomonas sample in a dish, without applying the organism to an actual contact lens. The ReNu formula (Bausch and Lomb) was found not to have much effect on the Pseudomonas even after eight hours in the solution; however, Alcon's Opti-Free Enhanced significantly reduced the number of organisms on contact. "I would like to see doctors and opticians explain the cleaning regimen better. Patients also should clean their lenses thoroughly and contact their doctor as soon as they notice any severe irritation or redness. But, most importantly, people should never sleep with contact lenses in," Tsai said. Tsai said he became interested in studying Pseudomonas infections associated with contact lens use while working as an optician at Wal-Mart.
Dannelly, Tsai's co-investigator, was already studying Pseudomonas and its effects on the lacrimal gland (which produces tears), so it seemed a perfect fit to join the two areas of interest, Tsai explained. Tsai and Dannelly are working to secure grants through Wal-Mart and the National Institutes of Health to support their research.

Story Source: Based on materials provided by Indiana State University. Indiana State University. "Organism Can Cause Blindness In Extended-Wear Contact Lens Users If Left Untreated." ScienceDaily, 7 March 2000. www.sciencedaily.com/releases/2000/03/000307090602.htm
Changing forests

Climate changes will influence regional forests, but forests are slow to change and the changes are more difficult to monitor.

Posted on July 31, 2013 by Bill Cook, Michigan State University Extension

When Bob Dylan released his The Times They Are a-Changin’ album in 1964, he probably did not know just how fast things were picking up speed. Most humans, so the social psychologists seem to say, are resistant to change. And so it goes, perhaps, with the perception of climate change. With all the changes occurring around us, many of them with immediate and tangible influence, it is easy to set aside something like climate change for a day that our schedules will not likely see. However, science has not set aside measuring and analyzing the changes that our planet is undergoing. And researchers are getting better at it as tools and technologies improve. Ignoring the rhetoric of politics, conspiracies and hidden agendas, there exist discrete trends that, if they continue, will significantly influence the not-so-distant future. Which trends? The last 50 to 100 years show varying degrees of evidence for trends such as:

- Rising temperatures
- More intense and frequent severe storms
- Longer growing seasons
- Altered rainfall patterns and summer droughts
- Shorter periods of ice cover and earlier snowmelt
- River flow changes
- Changing wildfire patterns
- Rising sea levels
- More rapid glacial retreats
- Thawing permafrost

The data sets are convincing, especially when you look at the entire time scale. The world climate is strikingly complex, however, and these trends vary around the globe and among regions. For resource managers and landowners, the more important consideration might have to do with longer-term effects on our forest land. In the universe of responses to climate change, there are the schools of "mitigation" and "adaptation". These schools are not mutually exclusive or contradictory. In the forest realm, they are quite complementary.
Mitigation consists of measures that slow the change. Science has demonstrated that forests play a huge role in the carbon cycle, and carbon dioxide is one of the major gases that affect climate. Keeping forests as forests is a key element. Managing forests for increased vigor, and storing carbon in wood products, is another key element. Adaptation involves methods and techniques for adjusting to the new environmental conditions. Forests will respond to changes in precipitation, temperature cycles, and other ecological drivers. However, forests are slow to change, and defining the new pathways can be difficult. Forests are also very dynamic systems, meaning their responses may hold some surprises for forest researchers, managers and owners. Tweaking forest management to account for climate change often involves practices that are good to employ for several other beneficial, and more traditional, outcomes. A well-managed forest can increase biological diversity. More diverse forests may be more resistant to changes. Well-managed forests are less vulnerable to many pests and pathogens. A well-managed forest also produces higher quality timber (more money) and more ecological services (habitat, water quality, etc.). In our region, one of the leaders in forest adaptation science is the U.S. Forest Service Northern Institute of Applied Climate Science, located in Houghton. These scientists have been working with several partners across the Lake States to identify forest vulnerabilities, likelihood of change, and possible management strategies. Of course, they are also working in a cloud of statistical uncertainty. The uncertainty is not so much whether there will be change but, rather, what the nature of those changes will be. Should forest owners be managing their forests in an entirely different direction for a changing climate? Probably not, unless they are currently doing nothing at all.
The most important recommendation, the one for which Michigan State University Extension would advocate, would be to manage for a healthier and more vigorous forest, and keep in mind that over the course of the life of a particular forest, the environmental conditions are likely to change. Our children and grandchildren will inherit our choices.
Background: Data-driven business models

We live in an era of big data, often characterized in terms of the 3Vs: volume, velocity and variety. Three simple examples illustrate the phenomenon. Volume: Tesco has data on the shopping habits of 15 million customers going back 20 years. Velocity: Twitter receives around 12 terabytes of tweets every day. Variety: over 200 million photos are uploaded to Facebook each and every day. These 3Vs neatly encapsulate the reasons why we are experiencing a data explosion, an explosion that is resulting in interesting developments in consumer markets. Increasingly, firms are using the data they can access to develop new insights about their customers and their behaviours. Supermarkets analyse spending patterns, trying to work out which bundles of products consumers tend to buy. Entertainment outlets use data on who is visiting their parks to plan resource allocation and ensure that VIPs get preferential treatment. Beyond consumer markets, big data is increasingly making its presence felt in the business-to-business and business-to-government sectors (McKinsey 2011). Work on smart cities, including using Twitter posts to monitor the state of urban infrastructure, is becoming commonplace. Some organisations are even innovating their whole business models, creating new services through the application of big data (McKinsey). Vestas, a wind turbine manufacturer, for example, has spent 12 years buying data on global wind flow patterns. Vestas uses these data to model how wind flows around the world and now advises its customers on where to locate the wind turbines they purchase to ensure the most efficient energy production over the turbine’s life.
In essence, they are using big data to create new sources of value and competitive advantage. Recent developments in European regulation mandate the deployment of smart metering infrastructures in electric grids (EC 2012), in which big data technologies will play an important role as enablers. As data-driven strategies take hold, they will become an increasingly important point of competitive differentiation. According to McAfee (2012), companies that inject big data and analytics into their operations show productivity rates and profitability that are 5% to 6% higher than those of their peers. A recent report prepared by the World Economic Forum (WEF 2012) identifies uses of big data for economic development purposes (e.g. health care, microfinance, education, agriculture). The same report emphasizes the importance of developing adequate business models providing appropriate incentives for private-sector actors to share and use data for the benefit of society. This call focuses on two main questions: (1) how big data influences business and economic models, and (2) what kind of business and economic models are required to articulate emerging data ecosystems, as depicted in the following figure. With regard to the first question, moving beyond the use of big data in consumer markets, we are interested in commissioning research that explores the role big data is playing today, and is likely to play in the future, in enabling new economic and business models (LRP 2010). With regard to the second question, we are interested in commissioning research that explores the potential application of current business models implemented by internet companies (e.g. Google search, Apple App Store, Twitter) in the realm of these data ecosystems.
Themes for this call: Despite the widespread interest in the phenomenon of big data, there is still relatively little discussion of the role big data is playing, and will play, in enabling economic and business model innovation in the Digital Economy. This theme is especially interested in the development of research concerning:

- The role of big data in enabling economic and business model innovation.
- The constraints and barriers to exploitation of the value created by the use of big data in economic and business model innovation.
- The future potential of big data in economic and business model innovation, especially given the increasing shift to social and unstructured data.

Some interim outputs from this theme:

- NEW! Big Data for Big Business? A Taxonomy of Data-driven Business Models used by Start-up Firms, by Philipp Max Hartmann, Dr Mohamed Zaki, Niels Feldmann and Prof Andy Neely, Cambridge Service Alliance, University of Cambridge
- Data-driven economic models: challenges and opportunities of big data, by Monica Bulger, Ralph Schroeder and Greg Taylor, Oxford Internet Institute, University of Oxford
- Capturing Value from Big Data through Data-Driven Business Models: Patterns from the Start-up World, by Philipp Hartmann, Dr Mohamed Zaki and Prof Duncan McFarlane, Cambridge Service Alliance, University of Cambridge
- NEW! Introduction to A Taxonomy of Data-driven Business Models Used by Start-ups, by Dr Mohamed Zaki, Cambridge Service Alliance, University of Cambridge
- Big Data Science Blog, from the Creative Industries Knowledge Transfer Network, by Professor Richard Vidgen, Professor of Systems Thinking in the Hull University Business School
Seemingly minor wastes pollute US waterways

By Scott Armstrong, Staff writer of The Christian Science Monitor

Old tires. Backyard pesticides. Lawn fertilizers. Oily parking lots. Innocuous though these may seem, they lie behind one of the quietly emerging environmental issues of the 1980s: urban runoff -- the dirt, bacteria, toxic metals, and other pollutants washed off city streets by rainstorms into storm drains and flushed into the nation's rivers, lakes, and harbors. For the past several decades, the nation's environmental laws have been aimed mainly at curbing traditional sources of pollution, such as industrial wastes and raw sewage. With many of these efforts under way, concern is mounting about so-called "nonpoint" pollution sources, of which urban runoff is a major culprit. Nonpoint pollution enters the environment from diffuse sources. It includes everything from the runoff of farm pesticides to residues emanating from mines and city storm drains. A recent study by the Association of State and Interstate Water Pollution Control Administrators found that 41 percent of the rivers it surveyed were impaired or threatened by nonpoint-source pollution. More than half of the lakes examined were similarly imperiled. While only some of this was because of urban runoff, its contribution to the nation's pollution problems is stirring increasing concern: In Huntington Harbor, Calif., a prosperous marina community south of Los Angeles, state officials recently found the highest levels of toxic manganese in mussels recorded anywhere in the state. The primary source of the metal -- as well as high levels of lead and zinc -- was traced to storm-drain runoff from inland Orange County. Water-quality officials with the Tennessee Valley Authority are setting up catchment basins and taking other steps to stop polluted rainwater from Knoxville flowing into Fort Loudoun Lake, a key reservoir in that region.
The action was taken after recent surveys showed urban runoff was dumping unusual amounts of dirt, bacteria, and toxic metals into the 14,000-acre lake -- in some cases, befouling waters more than local sewage does. Federal studies have shown that urban runoff is responsible for 19 percent of the toxic lead flowing into the Chesapeake Bay each year. San Francisco Bay annually takes in the equivalent of a small oil spill (9.8 million pounds of grease and oil). "Storm water is not a sexy issue, by any means," says Joan Becker, a water-resource specialist with the Natural Resources Defense Council in Washington. "But we think it is very significant." To varying degrees, city and community officials and the Environmental Protection Agency (EPA) agree. But a brouhaha is developing over how to cope with it. Prodded by environmental groups, the EPA is moving toward enforcing, for the first time, comprehensive regulations governing urban runoff. But city and town officials say the resulting costs could bankrupt local treasuries. Of concern is a set of regulations issued in August that would require communities to apply for permits for municipal storm drains, similar to what they now do for sewage systems. The rules would take effect starting in December 1987. EPA officials say the aim is to plumb the extent of the problem and start controlling it. But local officials opposed to the regulations -- including the National League of Cities and the US Conference of Mayors -- question how such a diffuse source of pollution can be easily curbed. Critics point out that it may cost $8,500 just to meet each permit application. With more than 1 million municipal storm sewers dotting urban America, that adds up to $8.5 billion. This says nothing of the cost of cleaning up sites, should a problem be found.
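The cost dispute reduces to simple multiplication. A back-of-the-envelope comparison, using only the figures quoted in the article (the critics' $8,500 per application against the EPA's $1,000-per-permit estimate), shows why the two sides are so far apart:

```python
# Back-of-the-envelope check of the permit-cost figures quoted in the article.
storm_sewers = 1_000_000      # "more than 1 million municipal storm sewers"
critics_per_permit = 8_500    # critics' estimated cost per application, dollars
epa_per_permit = 1_000        # EPA's estimated cost per permit, dollars

critics_total = critics_per_permit * storm_sewers
epa_total = epa_per_permit * storm_sewers
print(f"Critics' estimate: ${critics_total / 1e9:.1f} billion")
print(f"EPA's estimate:    ${epa_total / 1e9:.1f} billion")
```

Even at the EPA's lower figure, one permit per storm sewer would total about $1 billion, which is why the agency also points communities toward group applications.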
"We're having trouble building treatment plants to deal with sewage," says Barbara Harsha, a senior policy analyst with the National League of Cities. EPA officials contend that the new regulations need not be so alarmingly expensive or difficult to meet. They estimate permits should run about $1,000 apiece. They also point out that municipalities could file group applications, rather than licensing each individual storm drain. Comforting though the EPA is trying to be, the agency itself has not always been eager to regulate the urban runoff problem. When the federal Clean Water Act was passed in 1972, it specified that permits would be required for all "point" source discharges. The EPA exempted storm drains. Shortly thereafter, environmentalists sued the EPA over the exemption, but lost. Ever since, regulations have existed requiring municipalities to get permits for storm drains, but the EPA has been known to be lax about enforcing them. Now, under renewed pressure from environmentalists, it is moving cautiously toward putting some teeth behind the regulations. "Localized urban runoff can be a serious problem that ought to be dealt with," says William Diamond, head of EPA's water enforcement and permit branch. "But it shouldn't reorient everybody's priorities." Environmental groups are pushing to have the regulations enforced sooner, while cities are lobbying to get them watered down and delayed beyond the 1987 deadline. A congressional conference committee is expected to take up the issue within the next two months. Whatever the outcome in Congress, everyone agrees the problem will have to be tackled at some point. It will not be an easy task. First there's the difficulty of pinpointing where the pollution is coming from and determining who's responsible. Then there's the job of cleaning it up.
"Can you conceive of trying to control the quality of miles and miles of storm drains in Orange County alone?" asks Joanne Schneider, an environmental specialist with the regional water-quality control board in the area. Some cities, however, are already working to curb the problem. In the San Francisco Bay area, for instance, local water officials are experimenting with using wetlands to absorb pollutants. Other cities are exploring everything from expanded street sweeping, to keep dirt and grime off roadways, to using special catchment basins to handle the runoff. Even so, urban runoff appears destined to be one of the "sleeper" environmental issues of the 1980s.
Therapeutic agent reduces age-related sleep problems in fruit flies

April 1, 2014

Image: The fruit fly Drosophila melanogaster has a life expectancy of approximately 8 weeks and belongs to the model organisms studied by scientists at the Max Planck Institute for Biology of Ageing in their quest to understand ageing in living beings. Credit: MPI f. Biology of Ageing / W. Weiss

Elderly flies do not sleep well – they frequently wake up during the night and wander around restlessly. The same is true of humans. For researchers at the Max Planck Institute for Biology of Ageing in Cologne, the sleeplessness experienced by the fruit fly Drosophila is therefore a model case for human sleeping behaviour. The scientists have now discovered molecules in the flies' cells that affect how the animals sleep in old age: if insulin/IGF signalling is active, the quality of the animals' sleep is reduced and they wake up more often. Using a therapeutic agent, the researchers managed to improve the flies' sleep again. The scientists suspect that the causes of the sleep problems experienced by older flies and humans are similar. It is also possible that sleep problems encountered by humans may not necessarily be an inevitable side effect of ageing, and may even be reversible. Flies and humans sleep in much the same way. Just like us, flies sleep during the night and are active during the day. And the quality of sleep deteriorates as both species age: the individuals nap more frequently during the day and sleep for shorter periods at night. Gerontologists are very familiar with the insulin/IGF (insulin-like growth factor) signalling pathway. It is a metabolic pathway that controls the cell's response to nutritional deficiency and also affects life expectancy: fruit flies live longer if the signalling pathway is less active. The signalling pathway also plays a role in ageing humans.
Researchers at the Cologne-based Max Planck Institute have now discovered that fruit flies sleep better at night and are more active during the day if insulin/IGF signalling is inhibited. "Daytime activity and night-time sleep are thereby controlled by two different components: during the day, the neurotransmitter octopamine and the adipokinetic hormone AKH increase activity in flies. At night, on the other hand, the neurotransmitter dopamine and the kinase TOR reduce the sleep periods," explains Luke Tain from the Max Planck Institute for Biology of Ageing. Rapamycin is a substance that inhibits TOR activity. "We administered rapamycin to older flies and observed that they once again slept for longer periods. As a result, we were able to reverse the deterioration in sleep quality that comes with ageing," says Tain. The cells of such different organisms as roundworms, flies and humans use the insulin/IGF signalling pathway. Its components and function are similar across species. The researchers now want to study whether the signal molecules in higher animals, such as mice, have the same effect. In this way, they hope to discover treatments that will improve sleep quality in old age.

Journal reference: Athanasios Metaxakis, Luke S. Tain, Sebastian Grönke, Oliver Hendrich, Yvonne Hinze, Ulrike Birras & Linda Partridge, Lowered Insulin Signalling Ameliorates Age-Related Sleep Fragmentation in Drosophila, PLoS Biology, 2 April 2014. www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1001824
FAQs for Retina

1. What is uveitis? The uvea is the middle layer of the wall of the eye. It has three parts: the iris, the ciliary body and the choroid. Inflammation (or swelling) of any part of the uvea is called "uveitis." Uveitis can be classified by the area involved. If the primary area involved is the iris, the condition is called "iritis"; the ciliary body, "cyclitis"; and the choroid, "choroiditis." Uveal inflammation may also involve adjacent tissues. For example, choroiditis may spread to the retina and thus become a "chorioretinitis." To help diagnose what specific type of uveitis you may have, it is important for the ophthalmologist to locate the source of the inflammation. Once the source is located, the physician can best treat the condition and predict the long-term visual outcome. The symptoms of uveitis depend upon the area that is inflamed and the duration of inflammation. Acute iritis may cause a red eye with pain and sensitivity to light. Chronic and posterior inflammation may be painless but may cause symptoms such as floaters or decreased vision. These symptoms should alert you to seek expert medical attention promptly.

2. How is uveitis treated? Untreated uveal inflammation can lead to blindness. Cataract, glaucoma, retinal scarring, retinal detachment, optic nerve damage and atrophy or shrinkage of the eye are some of the potential complications of persistent uveitis. Many patients with uveitis have good vision as their disease is managed by medicines and eye drops. A careful medical history, including family, social and sexual history, is important in the uveitis patient. Evaluation of uveitis is directed toward the diagnosis and identification of possible underlying causes of the disease. Bacteria, fungi, viruses, protozoa or other agents, along with abnormalities of the immune system, can cause uveitis.
Testing may involve blood tests, X-rays, special ocular studies or evaluation by other skilled medical consultants beyond ophthalmology. A full medical evaluation may reveal an inflammatory disease that has involved other organs besides the eye. Examples of this include sarcoidosis, rheumatoid arthritis and related conditions. For these types of conditions, treatment for the underlying cause of the uveitis helps all parts of the body, including the eye. In most cases, no obvious underlying cause is found for the uveitis. Treatment then is directed to the eye inflammation alone. Treatment may include drops or injections of cortisone medication around the eye. Sometimes it may be necessary to use oral drugs that suppress inflammation, such as prednisone or cytotoxic (chemotherapeutic) agents. Treatment for uveitis can be prolonged. Therefore, close follow-up with an ophthalmologist is important to keep the eye functional and to detect occasional side effects from the treatments.

3. How common is diabetic retinopathy? Because there are so many diabetics in the United States, diabetic retinopathy is the leading cause of new cases of blindness among people aged 20 to 74 years. Approximately 5 to 10 percent of the general U.S. population has diabetes mellitus. Persons with diabetes need to regularly check their blood sugar levels to be sure they are maintaining blood sugar levels that are as near normal as possible. They should also regularly see their primary healthcare provider, as well as keep regular checkups with their ophthalmologist even if they are not having vision problems.

4. What is diabetic retinopathy? Diabetic retinopathy is a complication of diabetes that affects the eyes by causing deterioration of the blood vessels in the retina. These weakened vessels may leak fluid or blood, develop brushlike branches, or become enlarged.

5. Is it safe for a woman with diabetic retinopathy to become pregnant?
Most diabetic women can have a baby without an increase in retinopathy. In some patients, however, the retinopathy might worsen enough to require laser photocoagulation. In a few cases, vision might remain decreased. It is recommended that all patients be frequently monitored during pregnancy. Generally, this means a baseline examination and visits at least every three months.

6. Can I delay retinopathy by keeping my blood sugar under control? A recent national study showed that strict control of blood sugar can markedly delay the onset of diabetic retinopathy and can slow the progression of early cases. All diabetics should strive for good control of their blood sugar because some patients, even those with more advanced diabetic retinopathy, might delay the progression of the disease if their blood sugar is maintained at a reasonable level. Others, however, will see a progression of the disease even if their blood sugar is normal or near normal.

7. Does high blood pressure affect the eyes? Some studies have shown that patients with high blood pressure are more likely to have retinopathy. However, since high blood pressure alone can damage the eyes, heart, kidneys and brain, patients should keep their blood pressure under control and have it monitored regularly.

8. Why would a person with no vision problems require laser treatment? Patients with proliferative retinopathy might have normal vision but are still at high risk for imminent loss of vision due to hemorrhage or retinal detachment. Laser photocoagulation in these patients has been proven to be effective by the Diabetic Retinopathy Study.

9. Is it true that a person can lose considerable vision shortly after laser treatment? Each case is different. In patients with advanced diabetic retinopathy, laser photocoagulation treatment is not as effective as it is in patients with early retinopathy. In many patients, the progression of retinopathy is delayed.
But in others, the disease progresses despite the laser treatment or, by coincidence, at the same time as the treatment. 10. Should a person with diabetic retinopathy exercise? Studies have shown that most patients with proliferative retinopathy have hemorrhages at night while they sleep. There is no convincing evidence that exercise increases the number of hemorrhages. Moreover, exercise is important not only for general well-being, but also for controlling blood sugar levels. Each patient should continue routine exercise unless he or she notices hemorrhages frequently during exercise. 11. If my doctor tells me that I have diabetic retinopathy, is there anything else that I should be checked for? Yes. Patients with blurred vision from diabetic eye disease are very likely to have kidney disease and/or high blood pressure. They should be checked regularly by their primary healthcare practitioner. 12. Why can’t I be given a stronger prescription for glasses to compensate for my vision loss? If the retina is damaged, stronger glasses cannot return distance vision to normal. They do provide greater magnification, but they also force a patient to hold reading material closer to the face. Most patients who have a moderate degree of vision loss opt for a hand-held magnifier in addition to normal reading glasses, allowing for a more comfortable reading distance. Some patients might be helped greatly by low vision aids. These are special magnifying devices that enable patients to make the best use of their remaining eyesight by enlarging objects so that they can be seen with parts of the eye other than the macula. For certain patients, telescopic devices might improve distance vision. These aids are available through your own ophthalmologist or through Wills Eye’s Low Vision Service. 13. How can I prevent further vision loss from retinopathy? 
There is no evidence that limiting the use of your eyes, avoiding television or bright light, taking vitamins or using sunglasses or any other devices can prevent diabetic retinopathy or its progression. 14. I’ve heard that taking one aspirin a day will reduce my chance of a heart attack, but I’m afraid to take aspirin for fear of causing hemorrhages in my eye. What should I do? Currently, there is no evidence that diabetics who take aspirin are at greater risk of frequent hemorrhages of the eye. You should discuss the use of aspirin with your primary healthcare provider. 15. I have recently been diagnosed with diabetic retinopathy, and my blood sugar level usually measures around 300 to 350. Should I use the insulin pump or use multiple injections of insulin to rapidly bring my blood sugar under control? You should try to bring your blood sugar level down, but there is some evidence that rapidly bringing it under control might actually accelerate the progression of retinopathy. It is preferable to bring the level down gradually and under the supervision of your medical doctor and retina specialists. 16. What hope does the future hold for patients with diabetic retinopathy? Research into the basic mechanisms of retinopathy is ongoing. Doctors and scientists continue to study how the retina and choroid work and what changes occur during the aging process. Research is also under way on means to control new blood vessel growth and blood vessel leakage. Recent studies, still awaiting confirmation, show that certain antihypertensive medications (high blood pressure drugs) may slow the development of retinopathy. 17. Is there any financial assistance available for people who have lost vision? There is financial aid for people whose best-corrected vision with glasses is 20/200 or worse, or whose visual field is restricted to 10 degrees or less. 
They might be eligible for an additional income tax deduction as well as other financial and rehabilitative benefits to help them cope with vision loss. People with vision slightly better than 20/200 might be eligible for rehabilitative services. 18. What is retinal vein occlusion? Arteries carry blood from the heart to various body parts, and veins return it. The retina has one major artery and one major vein, which is called the central retinal vein. Sometimes, branches of this vein can be blocked. 19. What causes retinal vein occlusion? In most cases, an underlying cause is not found, and we never know why it happens. However, retinal vein occlusion is more common in patients with high blood pressure and arteriosclerosis. 20. Why does retinal vein occlusion cause decreased vision? What is the likely visual outcome? When the vein is blocked, the circulation is greatly slowed. When this happens, the retina (the part of the eye which sees, like the film in a camera) does not work as well as it should. In addition, tiny blood vessels called capillaries leak excessive fluid into the retina, causing it to swell. This is called macular edema. The ultimate visual outcome for patients with retinal vein occlusion cannot be predicted. About one-quarter of these patients have spontaneous improvement in vision, but in others, the vision remains decreased or even worsens. The only known way to improve vision for patients with retinal vein occlusion is to treat the swollen retina with laser. With laser treatment, most patients have a small improvement in vision. A small minority have improvement to near normal. In many, the vision is not helped at all. However, physicians normally wait a few months to see if there is a spontaneous improvement before considering laser treatment. 21. Are there any restrictions or precautions for patients with retinal vein occlusion? There is no reason to limit one’s activities (such as reading, watching TV, etc.).
However, when you have blurred vision in one eye for any reason, your depth perception is impaired. If this is true for you, you should be very careful doing anything that requires you to judge distances, such as using machinery, climbing ladders, pouring hot or hazardous liquids, or driving. 22. What is central retinal vein occlusion? Arteries carry blood from the heart to various body parts, and veins return it. The retina has one major artery and one major vein, which is called the central retinal vein. Sometimes the vein becomes blocked. This is called central retinal vein occlusion. 23. What causes central retinal vein occlusion? In most cases, there is no underlying cause and doctors do not know why it happens. However, it is more common in patients with glaucoma, high blood pressure, arteriosclerosis and diabetes. 24. What causes decreased vision? What is the likely visual outcome? What can be done to improve vision? When the retinal vein is blocked, the circulation is greatly slowed. When this happens, the part of the eye which sees (like the film in a camera) does not work as well as it should. The ultimate visual outcome cannot be predicted. A few patients, with time, have spontaneous improvement in vision. Some patients get worse. Currently, there is no known way to improve vision. Laser, eye drops and glasses will not help. 25. If there is no treatment for central retinal vein occlusion, why are followup visits necessary? Although nothing can be done to help their vision, patients who have had a central retinal vein occlusion need to be seen at regular intervals because in about one-third of all cases, a severe form of glaucoma, called neovascular glaucoma, develops. If it looks like this is about to occur, a laser treatment is necessary. Though the laser does not improve vision, it does prevent glaucoma from developing. Of course, if there is any marked decrease in vision or if the eye becomes painful, it is important to see your doctor immediately. 
26. Are there any restrictions or precautions for patients with central retinal vein occlusion? 27. What happens during macular translocation surgery? In order to move the retina, an operation is performed. This can be performed under local or general anesthesia. There are three basic steps to this operation. First, the retina is intentionally detached (like wallpaper lifted off a wall) by injecting fluid under the retina. Second, several stitches are placed towards the back of the eye to mildly indent the wall of the eye. (These stitches are not visible afterwards and are permanent.) Third, an air bubble is placed into the main cavity of the eye. After surgery, patients are instructed to sit upright for 24 to 48 hours. The air bubble, in combination with the indentation of the wall of the eye, pushes the retina back into position against the back wall of the eye. Although some variations with this technique may be used depending on the specific circumstances, these basic steps are performed to achieve macular translocation. The air bubble injected into the eye will be gradually absorbed by the body within a few days to weeks. In general, laser treatment of the CNV is performed as quickly as possible after surgery (usually within one week). 28. What are the risks and benefits of macular translocation surgery? It is impossible to predict exactly how far the retina will shift as a result of this surgery. If your doctor suggests macular translocation surgery, he or she believes there is a reasonable chance that the retina will move far enough to safely allow treatment of the CNV. Unfortunately, in a minority of patients, the retina does not move far enough. There are risks associated with this surgery. These include infection, hemorrhage, cataract, glaucoma and retinal detachment. Although many of these problems are correctable, there is a small risk that irreversible loss of vision could develop as a result of this surgery.
Macular translocation surgery does not “cure” macular degeneration. In some cases, successful closure of the CNV may be only temporary, and new blood vessels will grow. If this occurs, then additional laser surgery may be necessary. In some cases, additional laser surgery may not be possible. The long-term benefit of macular translocation surgery is not known. Preliminary results are encouraging, but not every patient benefits from the procedure. There is generally only mild-to-moderate discomfort after this surgery, lasting one to two weeks. Some restrictions in activity, beyond the special positioning requirements, may be necessary. As a result of moving the retina, some patients may experience double vision, depending on the quality of vision in the other eye. This is often temporary; if not, it may in some cases be possible to correct it with eyeglasses. The recovery of vision after surgery is quite variable, and some patients require several weeks or even months to fully assess their visual recovery. Macular translocation surgery is a promising technique, but it benefits some (but not all) patients with CNV. 29. Are any financial benefits or other services available for people who have lost vision? People who are legally blind may be eligible for a larger federal tax deduction and should contact their local IRS office. People who are legally blind and 64 years of age or younger and unable to work may be eligible for Supplemental Security Income (SSI) or Social Security Disability. Persons who are experiencing problems related to low vision or blindness may be eligible to receive special transportation, reading and rehabilitation services. There are also support groups available. To speak with a social worker about these services, contact the Wills Eye Hospital Social Services Department at (215) 928-3007.
13. Detached and Torn Retina A retinal detachment is a very serious problem that almost always causes blindness unless treated. The appearance of flashing lights or floating objects in the affected eye may indicate a retinal tear and/or detachment. A curtain over part of the vision is a sign of a detached retina. As one gets older, the vitreous, the clear gel-like substance that fills the inside of the eye, tends to shrink slightly and take on a more watery consistency. As this occurs, the vitreous separates from the retina and may tear it. Retinal tears increase the chance of developing a retinal detachment. Fluid vitreous, passing through the tear, separates the retina from the back of the eye like wallpaper peeling off a wall. Laser surgery or cryotherapy (freezing) is often used to seal retinal tears to attempt to prevent detachment. If the retina is detached, it must be repaired. There are four ways to do this: Pneumatic retinopexy involves injecting a special gas bubble into the eye that pushes on the retina to seal the tear. The scleral buckle procedure requires the fluid to be drained from under the retina before a flexible piece of silicone is sewn on the outer eye wall to give support to the tear while it heals. Vitrectomy surgery removes the vitreous gel from the eye, replacing it with a gas bubble, which is slowly replaced by the body's fluids. A combination of some of the above.
Title: Genotoxicity and functionality assessment of a bone marrow stromal cell line following chemotherapy exposure in an in vitro model of multiple myeloma Author: Andrews, S. W. Multiple myeloma (MM) is a haematological malignancy characterized by terminally differentiated plasma cells and their accumulation in the bone marrow (BM). Despite significant advances in therapeutic strategies, it currently remains incurable. The interactions between the BM microenvironment and malignant plasma cells have been pivotal to understanding this disease. Previous reports have shown that patients with a haematological malignancy sustain “damage” to their BM, but how much of this is due to the disease and/or the treatment is currently unknown. Furthermore, MM plasma cells have been documented to harness the BM microenvironment to their advantage, improving their growth and survival. However, little is known about the functionality of BM mesenchymal stem cells (MSC), which form an essential compartment of the BM microenvironment, in patients with MM. It was hypothesised that MSC altruistically protect MM cells from therapy and consequently become phenotypically and genetically compromised. To facilitate the study of the effects of chemotherapeutic agents and MM cells on MSC, a non-contact co-culture model was developed that allowed the investigation of functional and genetic damage. In line with previous studies, U266 cells were found to be protected from drug-induced cell death when in co-culture with the stromal cell line HS5. However, the promoting effects of the BM appear to come at the detriment of its own survival: HS5 cells were found to have lower viability, altered morphology and disrupted differentiation when in a non-contact co-culture with U266 cells. Results from this study have revealed that interactions of MSC with MM cells lead to an altruistic protection of MM cells by the BM.
This work demonstrates that U266 cells have improved viability following exposure to chemotherapy when in a non-contact co-culture with MSC/HS5. Furthermore, genotoxic assays also revealed that HS5/MSC interactions with U266 cells protect U266 from the genotoxic effects of melphalan in co-culture, whilst for the first time HS5 morphology was shown to be severely altered following exposure to chemotherapy and when in co-culture with U266 cells. This work has demonstrated, for the first time, the cytotoxic effects of the novel agents bortezomib and carfilzomib on HS5 cells when in co-culture with U266 cells. Results from this study also demonstrate that melphalan severely affects the ability of HS5 cells to differentiate along an osteogenic lineage, with a further deficiency in differentiation when in co-culture with U266. Adipogenic differentiation of HS5 was unable to take place when in co-culture with MM cells and was again further impaired by chemotherapy. This is the first study to reveal that primary MSC secrete significantly higher concentrations of IL-6 than the stromal cell line HS5. A further increase in expression of IL-6 was also shown when in co-culture with U266 cells. Increased multi-nucleation was also identified in both HS5 and U266 cells when exposed to thalidomide, lenalidomide or bortezomib; these abnormalities provide possible explanations for the therapy-related malignancies and neurotoxicity seen in some patients. Genotoxicity to the MSC/HS5 compartment of the co-culture, measured by the micronucleus assay, was also found to be reduced, suggesting that the BM is protected from the DNA-damaging effects of some agents when in co-culture with MM cells. Combined work on the functionality and genotoxicity of the interactions between the BM and MM reveals a tropism of MSC and HS5 towards the MM cell line U266.
Because this research was conducted in a non-contact co-culture, it indicates that cell-cell contact is not essential for the protection of both the BM and MM cells against chemotherapy. This research provides further understanding of the impact of MSC and MM interactions on the functionality of the BM and their protection from genotoxic damage. Elucidating the consequences of cytotoxic and genotoxic damage to MSC via chemotherapy treatment and/or through haematological disease may allow for the development of effective therapies and improve the quality of life for patients with MM.
Video can effectively reduce anxiety in children undergoing inhaled induction Published on November 1, 2012 at 12:55 AM Research by Dalhousie University student Katherine Mifflin has found that having children watch a video immediately prior to surgery can reduce their anxiety during anesthesia induction, the most stressful time for children throughout the perioperative process. Up to 50% of children display significant distress at the point of inhaled induction. Separation from parents, fear, or exposure to a foreign environment may cause children to display high levels of distress during this time. Consequently, children who experience high levels of distress at anesthesia induction may have more pain during recovery, longer hospital stays, and more negative behavior changes after surgery. The research study was conducted at the IWK Health Centre in Halifax, Nova Scotia, under the supervision of Dalhousie professors Dr. Jill Chorney and Dr. Thomas Hackman. "Our study is one of the first to examine the effectiveness of video to reduce anxiety in children undergoing inhaled induction," says Dr. Chorney. "On the basis of the previous research with cartoon and video use in minor medical procedures, it was expected that playing a video clip during anesthesia induction would be effective at reducing anxiety." The goal of this research study was to determine whether video distraction can be used as a clinical tool by anesthesiologists to help reduce anxiety in their pediatric patients.
The study found that playing video clips during the inhaled induction of children undergoing ambulatory surgery is an effective method of reducing anxiety, and therefore pediatric anesthesiologists may consider using the strategy to achieve a smooth transition to the anesthetized state. "The 97 study participants were assigned to either the experimental video distraction group or control group," notes Dr. Chorney. "Participants in the video distraction group were presented with a list of age-appropriate videos to choose from, asked what they enjoyed viewing at home, and a similar clip was found on YouTube™ for the child to view during induction. Enabling the participant to choose a video allowed for parental approval of the video and gave the child the opportunity to become familiar with the content, thus becoming engaged with the distractor and possibly avoiding anticipatory anxiety." Source: Anesthesia and Analgesia.
Regulation of distinct biological activities of the NF-κB transcription factor complex by acetylation Lin-Feng Chen, Warner C. Greene … Although the proximal cytoplasmic signaling events that control the activation of the NF-κB transcription factor are understood in considerable detail, the subsequent intranuclear events that regulate the strength and duration of the NF-κB-mediated transcriptional response remain poorly defined. Recent studies have revealed that NF-κB is subject to reversible acetylation and that this posttranslational modification functions as an intranuclear molecular switch to control NF-κB action. In this review, we summarize this new and fascinating mechanism through which the pleiotropic effects of NF-κB are regulated within cells. NF-κB is a heterodimer composed of p50 and RelA subunits. Both subunits are acetylated at multiple lysine residues, with the p300/CBP acetyltransferases playing a major role in this process in vivo. Further, the acetylation of different lysines regulates different functions of NF-κB, including transcriptional activation, DNA binding affinity, IκBα assembly, and subcellular localization. Acetylated forms of RelA are subject to deacetylation by histone deacetylase 3 (HDAC3). This selective action of HDAC3 promotes IκBα binding and rapid CRM1-dependent nuclear export of the deacetylated NF-κB complex, which terminates the NF-κB response and replenishes the cytoplasmic pool of latent NF-κB.
Multiple genes manage how people taste sweeteners Genetics may play a role in how people's taste receptors send signals, leading to a wide spectrum of taste preferences, according to Penn State food scientists. These varied, genetically influenced responses may mean that food and drink companies will need a range of artificial sweeteners to accommodate different consumer tastes. "Genetic differences lead to differences in how people respond to tastes of foods," said John Hayes, assistant professor, food science and director of the sensory evaluation center. Based on the participants' genetic profile, researchers were able to explain the reactions of subjects in a taste test when they sampled Acesulfame-K -- Ace K -- in the laboratory. Ace K is a human-made non-nutritive sweetener commonly found in carbonated soft drinks and other products. Non-nutritive sweeteners are sweeteners with minimal or no calories. While some people find Ace K sweet, others find it both bitter and sweet. The researchers, who reported their findings in a recent issue of the journal Chemical Senses, said that variants of two bitter taste receptor genes -- TAS2R9 and TAS2R31 -- were able to explain some of the differences in Ace K's bitterness.
These two taste receptor genes work independently, but they can combine to form a range of responses, said Alissa Allen, a doctoral student in food science, who worked with Hayes. Humans have 25 bitter-taste receptors and one sweet receptor that act like locks on gates. When molecules fit certain receptors like keys, a signal is sent to the brain, which interprets these signals as tastes -- some pleasant and some not so pleasant, Allen said. In another study recently published in the journal Chemosensory Perception, Allen had 122 participants taste two stevia extracts, RebA -- Rebaudioside A -- and RebD -- Rebaudioside D. Stevia is a South American plant that has served as a sweetener for centuries, according to the researchers. While the plant is becoming more popular as a natural non-nutritive sweetener, consumers have reported off-tastes from stevia-based sweeteners, including bitterness. The researchers found that RebA and RebD bitterness varies greatly across subjects, but this was not related to whether or not participants found Ace K bitter. Likewise, variation in the TAS2R9 and TAS2R31 genes did not predict RebA and RebD bitterness. They also found that of the stevia extracts, the participants considered RebD to be much less bitter than RebA. While stevia is growing in acceptance as a natural replacement for other sweeteners, manufacturers do not use the whole leaf. Instead, the leaf is ground up and certain parts of it are extracted and blended to make the sweetener. "Our work suggests ingredient suppliers may want to consider commercializing RebD, as it provides similar sweetness to RebA with much less bitterness," said Hayes. Hayes also said that researchers are just beginning to understand the molecular basis of taste perception. "We've known for over 80 years that some people differ in their ability to taste bitterness, but we have only begun to tease apart the molecular basis of these differences in the last decade," Hayes said.
The National Institutes of Health supported this work. The above story is based on materials provided by Penn State. The original article was written by Matthew Swayne. Alissa L. Allen, John E. McGeary, John E. Hayes. Rebaudioside A and Rebaudioside D Bitterness do not Covary with Acesulfame-K Bitterness or Polymorphisms in TAS2R9 and TAS2R31. Chemosensory Perception, 2013. DOI: 10.1007/s12078-013-9149-9 A. L. Allen, J. E. McGeary, V. S. Knopik, J. E. Hayes. Bitterness of the Non-nutritive Sweetener Acesulfame Potassium Varies With Polymorphisms in TAS2R9 and TAS2R31. Chemical Senses, 2013; 38 (5): 379. DOI: 10.1093/chemse/bjt017
Dynamic Prediction for Multiple Repeated Measures and Event Time Data The Department of Epidemiology and Biostatistics welcomes Sheng Luo, PhD, Associate Professor in Department of Biostatistics at The University of Texas Health Science Center School of Public Health who will present: Dynamic Prediction for Multiple Repeated Measures and Event Time Data: an Application to Neurodegenerative Disorders In many clinical studies of neurodegenerative disorders such as Parkinson's disease (PD) and Amyotrophic lateral sclerosis (ALS), multiple longitudinal outcomes are collected to fully explore the multidimensional impairment caused by these diseases. Moreover, some survival events (e.g., initiation of levodopa therapy in PD and death in ALS) are strongly correlated to the disease status. The personalized dynamic predictions of risks of target events and future health outcome trajectories at every time point, given the subject-specific health outcome profiles, are highly relevant for patient targeting, management, prognosis, and treatment selection. Dr. Luo will propose a joint model that consists of a latent trait linear mixed model (LTLMM) for the multiple longitudinal outcomes, and a survival model for event time. The two submodels are linked together by underlying latent variables. He will develop a fully Bayesian methodology for parameter estimation and dynamic prediction. His proposed model is evaluated by simulation studies and is applied to the clinical studies of PD and ALS. Nancy Colon-Anderson nanderson@drexel.edu Dornsife School of Public Health, Nesbitt Hall, Room 719
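The two-submodel structure described in the abstract can be sketched in equations. The notation below (outcomes y_ijk, latent disease severity theta_i(t), hazard h_i(t)) is illustrative only and is not necessarily Dr. Luo's exact formulation:

```latex
% Hypothetical sketch of a joint model linking K longitudinal outcomes
% to an event time through a shared latent trait.
% i indexes subjects, j visit times, k outcomes.
\begin{align}
  y_{ijk} &= \beta_{0k} + \beta_{1k}\,\theta_i(t_{ij}) + \varepsilon_{ijk},
      \quad \varepsilon_{ijk} \sim N(0,\sigma_k^2)
      && \text{(latent trait linear mixed model)} \\
  \theta_i(t) &= (\mu_0 + u_{0i}) + (\mu_1 + u_{1i})\,t,
      \quad (u_{0i}, u_{1i})^\top \sim N(0,\Sigma)
      && \text{(subject-specific latent trajectory)} \\
  h_i(t) &= h_0(t)\,\exp\!\bigl\{\gamma^\top x_i + \alpha\,\theta_i(t)\bigr\}
      && \text{(survival submodel)}
\end{align}
```

In a sketch of this kind, the association parameter alpha links the submodels: given a subject's observed outcome history up to time s, the posterior distribution of the random effects (u_0i, u_1i) yields dynamic predictions of both future outcome trajectories and the conditional risk of the event beyond s, updated as new measurements arrive.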
Free lecture on health impact of viruses on human gut bacteria set for Dec. 1 at MSU November 18, 2015 -- MSU News Service Doctoral student Pilar Manrique will lecture on how viruses of the bacteria in the human gut can affect health. Manrique was a 2014 recipient of a Kopriva Graduate Fellowship. (MSU photo by Kelly Gorham) A free public lecture about the health impact of viruses that infect the bacteria in the human gut will be given Tuesday, Dec. 1, at Montana State University. Pilar Manrique, doctoral candidate in the Department of Microbiology and Immunology and a 2014 recipient of a Kopriva Graduate Fellowship, will present “The Human Gut Virome in Health and Disease” at 4:10 p.m. in the Byker Auditorium in the Chemistry and Biochemistry building. A reception will follow. The human gut is colonized by millions of bacteria (the gut microbiome) essential for human health. Changes in the gut microbiome composition and structure negatively impact human health and correlate with important diseases such as diabetes and cancer. Manrique studies the role of bacteriophages -- viruses that infect bacteria -- in shaping the structure and function of the human gut microbiome. For this purpose, she has isolated viruses from human samples, directly sequenced the isolated viral genomes, and applied advanced bioinformatics analysis to understand the viral community composition and temporal dynamics in the human gut. MSU researchers have identified different bacteriophage communities present in healthy versus diseased gut microbial communities that could potentially be used as disease markers. Manrique’s lecture is presented by the Kopriva Science Seminar Series, which is funded through an endowment created by Phil Kopriva, a 1957 microbiology graduate from MSU.
Kopriva, who died in 2002, also created an endowment to fund the Kopriva Graduate Fellowship Program, which provides support and opportunities for graduate students in the College of Letters and Science, particularly in the biomedical sciences. The series features seminars by MSU graduate students, faculty members and guest speakers. For more information about this and other Kopriva lectures, visit http://www.montana.edu/lettersandscience/kopriva.html. Jody Sanford, (406) 994-7791 or jody.sanford@montana.edu
Sickle cell disease and genetic engineering: Mini-Lesson from pgEd

Sickle cell disease (SCD) is a common genetic condition affecting people around the globe, primarily those with West or Central African ancestry (including many African-Americans and Latinos), as well as people of Middle Eastern, South Asian and Mediterranean descent. Many students of biology learn about SCD, and many students know of it from personal and family experience. This mini-lesson asks students to consider the plans for genetic engineering as a treatment for SCD, and to think about the dimensions of race, trust and informed consent as they relate to clinical trials. Students read a brief overview of SCD, and then read a longer article about the latest vision for a treatment, described in MIT Technology Review's article, "Sickle-Cell Patients See Hope in CRISPR," by Emily Mullin, August 23, 2017. A short video in the article explains how CRISPR works, and should be shown in class. Students can respond to the questions in writing or in a classroom discussion. Finally, students are invited to take a short online quiz developed by pgEd and "pin" their awareness of SCD on our world map. Want to learn more? Check out pgEd's lesson "Protecting athletes with genetic conditions: Sickle cell trait." Also, for more about genetic engineering, see "Genome editing and CRISPR." Download here: Mini-lesson- Sickle cell disease and genetic engineering
Teaching approaches: Group talk

1 Why Group Talk Matters

One of the prime goals of education is to enable children to become more adept at using language, to express their thoughts and to engage with others in joint intellectual activity (their communication skills). A second important goal is to advance children's individual capacity for productive, rational and reflective thinking (their thinking skills). Dialogic talk can help achieve both these goals. The work of the Russian psychologist Vygotsky is relevant for understanding why this is so [4]. He suggested that using language to communicate helps us learn ways to think. As he put it, what children gain from their 'intermental' experience (communication between minds through social interaction) shapes their 'intramental' activity (the ways they think as individuals). What is more, he suggested that some of the most important influences on the development of thinking will come from the interaction between a learner and more knowledgeable, supportive members of their community. Although developed over half a century ago, Vygotsky's intriguing ideas have only really been put to the test in recent years. Now research has confirmed the validity of some of his claims about the link between language use and the learning of ways of thinking. Research has shown that teachers' modelling of ways of asking questions, offering explanations and providing reasons can have a significant and positive effect on how children use language in problem-solving tasks [5]. Research by myself and colleagues has shown that a programme of carefully designed teacher-led and group-based activities enables children not only to become better at talking and working together but also at solving problems alone [6]. The group-based activities of this programme are very important; but equally important is the kind of dialogue a teacher uses in whole-class plenaries and group monitoring.
It is no coincidence that the teacher in the example above has been involved in this programme. And this brings us back to 'dialogic talk'. (Adapted from The educational value of dialogic talk in whole-class dialogue, section DialogicTalk).

2 The Importance of Talk

Recent research (see the collection edited by Littleton and Howe (2010)) has shown the importance of the link between spoken language, learning and cognitive development (e.g. Mercer, Wegerif & Dawes, 1999; Mercer, Dawes, Wegerif & Sams, 2004 – see below). Through using language and hearing how others use it, children become able to describe the world, make sense of life's experiences and get things done. They learn to use language as a tool for thinking, collectively and alone. However, children will not learn how to make the best use of language as a tool for communicating and thinking without guidance from their teachers. School may provide the only opportunity many children have for acquiring some extremely important speaking, listening and thinking skills. (Adapted from The Importance of Speaking and Listening, section ImportanceOfTalk).

3 Exploratory Talk and the Thinking Together approach

One approach to thinking about group talk has come out of the Thinking Together project based at the University of Cambridge. In this approach, 'group talk' is characterised as one of three 'types' – cumulative, disputational, or exploratory (Mercer & Littleton, 2007) – as Table 1 indicates.

Table 1 - Typology of Talk

Disputational talk
Characteristics: "Characterised by disagreement and individualised decision making. There are few attempts to pool resources, to offer constructive criticism or make suggestions."
Analysis: "short exchanges, consisting of assertions and challenges or counter-assertions ('Yes it is.' 'No it's not!')."

Cumulative talk
Characteristics: "Speakers build positively but uncritically on what the others have said. Partners use talk to construct 'common knowledge' by accumulation."
Analysis: "Cumulative discourse is characterized by repetitions, confirmations and elaborations."

Exploratory talk
Characteristics: "Partners engage critically but constructively with each other's ideas. Statements and suggestions are offered for joint consideration. These may be challenged and counter-challenged, but challenges are justified and alternative hypotheses are offered. Partners all actively participate, and opinions are sought and considered before decisions are jointly made. Compared with the other two types, in exploratory talk knowledge is made more publicly accountable and reasoning is more visible in the talk."
Analysis: Explanatory terms and phrases are more common – for example, 'I think', 'because/'cause', 'if', 'for example', 'also'.

Adapted from Mercer & Littleton, 2007, pp. 58–59.

The Thinking Together site at the University of Cambridge gives some typical sequences of each talk type[1] (Mercer, 2008) in small group work. It is important to note that often dialogue will contain elements of each of these, and indeed that there are times when one 'type' of talk might be more appropriate than another – however, generally speaking, higher levels of exploratory talk are associated with the educational gains discussed in the introduction to this chapter. A typical pattern of research in these studies has involved an intervention including the development of classroom 'ground rules', followed by lessons which are specifically designed to encourage high-quality, dialogic talk which engages pupils in explaining. The typology provides teachers with a simple way to understand the nature of the talk in their own classrooms, and – through encouraging explanation, elaboration, and mutual listening – some clear ways to improve the quality of the talk, as shall now be outlined further.

3.1 Ground Rules

Ground rules are important to consider in order to establish effective group talk in classroom contexts.
Again, the resources on the Thinking Together website[2] are useful for this purpose. Such ground rules should be designed to encourage mutual respect and understanding, while also fostering high-quality critique and reasoning through dialogue.

4 What Does Group Talk Look Like?

What is meant by 'group talk' and 'argument'? Group talk includes any activity where pupils' ideas are explored verbally between pupils, even if the final product is written or practical. It includes verbal argument (in this context the word argument is used to describe discussion between pupils who hold differing views) as much as more formal debates (about contentious topics such as genetic engineering). Group talk can be both collaborative and competitive.

Stop and think

Before reading ahead, jot down your first thoughts to complete the following statements:
- An activity a science/maths teacher might carry out that could be called a 'group talk' activity is …
- If the activity was successful, what I would expect to see the pupils doing is … and what I would expect to hear in their conversations is … and what I would expect to see the teacher doing is …
- The benefits to the learner of science/maths would be …
- A teacher might not use group talk activities, giving reasons such as …

What does successful group talk and argument look like? When you take part in productive talk as an adult, you make suggestions and support, modify or clarify others' views. You challenge ideas, ask questions to seek clarification, summarise and evaluate the pros and cons. You care about your own opinions, but allow others to shape and counter them. In lessons where productive group talk is taking place you will see pupils discussing ideas with each other independently of, but guided by, the teacher. Pupils will often be turning to face each other, making and maintaining eye contact with others and using animated expressions with their eyes, face and through gesture.
They will want to convince others, but will be looking for opportunities to consider others' views. Words and phrases related to reasoning (such as because, why?, what if ...?) will be used. At times, pupils will be thinking and saying little as they listen to others. The teacher will be aware of the progress of the conversations and intervening without interrupting the flow of the talk. The pupils will be in control of the time taken on a discussion and will be clear on what they are expected to produce as a result of the activity. When the group talk is over, pupils may have changed their minds at least once. They will be able to explain their current viewpoint and any previous opinions they held, as well as some of the views held by others.

Why do it? What are the benefits to the learner?

Higher-level thinking: Pupils are challenged to defend, review and modify their ideas with their peers. It encourages reflection and metacognition (thinking about one's own thinking). Pupils often communicate ideas better with other pupils than with teachers.

Assessment for learning: Group talk effectively reveals the progress of the pupil to the teacher, encouraging the pupil to self- and peer-assess while allowing the teacher to plan more effectively. As such, group talk complements methods embraced as Assessment for Learning.

Illustrating science in action: Working scientists use group talk – in class it models how they work, supporting the teaching of the 'ideas and evidence' aspects of scientific enquiry.

Developing the whole child: The ability to resolve disagreements is a life-skill. Pupils become more reflective as they try to arrive at a consensus by expressing different points of view, or work collaboratively to explore ideas, plan and make decisions. Further, it supports the development of literacy.
Pupil motivation and emotional involvement: When argument is taking place, and pupils are actively prompted and provoked to defend a point of view – by the teacher and by others – it raises the emotional involvement in a topic, so that pupils are more engaged. In essence, they are being encouraged to 'care' about the science viewpoint they have, and to take a stand for or against it, even if they concede to others along the way. These features are more common in good English, RE and humanities lessons.

Variety and learning styles: Group talk can be used as an alternative to written or practical work (for example, experiments), or to just listening as the teacher explains and demonstrates. It encourages the use of different learning styles and thus can be inclusive of pupils excluded from more traditional (and often written) activities.

Why is group talk relatively uncommon in science and maths lessons? What are the issues expressed by teachers?

External factors: Many teachers may feel a pressure to 'deliver the curriculum'. There is no time in the lesson to do more than impart information. Also, the teacher may be concerned about having evidence of work having taken place (for example, usually something written down in books) – for others in the school, for parents or for Ofsted.

Internal factors: The teacher may be reluctant to take a risk with group talk because they are afraid that discipline will be a problem. They do not feel comfortable with the apparent loss of control and, as their pupils are not used to being given this level of freedom to express their ideas, the pupils may be reluctant or misbehave. If group talk has been tried in the past it may have been unsuccessful because of a lack of consideration of factors such as classroom layout and teacher behaviour.

When are pupils more likely to engage in group talk and argument?
- when seating arrangements and environment are planned in order to facilitate discussion;
- when the teacher's language and non-verbal communication are planned in advance in order to promote pupil confidence in the stimulus material for group talk;
- when the teacher withholds their opinion, or the answers, for longer than usual;
- when groupings are chosen by the teacher, and are regularly changed;
- when timings are specifically used and usually kept short;
- when group talk is used regularly and becomes part of everyday science lessons.

It is the teacher skills of running group talk that require the most effort to develop. Once developed, they can be used with little preparation on the part of the teacher, allowing group talk to be a regular feature of lessons. Teachers may also find it useful to consider the resources in the category, and to read the Group Talk in Science - Research Summary document. (Adapted from Group Talk - Benefits for Science Teaching, section Whole).

5 What Do Pupils Think of Group Talk?

Pupil attitudes to group talk and argument

Pupils moving from primary to secondary classrooms are quoted in a recent study by the DfES (Curriculum continuity, 2004): 'You were expected to work as a group' (primary); 'There is less group work; teachers often expect you to work individually' (secondary); 'There were group work rules such as taking turns, having a chair, a scribe and a timekeeper' (primary); 'We only have group work rules in English' (secondary). In their study of pupils' attitudes to their science education, Osborne and Collins (2000) reported that the pupils they interviewed 'appreciated teachers who were willing to engage in "discussions"' and who allowed pupils to contribute. Some pupils equate 'writing' in science with 'work', with practical or discussion work seen as more engaging and providing welcome variety.
Matthews' (2001) project involved pupils working in small groups of varying gender mix, in which they were asked to reflect on their own and others' involvement in group talk. He concluded that, when combined with feedback discussions, collaborative learning can lead to pupils getting on better and helping each other with their learning, and that this in turn leads to pupils liking science more and being more likely to continue with it in the future. The emphasis in Shakespeare (2003) is on providing a stimulus for argument and then provocation to continue to defend or alter one's views, in such a way that there is an emotional involvement in the science and thus greater motivation to resolve the dispute. This was supplemented by examples of phrases seen to work well in class that sustain and enhance the responses provided by pupils. In a later project, funded by the Wellcome Trust and the DfES and entitled Running arguments? – teacher skills for creative science classrooms, D. Shakespeare, S. Naylor and B. Keogh worked with Bedfordshire teachers from Key Stage 2 to post-16 on the skills needed to run arguments in lessons. Pupils' opinions were sought as teachers changed their practice and behaviour in class, and included reference to the positive attitudes pupils developed towards the regular changing of groups and the chance to work with others, including the making of new friendships. Only a small minority reported a dislike for group discussion. (Adapted from Group Talk in Science - Research Summary, section PupilAttitudes).

Littleton, K., & Howe, C. (2010). Educational dialogues: understanding and promoting productive interaction. Abingdon, Oxon: Routledge.
↑ http://thinkingtogether.educ.cam.ac.uk/resources/5_examples_of_talk_in_groups.pdf
↑ http://thinkingtogether.educ.cam.ac.uk/resources/Are_these_useful_rules_for_discussion.pdf

7 Relevant resources

Planning for Inclusion (CPD) – Planning for inclusion in your classroom. This resource discusses planning for inclusion, in particular as related to active learning, group talk and, more generally, interactive pedagogy.

Group Talk & Argument in Science Teaching – Activities and practical examples to use group talk in science lessons. This teacher education resource covers background information, practical activities, and practical examples for engaging dialogue effectively in the context of group work and whole-class work, in particular to ensure high-quality reasoning. The resource encourages teachers to think about situations and prompts for argumentation and how these might be used to support the science curriculum.

Organising Group Talk in Science – The group in which students are expected to work has a huge bearing on their willingness to speak openly. Can we better manage group talk? This resource contains activities and examples relating to group talk in science lessons in whole-class and group-work settings.

Getting Your Formulae in Shape – Solving a card sort for perimeter, volume and area formulae. This resource provides an opportunity for some revision of shape formulae – perimeter, area, and volume. It encourages pupils to engage in effective reasoning and group talk, and could be used as an effective assessment tool. The task could be differentiated, or extended for a whole class, by cutting the 'formulae' lines off the bottom of each hexagon and asking students to match these to the shapes, prior to matching the shapes to the formulae type.

Retrieved from "http://oer.educ.cam.ac.uk/w/index.php?title=Teaching_Approaches/Group_talk&oldid=29294"
How to stay hydrated

Don't wait for thirst to hit: keep fluids up throughout the day for optimal mind and body function. Written by David Cameron-Smith

Humble water, fresh from the tap, is a quaint throwback to schoolyard days and summer holidays spent at grandparents' houses. Jars recycled as glasses and water from the tap have given way to trendy, new-age drinks offering an ever-increasing range of osmolytes, nutrients and brain boosters. No matter how hip hydration has become, it's still the humble partnership of two hydrogen atoms and a single oxygen atom that makes water what it is and does.

What is thirst?

Thirst is a hardwired response triggered by decreasing blood volume. This signals the brain stem to activate a range of hormones and nerves that try to reduce any further losses. A dry mouth and an overwhelming need to quench the thirst are the most obvious effects. Thirst might also be accompanied by a headache and feelings of tiredness. Unfortunately, these biological responses come only after water has already been lost from the body. The signs of increasing thirst are not apparent until the body has lost at least 1-2% of total body weight, which can be well in excess of one litre of body water. By the time you start to feel thirsty, nerves and muscles, including the heart, are already compromised. This effect is heightened as you get older: for many older people, feeling thirsty follows even greater levels of dehydration. This can be dangerous – every summer, heat stress proves too much for some of our senior citizens.

There are no universal rules for how much and when to drink. The rate of fluid loss from the body varies greatly from one day to the next. The big influences are heat and exertion. Sweating is an effective way to cool the body, but it comes with the loss of body water.
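The 1-2% figure is easy to translate into litres. As a rough back-of-the-envelope sketch (my own illustration, not from the article, assuming 1 kg of lost body water is roughly 1 litre):

```python
# Illustrative only: litres of water lost before thirst becomes apparent,
# using the article's claim that thirst lags a 1-2% body-weight loss.
# The 1 kg ~ 1 L equivalence for body water is an assumption.

def fluid_deficit_before_thirst(body_weight_kg, percent_lost=1.5):
    """Approximate litres of body water lost before thirst kicks in."""
    return body_weight_kg * percent_lost / 100.0

deficit = fluid_deficit_before_thirst(70.0)  # a 70 kg adult, mid-range 1.5%
```

For a 70 kg adult, a 1.5% loss works out to roughly 1.05 litres, consistent with the "well in excess of one litre" the article describes.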
Even if the sweat is not dripping from your forehead, a modest workout will generate a measurable amount of sweat. The harder you go, the more you sweat, increasing the speed of dehydration. Remaining adequately hydrated is about preparation and adaptation. With summer here, make sure you leave the house well hydrated and prepared; bringing a bottle of water with you is a great idea.

For regular consumers of coffee or tea, the caffeine levels do not usually trigger greater fluid loss. However, these beverages are not great ways to hydrate. Mostly served hot, a steaming cuppa may even increase your core temperature and, when sipped slowly, does little to restore body fluid levels. Cold teas and coffees are often as sugar-laden as standard soft drinks.

On the opposite side of the equation are the 'rules' of drinking eight glasses or 'at least' two litres of water each day. For most people, on most days, this amount will have you rushing off to the toilet. There is no evidence that it helps flush out toxins, gives you smoother skin or helps with concentration; but at these levels, there are also no indications that it will do you any harm. For most of us, most of the time, water does just fine. It is true that the body loses salts, and that combinations of dextrose sugars help increase water absorption, but the effects are small.

The heat of our summer is set to take its toll on your hydration status. Don't rely on your thirst to tell you when you need to drink more water. Instead, be prepared, drink up early and stay hydrated throughout the day. Before, during and after any exercise, remember to drink. Importantly, if you can avoid strenuous activity or exercise in the heat of the day, do so. Heat exhaustion and dehydration can be dangerous, impairing your judgement and compromising the functions of vital organs.
It's time to rediscover water and the joy of drinking from a glass. Professor David Cameron-Smith is Chair in Nutrition at the Liggins Institute, University of Auckland
Title: Enhancement of vehicle crash and occupant safety: a new integrated vehicle dynamics control systems/front-end structure mathematical model
Author: Elkady, Mustafa
Awarding Body: University of Sunderland
Current Institution: University of Sunderland
http://sure.sunderland.ac.uk/5861/

Occupant safety has become one of the most important research areas, and the automotive industry has increased its efforts to enhance vehicle safety. The aim of this research is to investigate the effect of vehicle dynamics control systems (VDCS) on both the collision behaviour of the vehicle body and the kinematic behaviour of the vehicle's occupant. In this work, a novel vehicle dynamics/crash mathematical model is proposed and developed to co-simulate the crash event together with the VDCS. This is achieved through the novel approach of integrating front-end structure and vehicle dynamics mathematical models. The proposed model integrates anti-lock braking system (ABS) and active suspension control (ASC) models alongside crash structure modelling. The model is developed by generating its equations of motion and solving them numerically, an approach chosen for its speed and accuracy. In addition, a new multi-body occupant mathematical model is developed to capture occupant kinematics before and during the collision. The proposed mathematical models are validated by comparing the simulated results with real crash test data and with results from earlier models. The validation analysis of the vehicle and occupant models shows close agreement, indicating that the models are valid and can be used for different crash scenarios. The numerical simulation results are divided into two parts, for the vehicle and occupant models respectively. For the vehicle model, it is shown that the mathematical model is flexible and useful for optimisation studies.
The results show that the deformation of the front-end structure is reduced, the vehicle body pitching and yawing angles are notably reduced, and the vehicle pitching acceleration is greatly reduced. For the occupant model, it is shown that the VDCS has a significant effect on the rotation of the occupant's chest and head, owing to its effect on vehicle pitching. In addition, the occupant's deceleration is slightly decreased and occupant safety is improved. Keywords: Automotive Engineering
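To give a flavour of what "generating equations of motion and solving them numerically" can look like in miniature, here is a deliberately simplified sketch: a single-degree-of-freedom spring-damper stand-in for a front-end structure striking a rigid barrier, integrated with semi-implicit Euler. This is not the thesis's model – the mass, stiffness and damping values below are hypothetical, and the real model couples crash structure with ABS, active suspension and multi-body occupant dynamics.

```python
# Illustrative sketch only: one-DOF front-end crush model, hypothetical parameters.

def simulate_crash(m=1200.0, k=8.0e5, c=1.0e4, v0=15.0, dt=1e-4, t_end=0.2):
    """Vehicle of mass m (kg) hits a rigid barrier at v0 (m/s).
    Front end modelled as a linear spring k (N/m) plus damper c (N*s/m).
    Integrates m*x'' = -k*x - c*x' with semi-implicit Euler until rebound.
    Returns (peak crush in m, peak deceleration in m/s^2)."""
    x, v = 0.0, v0                  # crush depth and closing velocity
    peak_x, peak_a = 0.0, 0.0
    t = 0.0
    while t < t_end and v > 0.0:    # stop once the vehicle starts to rebound
        a = -(k * x + c * v) / m    # equation of motion
        v += a * dt                 # semi-implicit Euler: update v first...
        x += v * dt                 # ...then x with the new velocity
        peak_x = max(peak_x, x)
        peak_a = max(peak_a, -a)
        t += dt
    return peak_x, peak_a

crush, decel = simulate_crash()
```

Even this toy version shows why the numerical approach is attractive: changing a stiffness or damping parameter and re-running takes milliseconds, which is what makes the optimisation studies mentioned above practical.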
Using Physics to Understand the Heart

Zhen Song, Ph.D., Physics
Advisor: Prof. Alain Karma
Song was the recipient of a prestigious American Heart Association pre-doctoral fellowship.
First Job: Cardiovascular Research Laboratories, David Geffen School of Medicine at UCLA (post-doc)

"The American Heart/Stroke Association is committed to identifying and supporting specific science areas deemed vital to achieving their mission and strategic objectives," says Song. "The Association has established partnerships with various organizations to fund critical-need, high-impact, and focused research programs. I currently hold an American Heart Association (AHA) pre-doctoral fellowship, which aims at helping students initiate careers in cardiovascular and stroke research by providing research assistance and training."

Song received the AHA pre-doctoral fellowship in July 2011. "Under the guidance of Professor Karma, I proposed a project using computer models to study arrhythmogenic effects of calsequestrin mutations," he says. "Triggered activity often causes life-threatening reentrant cardiac arrhythmias. Various forms of triggered activity have been linked with mutations of one or several cardiac membrane ion channels in the setting of the inherited Long QT (LQT) syndrome, or with mutations of calcium cycling proteins as in the setting of Catecholaminergic Polymorphic Ventricular Tachycardia (CPVT), which is an inherited life-threatening electrical disturbance of the heart. Carriers of LQT mutations are at risk for polymorphic ventricular tachycardias such as torsade de pointes (TdP) and/or sudden cardiac arrest."

Song says considerable progress has been made in the molecular characterization of various cardiac gene mutations in several congenital diseases, but arrhythmogenic mechanisms of triggered activity are varied and complex and remain incompletely understood, even at the cellular level.
"A main reason is that triggered activity at this level results from the complex interaction of a very large number of cardiac membrane ion channels and calcium cycling proteins," he says. "Thus, it is generally extremely difficult, if not impossible, to predict the effect of one defective gene-coded functional protein, taken in isolation, without considering its interaction with all the other normally-functioning cardiac proteins. From this standpoint, in-silico electrophysiological computer models of cardiac activity provide a powerful tool to study this complex interaction in order to uncover basic mechanisms of triggered activity and arrhythmias. The overarching goal of my proposed doctoral research is to further develop and use a new physiologically detailed in-silico electrophysiology model of the ventricular myocyte to gain basic insights into calcium-mediated cellular mechanisms of triggered activity."

Overall, Song says his research goal is to understand the basic arrhythmogenic mechanisms of CPVT, which occurs in genetically predisposed individuals without structural cardiac abnormalities. "It is typically manifested as syncope in the setting of physical activity or acute emotion," Song says. "Even though CPVT is a rare disorder, it is estimated to account for roughly 15 percent of all sudden cardiac deaths in young people. Even though we focus specifically on a rare genetic disorder, the insights derived from this study are also expected to be relevant for other diseases such as the LQT syndrome and heart failure. Also, while this investigation is limited to a cellular level, we expect the insights to provide a basis to understand mechanisms of triggered activity at the organ level, where they become even more complex. Lastly, ventricular fibrillation remains a major cause of sudden death in the US and worldwide.
The novel insights into mechanisms of triggered activity from the study should provide an improved basis for risk stratification in a broad population and the development of reliable antifibrillatory drug therapies." Song began a postdoctoral fellowship this spring at Cardiovascular Research Laboratories, David Geffen School of Medicine, at UCLA.
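For readers curious what an "in-silico electrophysiology model" looks like in its simplest form, here is a minimal sketch using the classic FitzHugh-Nagumo equations, a textbook two-variable caricature of an excitable cell. It is emphatically not the physiologically detailed myocyte model described above – that model tracks many ion channels and calcium cycling proteins – but it shows the basic ingredients: coupled nonlinear ODEs integrated numerically to reproduce action-potential-like dynamics.

```python
# Illustrative sketch only: FitzHugh-Nagumo excitable-cell model with
# textbook default parameters, integrated with forward Euler.

def simulate_fhn(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, t_end=200.0):
    """Integrate dv/dt = v - v^3/3 - w + I, dw/dt = eps*(v + a - b*w).
    v is the fast voltage-like variable, w the slow recovery variable.
    Returns the time trace of v."""
    v, w = -1.2, -0.6           # a typical resting-state initial condition
    trace = []
    for _ in range(int(t_end / dt)):
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dv * dt            # forward Euler step
        w += dw * dt
        trace.append(v)
    return trace

trace = simulate_fhn()
```

With a sustained stimulus current I = 0.5 the model fires repetitive spikes, the sort of qualitative behaviour that detailed myocyte models reproduce quantitatively, channel by channel.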
Writing Like Michelangelo and Da Vinci

Italy's culture exists in epic proportions. Art was everywhere in Italy – not just in museums, but on the floors beneath my feet, on the sides of buildings, even in rubbled ruins. Art was everywhere and I could not get enough. I never knew I would be stunned, yes – stunned, by the statues and artwork I saw during my whirlwind trip to that boot-shaped country. I marveled at the ambitious excavations in Pompeii, at the ability of Da Vinci to capture the element of surprise on his disciples' faces in the Last Supper, and at the mosaic of golden opulence of St. Mark's Cathedral. As I walked through the Doge's Palace in Venice, I gasped aloud at the ornate paintings. Somewhere amidst these lavish displays of famous statues and paintings, I had an epiphany about my own form of self-expression. I'm not sure when exactly the idea started to form; perhaps the dawning realization first materialized as I gazed up at the Sistine Chapel ceiling. It had never occurred to me that subtle images could hold the capacity to elicit powerful emotions until I gazed upon the most innocent of gestures: two index fingers (God's and Adam's) almost touching. That simple visual drew forth a wide range of feelings in me: hope, humility, kindness, love. If viewing a painting could bring up this level of complex emotions, what might be possible if an art form had all four senses at its disposal? Writers are spoiled and we don't even know it. Da Vinci and Michelangelo tell a whole story through inanimate objects. They, like all painters and sculptors, have only sight to convey meaning. Even modern-day movies, where the senses are limited to sight and sound, have more luxuries for context than these ancient masters of clay and paint had. True, film-makers get background music to create mood. But we writers have a full gamut of tools.
We can twist words to create setting and emotion in a way that lets readers experience sight, sound, taste, and texture. As I studied Michelangelo’s work, I noticed the visual elements he used to capture Mary’s seemingly calm acceptance of the loss of her son in La Pietà. I noted how Mary’s left hand faces palm up and how the slight angle of her head is inclined toward her son; even the fact that her eyes are closed helped communicate a specific emotion. I can’t recall a time when I’ve described hand position to elicit a given mood. What a wonderful technique to reveal compassion or tenderness. The next time I get stuck attempting to convey one of the myriad emotions a character is feeling, I plan to flip open a book of art. Until I had the awesome experience of touring Italy, it had never occurred to me that studying paintings and sculpture could serve as a tool for capturing depth and intimacy in writing.
Multicultural Student Center presents symposium on ‘race & place’ The Multicultural Student Center (MSC) and Institute for Justice Education and Transformation (IJET) at the University of Wisconsin–Madison will hold their annual spring symposium “Race &…” to encourage dialogue and action around racial identity and other social justice issues. The two-day symposium, March 14-15 at the Pyle Center on the UW campus, serves as a capstone to IJET’s 2012-13 programming around “Race & Place: Movement, Space, Land, and Power.” Fifteen scholars, professors, practitioners and experts from the Madison community in multiple disciplines and fields will be presenting topics on the intersections between race and various places. Monica White, assistant professor of environmental justice at UW–Madison’s Nelson Institute for Environmental Studies and Department of Community and Environmental Sociology, will kick off the symposium with a public lecture, “Reclamation, Reconnection, and Resistance: Black Farmers, Food Security and Justice,” the evening of March 14. White documents the history of black farmers’ collectives, cooperatives, and experiences in the Midwest, and studies how grassroots organizations and communities of color engage in developing sustainable community food systems in response to issues of hunger and food inaccessibility. “It’s important to our freedom that we control our food. The reason people engage in agriculture is different based on race. Communities of color have been concerned about the environment, air and water, but we haven’t heard their voices using concepts such as sustainability,” White explains. Other featured guests are Becky Martinez, social justice consultant, trainer, and founder of Infinity Martinez Consulting; Lisa Brock, academic director of Kalamazoo College’s Arcus Center for Social Justice Leadership; and Townsand Price-Spratlen, professor of sociology at the Ohio State University. 
Topics covered in sessions on Friday, March 15 include: Discussion on multi/bi-racial ethnic identity; Emerging scholars panel with UW–Madison graduate students speaking on black lesbian and trans masculinities, black identity in James Van Der Zee’s photography, and queer South Asian orientations in media; Hosted lunch discussions on race and higher education, the prison industrial complex, environmental justice, media, the LGBTQ movement in Dane County and outside the United States; Presentation on the oppressive history of race, documentation and the body; Workshop on framing and messaging racial justice in media; Training on people of color allyship for self-identified people of color to take action to move toward racial justice in solidarity with one another; Presentation exploring the effects of Eurocentrism and colorism, sexism, heterosexism, and ableism on identity with artist, performance poet, and educator Sharon Powell; and Panel with community activists and scholars on race and incarceration in Dane County and beyond. The full schedule can be viewed here.
Leadership for Healthy Communities Rising rates of obesity have created a significant public health challenge. In 2009, 23 states saw increases in overall obesity rates, and 30 states reported that 30 percent or more of children and adolescents were overweight or obese. Accompanying these rising rates of obesity are increases in chronic diseases, including type 2 diabetes, high blood pressure, and coronary heart disease, and pediatricians today are diagnosing more cases of these formerly adult conditions in children. Obesity, coupled with these related chronic diseases, threatens to make this generation of children the first in 200 years to lead unhealthier lives and have shorter lifespans than their parents. Cities, towns, and counties play an influential role in improving residents’ access to healthy foods and their ability to be physically active through a range of policies and programs, as well as through their land use and zoning authority. The International City/County Management Association (ICMA) is dedicated to supporting programs and policies that promote active living and access to healthy foods. Through 2009, ICMA was involved in Leadership for Healthy Communities, a national program of the Robert Wood Johnson Foundation, formerly known as Active Living Leadership. The program supports state and local government leaders in efforts to reduce childhood obesity through public policies that promote active living, healthy eating, and access to healthy foods. As a former partner in this initiative, ICMA supports government leaders in their efforts to create and implement policies, programs, and places that achieve these goals. Through its work with Leadership for Healthy Communities, ICMA partnered with the National Association of Counties (NACo) and the American Association of School Administrators (AASA) to promote collaboration among local governments and schools in their efforts to combat childhood obesity.
Their Healthy Communities Network initiative started with a full day of facilitated dialogue among local leaders in Pascagoula, Mississippi, in 2007, and continued with six more dialogues in the Southeast and Southwest. To learn more about ICMA’s work on active living or healthy eating, visit the topic pages on the Knowledge Network. ICMA members who are interested in ongoing peer exchange and technical assistance related to healthy communities can join the Healthy Communities Group on the Knowledge Network. For more information contact Anna Read at aread@icma.org.
Bulletin of Marine Science, Volume 76, Number 2 Comparative sustainability mechanisms of two hake (Merluccius gayi gayi and Merluccius australis) populations subjected to exploitation in Chile Authors: Payá, Ignacio; Ehrhardt, Nelson M. Source: Bulletin of Marine Science, Volume 76, Number 2, April 2005, pp. 261-286(26) Publisher: University of Miami - Rosenstiel School of Marine and Atmospheric Science Common (Merluccius gayi gayi Guichenot, 1848) and southern (Merluccius australis Hutton, 1872) hakes inhabit the central (32°–41°S) and the southern (41°–57°S) regions off the coast of Chile, respectively. Both species support important trawl and longline fisheries. The common hake fishery started in the 1940s, while the southern hake fishery began in the 1970s. During the last 10 yrs the abundance of common hake has increased while that of the southern hake has decreased. At present, the common hake stock is fully exploited and the southern hake is overexploited. We review several biological, fishery, and environmental aspects that influence the historical abundance of each hake. Significant differences exist regarding natural mortality, growth rates, and reproductive and feeding dynamics. The timing of the spawning season is similar for both hakes and is synchronized with increasing turbulence and upwelling. The resilience of each species to exploitation and environmental changes was analyzed relative to their stock-recruitment (S-R) relationships coupled to environmental variables. The common hake exhibits a Ricker-type S-R relationship with clear compensatory processes due to cannibalism, with significant deviations from this model explained by environmental changes. The southern hake has an almost linear S-R relationship with little evidence of compensatory components. Generalized additive models (GAM) show that El Niño/Southern Oscillation (ENSO) events have positive effects on the recruitment of both hake species.
Benchmarks consisting of extinction parameters and Fmsy were calculated based on S-R relationships and spawning per recruit analysis. Simulations of constant catch and constant exploitation were used to portray the differences in resilience of each species to exploitation relative to the benchmarks. Results indicate that the abundance of the common hake is significantly more impacted by environmental conditions, while the abundance of the southern hake is controlled by exploitation regimes, with a rather low population response due to the almost total lack of compensatory mechanisms in the spawner-recruit relationship.
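The contrast the abstract draws between a compensatory Ricker-type S-R curve and a near-linear S-R relationship can be sketched numerically. The parameter values below are illustrative assumptions, not fitted values from the study.

```python
import math

# Sketch of the two stock-recruitment (S-R) shapes contrasted in the abstract:
# a Ricker curve R = a*S*exp(-b*S), whose dome reflects compensation
# (e.g., cannibalism in common hake), versus a near-linear relationship with
# little compensation (southern hake). Parameters are illustrative only.

def ricker(s, a=2.0, b=0.01):
    """Ricker recruitment: rises, peaks at S = 1/b, then declines."""
    return a * s * math.exp(-b * s)

def near_linear(s, a=0.8):
    """Almost proportional recruitment: no compensatory dome."""
    return a * s

# Compensation shows up as declining recruitment at high stock sizes,
# while the linear form keeps rising with stock size:
print(ricker(50), ricker(100), ricker(300))   # dome peaks at S = 1/b = 100
print(near_linear(50), near_linear(100), near_linear(300))
```

The dome in the Ricker curve is what gives the common hake its resilience: recruitment recovers when the stock is fished down, whereas the near-linear form offers no such buffering.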
First Aid for Falls in Cats Even though cats usually land on their feet, they can still sustain injuries when they fall. Prepared cat owners should be aware of the problems that can result when a feline takes a tumble. Sprains, broken bones, head trauma, and chest or abdominal injuries may result when felines fall. If you see your cat fall, observe him carefully for a couple of days. Some injuries are immediately obvious while others don’t become apparent until hours after the fall. Even if you don't see your pet take a tumble, you should suspect a fall if you notice any of the following signs: Reluctance to stand or walk Pain upon lying down or rising Stiff gait Limping Decreased appetite or difficulty eating First Aid for Falls Serious injuries from falls need to be immediately evaluated and treated by a veterinarian. But here are first aid steps to implement at home as you prepare to take your cat to the veterinary clinic: Monitor breathing. If your cat struggles to breathe, proceed right away to the nearest emergency clinic. Remember that cats are “nose breathers”, so panting is a sign of respiratory distress. Cats with respiratory problems need to be transported very carefully, especially if ribs were broken. Support the cat behind the front legs and in front of the back legs and gently place him in a pet carrier. If you don’t have a pet carrier, use a rigid object like a baking sheet as a gurney. Cats with broken ribs should stay very still to avoid lung puncture, so don’t let him struggle--if he prefers to lie down or sit up, let him do it. If your pet stops breathing, ventilate him to keep his oxygen level up. To assist respiration, make a funnel by wrapping your hand(s) around his muzzle. Keeping his mouth closed, blow air into his nose. Proper ventilation should make his chest rise. Give 15-20 breaths per minute until he starts breathing on his own, or until you have reached the emergency hospital. Protect open wounds.
If the skin was broken during the fall, wrap a clean towel over the area to minimize contamination. It is particularly important to cover a wound that has a broken bone protruding from it. Bone infections can seriously complicate healing. Puncture wounds to the abdomen should also be covered to minimize infection from outside contaminants; however, if the intestines are punctured, infection could start from within. Your veterinarian will assess this problem. Control bleeding. If the wound is bleeding, wrap the towel tightly around the injured site and apply gentle, but firm pressure. If the towel becomes soaked, do not remove it. Just put another towel on top of it to avoid disturbing the clot. Most bleeding stops within 5-10 minutes; however, cats with clotting disorders may take longer. Excessive bleeding may occur if the spleen or liver was injured, so prompt emergency care is vital. Monitor the cat for several days. Sometimes, cats appear normal after a fall as they walk around and play. Later, they become lethargic and weak or develop difficulty breathing, so it’s important to monitor them closely for several days after a fall. Delayed injuries include collapsed lungs caused by punctures from broken ribs, or hernias that start as small openings and rip open later. Diaphragmatic hernias occur when there is a tear in the wall separating the chest from the abdomen. If abdominal organs (liver, stomach, intestines) move into the chest cavity, respiration is impaired. Hernias may also occur in the abdominal wall, creating pockets that trap intestines, bladder, or other organs. These delayed problems are emergencies that require prompt attention. Transport the cat with the injured side down as you head to the veterinary clinic. Look for head injuries. Blood in the eyes, nose, or mouth means possible head injury. Cats will usually swallow blood that pools in the mouth and lick blood that flows from the nose, so there is no need to control the bleeding.
Just proceed to the veterinary clinic. Be aware of back injuries. A cat that can’t get up at all may have sustained a back injury and should be kept as still as possible. Gently place the cat on a rigid object like a baking sheet. Cover him with a blanket and seek emergency help. Monitor eating and elimination. Broken jaws occur frequently when cats fall. Watch your cat eat and drink. If he drops food, yelps when he chews, or drools excessively, have him examined by your veterinarian. Monitor his eliminations. If he doesn’t have a normal bowel movement within 48 hours or if he does not urinate within 24 hours, seek help. Your cat may have ruptured his bladder or the bladder may be impinged in an abdominal hernia. Lack of normal urination and defecation can be signs of something serious. Cats are naturally climbers, so it’s not easy to prevent them from jumping on the sofa or counter tops. Cat owners should always be prepared to handle the unexpected. After all, cats may not always land on their feet. This client information sheet is based on material written by: Lynn Buzhardt, DVM
Fragmented QRS and abnormal creatine kinase-MB are predictors of coronary artery disease in patients with angina and normal electrocardiograms Jung Joo Lee, Jae Hoon Lee, Jin Woo Jeong, Jun Young Chung Department of Emergency Medicine, Dong-A University College of Medicine, Busan, Korea Correspondence to Jae Hoon Lee, M.D. Department of Emergency Medicine, Dong-A University College of Medicine, 26 Daesingongwon-ro, Seo-gu, Busan 49201, Korea Tel: +82-51-240-5590 Fax: +82-51-240-5309 E-mail: leetoloc@dau.ac.kr Received 2015 April 30; Revised 2015 July 6; Accepted 2015 July 9. Patients with symptoms of coronary artery disease (CAD) often display normal tracings or only nonspecific changes on electrocardiography (ECG). The aim of this study was to explore strategic elements of the ECG and other potential factors that are predictive of CAD in this scenario. This was an observational study of 142 patients with the chief complaint of chest pain, each of whom presented with a normal ECG and was subjected to emergency coronary angiography (CAG). Two population subsets were identified: those patients (n = 97) with no significant stenotic lesions and those (n = 45) with significant stenotic lesions of CAD. Those patients with normal or nonspecific ECGs and CAD (15.8%) were more likely to have left circumflex artery involvement (20% vs. 7%). In patients with normal ECGs and CAD (vs. normal CAG), male sex (86.7% vs. 68%, p = 0.023), creatine kinase-MB (CK-MB) levels > 10 U/L (13 vs. 10, p = 0.025), and fragmented QRS (fQRS) (38.6% vs. 21.6%, p = 0.042) occurred with greater frequency. In multivariable analysis, the following variables were significant predictors of CAD, given a normal ECG: male sex (odds ratio [OR], 2.593; 95% confidence interval [CI], 1.068 to 5.839); CK-MB (OR, 2.497; 95% CI, 0.955 to 7.039); and W- or M-shaped QRS complex (OR, 2.306; 95% CI, 0.988 to 5.382).
In our view, male sex, elevated CK-MB (> 10 U/L), and fQRS complexes raise suspicion for CAD in patients with angina and unremarkable ECGs and should be considered in screening. Keywords: Coronary disease; Myocardial infarction; Electrocardiography; Angiography Electrocardiography (ECG) is pivotal in evaluating patients with suspected coronary artery disease (CAD). However, in patients who present to the emergency department (ED) with a chief complaint of chest pain, the initial ECG may be normal despite later discovery of CAD (Fig. 1). The rate at which this phenomenon occurs (3% to 16%) varies widely in the literature [1,2]. Normal electrocardiogram in a patient with 3-vessel disease. Patients who present to the ED with chest pain and with normal or nonspecific ECG interpretations have low rates of mortality and cardiac complications [3] and thus are considered low-risk [4]. Nevertheless, a normal or nonspecific ECG tracing does not exempt a patient from risk for cardiovascular events. Normal ECGs may contribute to increased mortality in patients who present to the ED and consequently are not hospitalized for acute cardiac ischemia [5]. There is also the possibility that changes consistent with myocardial ischemia may develop later, despite initial ECG tracings that are normal [6]. In one large study of patients with initially normal or nonspecific ECGs, respective hospital mortality rates of 5.7% and 8.7% were recorded [7]. To date, no algorithm exists to reliably predict adverse cardiovascular outcomes in patients with potential CAD based on a normal ECG alone (i.e., in the absence of ischemic changes) [8] or in conjunction with cardiac markers [9] or a history of angina [10]. Emergency echocardiography is a promising diagnostic tool, but it too has limitations. For example, regional wall motion abnormality (RWMA) was found in 22% of patients without CAD, and 12% of patients with CAD showed no evidence of RWMA [11].
In patients with normal or nonspecific ECGs, a means of differentiating those with and without CAD is clearly needed. The objective of our study was to determine the incidence of CAD in patients with normal or nonspecific ECGs, evaluating specific ECG features and other clinical parameters that are potentially predictive of CAD in this context. A retrospective observational study was conducted that consisted of reviewing all hospital admissions where patients presented to our ED with chest pain or discomfort that was suspicious of CAD, had ‘normal’ or ‘nonspecific’ initial ECG interpretations, and had undergone emergency coronary angiography (CAG) based upon clinical assessments, including a history of recurrent or ongoing angina symptoms, elevated cardiac biomarkers, a high-risk previous history or risk scores, and echocardiographic findings of heart dysfunction. The proportions of patients with verifiable significant stenotic lesions (of CAD) and those with no significant stenotic lesions were identified. Study settings and population This study was conducted at a high-end medical facility that functions as one of the regional cardiocerebrovascular centers in South Korea. Its ED serves an urban population, with approximately 30,000 adult-patient visits per year. Data were compiled from 463 patients who consecutively presented between March 2010 and June 2014 and met all criteria as follows: (1) non-traumatic chest pain or discomfort that was suspicious of CAD as the chief complaint; (2) normal or nonspecific ECG interpretation, with no arrhythmia or ischemic change; and (3) CAD screening via CAG. Study protocol All patients who had undergone CAG were entered into a so-called ‘normal-ECG’ registry. Registry data included baseline characteristics, clinical presentation, and outcomes of diagnostic testing (i.e., cardiac biomarkers, ECG, and echocardiography).
Subjects in the normal-ECG registry were stratified into two groups based on CAG results: patients who presented with normal CAGs (n = 97) and those who presented with abnormal CAGs (n = 45) involving a minimum of 70% stenosis in at least one vessel. The clinical characteristics that were assessed were age, sex, duration of chest pain, typical chest pain, and known CAD. ‘Typical chest pain’ was equated with a sensation of squeezing and pressure (or heaviness). Pleuritic, positional, and prickly (or sharp) chest pain were excluded [10,12]. Patient eligibility was stipulated by an initial ECG that had been interpreted as ‘normal’ or ‘nonspecific’ according to Standardized Reporting Guidelines [13]. ECGs that read as ‘abnormal but not diagnostic of ischemia’ were also acceptable, with the following exclusions: bundle branch block, left ventricular (LV) hypertrophy with strain, minimal ST-T change in two or more contiguous leads, and pacemaker rhythm. Investigation of normal ECGs focused on duration of QRS complex, Q-wave characteristics, convexity of ST segment, corrected QT interval, and fragmented QRS (fQRS). An fQRS is defined by the presence of an additional R wave (R’) or notching in the nadir of the S wave, or the presence of > 1 R’ (fragmentation) in two contiguous leads with or without a typical bundle-branch block on a 12-lead ECG. Mean value, as the average of the three inferior leads, was used for estimating the duration of the QRS complex or Q wave. Emergency echocardiographic screenings for CAD were also reviewed. Patients with echocardiographic evidence of RWMA prior to CAG were allowed, although changes (RWMA, ejection fraction) in echocardiographs that were done after the CAG were grounds for exclusion given the potential impact of cardiac catheterization on the motion of the cardiac wall. 
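As a rough illustration of the fQRS criterion defined above (an additional R wave, or R', within the QRS complex), a toy peak-counting rule might look like the following. The signal samples and the peak logic are simplified assumptions for illustration, not a clinical detection algorithm.

```python
# Toy illustration of the fQRS criterion described in the text: flag a QRS
# complex as "fragmented" when it contains more than one positive peak
# (i.e., an additional R'). Amplitudes are in hypothetical mV relative to
# baseline; real fQRS reading also considers S-wave notching and requires
# the finding in two contiguous leads.

def count_positive_peaks(qrs):
    """Count strict local maxima above the baseline (0 mV)."""
    return sum(1 for i in range(1, len(qrs) - 1)
               if qrs[i] > 0 and qrs[i - 1] < qrs[i] > qrs[i + 1])

def is_fragmented(qrs):
    """fQRS surrogate: more than one R-wave peak in the complex."""
    return count_positive_peaks(qrs) > 1

smooth_qrs = [0.0, 0.4, 1.2, 0.5, -0.3, 0.0]   # single dominant R wave
rsr_qrs = [0.0, 0.9, 0.3, 0.7, -0.2, 0.0]      # R with an extra R' notch
print(is_fragmented(smooth_qrs), is_fragmented(rsr_qrs))  # False True
```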
To evaluate the relationships between risk factors and CAD in subjects in the normal-ECG registry, all categorical independent variables with more than two values were analyzed using the Fisher exact test, and the Mann-Whitney U test was applied to all continuous independent variables. The significance of these relationships was further tested through univariable and multivariable binary logistic regression analyses. All calculations relied on SPSS version 21 (IBM Co., Armonk, NY, USA), with statistical significance set at p < 0.05. Incidence of patients with normal or nonspecific ECG interpretations Of the 463 patients who had been admitted with chest pain or discomfort and subjected to CAG, initial ECGs (performed in our ED) were interpreted as normal or nonspecific in 142 cases. In addition, 286 of these 463 patients were diagnosed with CAD, including 45 of the 142 patients with normal or nonspecific ECG readings. The rate of normal or nonspecific ECG interpretations among patients with CAD was 15.8%. Results of coronary angiography CAD was defined as a 70% or more narrowing of the luminal diameter of the coronary artery by CAG. CAG was performed on all 463 patients who had accrued during the 3.25-year study timeframe, and in 286 of these patients, significant stenotic lesions were documented as single-vessel (left anterior descending artery [LAD, 29%], right coronary artery [RCA, 19%], or left circumflex artery [LCX, 7%]), or double-vessel (28%) or triple-vessel/left main (17%) CAD. In the 45 patients with normal or nonspecific ECGs and significant stenotic lesions, single-vessel disease predominated (LAD, 24%; RCA, 24%; LCX, 20%), with fewer instances of double-vessel (27%) or triple-vessel/left main (13%) disease; LCX lesions were also observed more frequently (20% vs. 7%) than in the all-inclusive group with CAD unrestricted by ECG. Differentiating patients with normal or nonspecific ECGs by CAG group (CAD vs.
normal) Patients with CAD were more apt to be male (86.7% vs. 68%, p = 0.023), with notching of the QRS complex (fQRS) on ECG (38.6% vs. 21.6%, p = 0.042), compared with patients of normal status (Table 1). However, persistent chest pain (57.5% vs. 61.9%, p = 0.696) and chronic ischemic injury caused by old myocardial infarction (MI) (33.3% vs. 20.6%, p = 0.142) did not differ significantly by group. Characteristics of 142 patients with angina and normal electrocardiograms Initial troponin I levels of patients with CAD exceeded those of patients with normal CAGs, although not to a statistically significant extent (0.038 ng/mL vs. 0.02 ng/mL, p = 0.202). In contrast, creatine kinase-MB (CK-MB) levels showed a positive correlation with acute coronary lesions (13 U/L vs. 10 U/L, p = 0.025). At a threshold > 10 U/L defined by the abnormal criteria of the biochemical test in our hospital (sensitivity, 75.6%; specificity, 47.3%), the accuracy of CK-MB in discriminating patients with significant stenotic lesions from normal counterparts was 0.621 (95% confidence interval [CI], 0.534 to 0.704), as estimated by the area under the receiver operating characteristic curve (Fig. 2). Receiver operating characteristic curve showing discriminatory capability of creatine kinase-MB > 10 U/L. Area under curve (i.e., accuracy) is 0.621 (95% confidence interval, 0.534 to 0.704). Pathologic Q waves in the inferior leads (0.5 mm vs. 0.8 mm, p = 0.162), changes in the Q wave in the aVR lead (1 mm vs. 1 mm, p = 0.477), and prolongation of QRS duration (2 mm vs. 2 mm, p = 0.547) were not distinctive in patients with CAD. Moreover, the impact of convex or concave ST-segments by group was uncertain (6.7% vs. 8.2%, p = 1.000), and corrected QT intervals did not differ significantly by group (436 msec vs. 436 msec, p = 0.584).
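The sensitivity, specificity, and area-under-the-ROC-curve figures quoted above follow from standard definitions; a minimal sketch with hypothetical CK-MB values (not the study's data) shows how such numbers are computed for a single-marker cutoff.

```python
# Illustrative sketch of how sensitivity, specificity, and AUC are derived
# for a single-marker threshold test such as CK-MB > 10 U/L. The value
# lists are hypothetical, not the study's data.

def threshold_metrics(cad, normal, cutoff):
    """Classify 'positive' when the marker exceeds `cutoff`."""
    tp = sum(1 for v in cad if v > cutoff)      # CAD correctly flagged
    fn = len(cad) - tp                          # CAD missed
    tn = sum(1 for v in normal if v <= cutoff)  # normals correctly cleared
    fp = len(normal) - tn                       # normals falsely flagged
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

def auc(cad, normal):
    """Rank-based AUC: probability a random CAD value outranks a random
    normal value (equivalent to the area under the ROC curve)."""
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0
               for c in cad for n in normal)
    return wins / (len(cad) * len(normal))

cad_ckmb = [13, 17, 9, 12, 25, 11, 8, 14]     # hypothetical CK-MB, U/L
normal_ckmb = [10, 7, 12, 9, 6, 11, 8, 15]
sens, spec = threshold_metrics(cad_ckmb, normal_ckmb, cutoff=10)
print(sens, spec, auc(cad_ckmb, normal_ckmb))
```

The low specificity the study reports (47.3%) reflects exactly this kind of overlap between the two value distributions: many patients without CAD also exceed the cutoff.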
Within the subset of patients who had undergone emergency echocardiography prior to CAG, RWMA was rigorously investigated with respect to CAD, but it did not differ significantly by group (31.8% vs. 16.9%, p = 0.221). In multivariable models, the odds ratios (ORs) for each variable reflected significant group differences as follows: male sex (OR, 2.593; 95% CI, 1.068 to 5.839); abnormal CK-MB (OR, 2.497; 95% CI, 0.955 to 7.039); and fQRS (OR, 2.306; 95% CI, 0.988 to 5.382) (Table 2). Hence, these parameters constituted significant predictors of CAD in our patients with angina and normal ECGs. Additionally, although we examined whether male sex, fQRS, and abnormal CK-MB were related to degree of stenosis, no statistical association was observed (p = 0.372, p = 0.691, and p = 0.175, respectively). Univariable analysis of factors related to coronary artery disease in patients with angina and normal electrocardiograms The ECG of one such patient is illustrated in Fig. 1. This particular male showed fQRS in two leads (III and aVF), with a CK-MB level (17 U/L) above the threshold and a troponin I level of 0.006 ng/mL. CAG confirmed stenosis of the proximal LCX (80%), mid LAD (80%), and mid RCA (100%). The ECG is a valid means of risk stratification for patients who present on an emergency basis with chest pain or discomfort. Studies indicate that low-risk patients with chest pain may be identified upon presentation at the ED by clinical evaluation plus ECG [14], and those patients with normal ECGs are considered low-risk [15,16]. However, clinically significant (albeit lower) short-term mortality rates have been recorded in patients with CAD and normal or nonspecific ECG tracings, compared with similar patients whose ECGs are abnormal [7], and a normal ECG has been cited as a significant factor in the failure to diagnose acute MI at emergency facilities [5].
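The odds ratios and Wald 95% confidence intervals reported from the logistic-regression models follow from the fitted coefficients via OR = exp(β) and CI = exp(β ± 1.96·SE). A small sketch demonstrates the conversion; the standard errors below are illustrative assumptions, and the coefficients are simply back-calculated from the ORs quoted in the text, not taken from the study's fitted model.

```python
import math

# Hedged sketch: converting logistic-regression coefficients into odds
# ratios (ORs) with Wald 95% confidence intervals,
#   OR = exp(beta),  CI = exp(beta +/- 1.96*SE).
# The SE values are assumptions for illustration only.

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, CI lower bound, CI upper bound) for one coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Coefficients back-calculated from the ORs reported in the text; SEs assumed.
predictors = [
    ("male sex", math.log(2.593), 0.42),
    ("abnormal CK-MB", math.log(2.497), 0.51),
    ("fQRS", math.log(2.306), 0.43),
]
for name, beta, se in predictors:
    or_, lo, hi = odds_ratio_ci(beta, se)
    print(f"{name}: OR {or_:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Note that a CI whose lower bound dips below 1 (as for CK-MB and fQRS in the paper's Table 2) corresponds to a coefficient whose Wald p value exceeds 0.05.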
Thus, the broad generalization that normal ECGs are low-risk may limit detection of ischemia in the following circumstances: (1) occlusive lesions of the LCX [17]; (2) prior acute MI; (3) established CAD [18]; (4) adequate collateral coronary circulation [19]; (5) transiently normal ECG [20]; and (6) small infarctions [17,21]. Our study illustrates that in patients with CAD, those with normal or nonspecific ECGs had a higher frequency of lesions involving the LCX. Chua et al. [17] similarly demonstrated that LCX-related acute MI presenting without ST-T changes has been underdiagnosed in ED settings, with ST-T changes absent in 23% of patients who were suffering LCX occlusion. In contrast, Caceres et al. [2] found that the frequency of LCX involvement in patients did not differ by ECG status (normal vs. abnormal). Notching of QRS complexes has been the subject of much discussion in many recent studies. It is commonly held that fQRS correlates with an array of disorders, including CAD, dilated cardiomyopathy, cardiac sarcoidosis, arrhythmogenic right ventricular cardiomyopathy, Brugada syndrome, and long QT syndrome, as shown by myocardial perfusion-gated SPECT (single photon emission computed tomography) studies, CAG, and late gadolinium-enhanced cardiac magnetic resonance imaging [22-24]. In addition, fQRS has been identified as an independent prognosticator of mortality or cardiac events [25] in the setting of MI [23,26], with some evidence that equates the number of leads affected with the severity of the coronary lesions [27,28]. As demonstrated by Boineau [29], multiple MIs may lead to ECG modifications, with loss of Q waves and development of M- or W-shaped QRS complexes (fQRS). Other recent studies have shown that fQRS correlates poorly with myocardial scarring and is instead related to functional or anatomical ventricular abnormalities [30-32].
According to Lee and Goldman [33], the CK-MB level at the early onset of chest pain is a more accurate and sensitive index of MI than troponin I or troponin T. Wang et al. [21] discovered that among patients who presented with non-ST-segment elevation acute coronary syndromes, higher CK-MB (p < 0.0001) and troponin I (p = 0.001) levels corresponded with coronary arterial occlusion, which was documented by CAG in 27% (528/1,957) of their subjects. We contend that male sex, abnormal CK-MB, and fQRS can be used as important diagnostic tools to predict coronary occlusion in patients with suspected CAD and normal ECGs. The association between CAD and a number of predictors (male, p = 0.696; fQRS, p = 0.053) was weaker in multivariable analysis than in univariable analysis. The p values exceeding 0.05 in the multivariable analysis likely reflect the small sample size; large multicenter studies may yield clearer results. Herein, we evaluated fQRS complexes in normal patients without CAD, whereas most prior studies of fQRS have looked at high-risk diseases such as MI or cardiomyopathy; few researchers have focused on the significance of fQRS complexes in the general population. Although a number of variables in our collected data generated statistically negative outcomes, the significance of these variables is still debatable. A Q wave is generally abnormal if its duration is 0.04 seconds or longer in lead I, in all three inferior leads (II, III, aVF), or in leads V3 through V6 [34], and a pathologic Q wave is one involving two or more contiguous leads. The presence of a Q wave in the aftermath of MI is associated with higher in-hospital morbidity and mortality [35]. Furthermore, a history of prior MI in patients with non-ST-elevation CAD poses a significant risk [21]. Some sources have shown that prolonged corrected QT intervals are predictive of cardiac events after MI [36].
Elsewhere, however, pathologic Q waves, corrected QT intervals, and previous ischemic heart disease were unrelated to prognosis (i.e., major adverse cardiac events) [37]. Still, one publication does maintain that fQRS complexes are superior to pathologic Q waves in terms of their sensitivity to and negative predictive value for myocardial scarring [38]. We similarly contend that fQRS is a better index of CAD than pathologic Q waves or a history of prior MI. In our hands, regional wall motion abnormality (RWMA) detected by emergency echocardiography (done prior to CAG and performed by a cardiologist or sonographer) curiously showed no statistical association with significant stenotic lesions of a coronary artery. Possible explanations are as follows: (1) latent stress-related cardiomyopathy, such as isolated basal LV dysfunction, global LV hypokinesia, and other wall motion abnormalities in non-coronary distribution (aside from Takotsubo cardiomyopathy) [39], in patients with angina and normal ECGs; (2) residual RWMA from previous MI; (3) RWMA undetected because of minute or diffuse multiple coronary lesions; (4) execution error by untrained cardiologists or sonographers; and (5) paucity of suitable cases. This study has a number of acknowledged limitations, one being the limited (for statistical purposes) number of patients with normal ECGs who were admitted for angina through our emergency services. Another is the retrospective nature of the study and its restriction to a single institution. Nevertheless, nearly all required data were collected successfully, with the exception of some laboratory results. Finally, collateral coronary arterial circulation was not investigated owing to the incompleteness of our records, and patients (with potential CAD) who did not undergo CAG were excluded from the study. In conclusion, our data indicate that male sex, CK-MB levels beyond a set threshold, and fQRS complexes on ECG are predictive of CAD in patients with angina and normal ECGs.
The presence of fQRS on ECG and elevated CK-MB in this context also appear to reflect coronary arterial stenosis in 70% or more of cases. These findings clearly merit further study.

1. Fragmented QRS (fQRS), which is associated with myocardial scar or left ventricular dysfunction, is observed in patients with structural heart disease as well as in those with normal hearts.
2. fQRS has been reported in 10% to 16% of patients with normal sinus rhythm and coronary artery lesions confirmed by coronary angiography (CAG).
3. Compared with patients showing normal CAG, patients with coronary artery disease were more likely to be male (86.7% vs. 68%, p = 0.023) and to show notching of the QRS complex (fQRS) on electrocardiography (38.6% vs. 21.6%, p = 0.042).

This work was supported by the Dong-A University Research Fund.

1. Lee TH, Rouan GW, Weisberg MC, et al. Clinical characteristics and natural history of patients with acute myocardial infarction sent home from the emergency room. Am J Cardiol 1987;60:219–224.
2. Caceres L, Cooke D, Zalenski R, Rydman R, Lakier JB. Myocardial infarction with an initially normal electrocardiogram: angiographic findings. Clin Cardiol 1995;18:563–568.
3. Brush JE Jr, Brand DA, Acampora D, Chalmer B, Wackers FJ. Use of the initial electrocardiogram to predict in-hospital complications of acute myocardial infarction. N Engl J Med 1985;312:1137–1141.
4. McCullough PA, Ayad O, O'Neill WW, Goldstein JA. Costs and outcomes of patients admitted with chest pain and essentially normal electrocardiograms. Clin Cardiol 1998;21:22–26.
5. Pope JH, Aufderheide TP, Ruthazer R, et al. Missed diagnoses of acute cardiac ischemia in the emergency department. N Engl J Med 2000;342:1163–1170.
6. Silber SH, Leo PJ, Katapadi M. Serial electrocardiograms for chest pain patients with initial nondiagnostic electrocardiograms: implications for thrombolytic therapy. Acad Emerg Med 1996;3:147–152.
7. Welch RD, Zalenski RJ, Frederick PD, et al.
Prognostic value of a normal or nonspecific initial electrocardiogram in acute myocardial infarction. JAMA 2001;286:1977–1984.
8. Sanchis J, Bodi V, Nunez J, et al. New risk score for patients with acute chest pain, non-ST-segment deviation, and normal troponin concentrations: a comparison with the TIMI risk score. J Am Coll Cardiol 2005;46:443–449.
9. Dagnone E, Collier C, Pickett W, et al. Chest pain with nondiagnostic electrocardiogram in the emergency department: a randomized controlled trial of two cardiac marker regimens. CMAJ 2000;162:1561–1566.
10. Goodacre S, Locker T, Morris F, Campbell S. How useful are clinical features in the diagnosis of acute, undifferentiated chest pain? Acad Emerg Med 2002;9:203–208.
11. Peels CH, Visser CA, Kupper AJ, Visser FC, Roos JP. Usefulness of two-dimensional echocardiography for immediate detection of myocardial ischemia in the emergency room. Am J Cardiol 1990;65:687–691.
12. Swap CJ, Nagurney JT. Value and limitations of chest pain history in the evaluation of patients with suspected acute coronary syndromes. JAMA 2005;294:2623–2629.
13. Hollander JE, Blomkalns AL, Brogan GX, et al. Standardized reporting guidelines for studies evaluating risk stratification of emergency department patients with potential acute coronary syndromes. Ann Emerg Med 2004;44:589–598.
14. Amsterdam EA, Kirk JD, Diercks DB, Lewis WR, Turnipseed SD. Immediate exercise testing to evaluate low-risk patients presenting to the emergency department with chest pain. J Am Coll Cardiol 2002;40:251–256.
15. Slater DK, Hlatky MA, Mark DB, Harrell FE Jr, Pryor DB, Califf RM. Outcome in suspected acute myocardial infarction with normal or minimally abnormal admission electrocardiographic findings. Am J Cardiol 1987;60:766–770.
16. Rouan GW, Lee TH, Cook EF, Brand DA, Weisberg MC, Goldman L.
Clinical characteristics and outcome of acute myocardial infarction in patients with initially normal or nonspecific electrocardiograms (a report from the Multicenter Chest Pain Study). Am J Cardiol 1989;64:1087–1092.
17. Chua SK, Shyu KG, Cheng JJ, et al. Significance of left circumflex artery-related acute myocardial infarction without ST-T changes. Am J Emerg Med 2010;28:183–188.
18. Conti A, Poggioni C, Viviani G, et al. Short- and long-term cardiac events in patients with chest pain with or without known existing coronary disease presenting normal electrocardiogram. Am J Emerg Med 2012;30:1698–1705.
19. Jewitt DE, Balcon R, Raftery EB, Oram S. Incidence and management of supraventricular arrhythmias after acute myocardial infarction. Am Heart J 1969;77:290–293.
20. Sharkey SW, Berger CR, Brunette DD, Henry TD. Impact of the electrocardiogram on the delivery of thrombolytic therapy for acute myocardial infarction. Am J Cardiol 1994;73:550–553.
21. Wang TY, Zhang M, Fu Y, et al. Incidence, distribution, and prognostic impact of occluded culprit arteries among patients with non-ST-elevation acute coronary syndromes undergoing diagnostic angiography. Am Heart J 2009;157:716–723.
22. Ozdemir S, Tan YZ, Colkesen Y, Temiz A, Turker F, Akgoz S. Comparison of fragmented QRS and myocardial perfusion-gated SPECT findings. Nucl Med Commun 2013;34:1107–1115.
23. Guo R, Li Y, Xu Y, Tang K, Li W. Significance of fragmented QRS complexes for identifying culprit lesions in patients with non-ST-elevation myocardial infarction: a single-center, retrospective analysis of 183 cases. BMC Cardiovasc Disord 2012;12:44.
24. Take Y, Morita H. Fragmented QRS: what is the meaning? Indian Pacing Electrophysiol J 2012;12:213–225.
25. Das MK, Suradi H, Maskoun W, et al. Fragmented wide QRS on a 12-lead ECG: a sign of myocardial scar and poor prognosis. Circ Arrhythm Electrophysiol 2008;1:258–268.
26. Lorgis L, Jourda F, Hachet O, et al.
Prognostic value of fragmented QRS on a 12-lead ECG in patients with acute myocardial infarction. Heart Lung 2013;42:326–331.
27. Torigoe K, Tamura A, Kawano Y, Shinozaki K, Kotoku M, Kadota J. The number of leads with fragmented QRS is independently associated with cardiac death or hospitalization for heart failure in patients with prior myocardial infarction. J Cardiol 2012;59:36–41.
28. Aslani A, Tavoosi A, Emkanjoo Z. Diffuse fragmented QRS as an index of extensive myocardial scar. Indian Pacing Electrophysiol J 2010;10:67–68.
29. Boineau JP. Diagnosis of multiple infarcts from complex electrocardiograms during normal rhythm, left bundle-branch block, and ventricular pacing. J Electrocardiol 2011;44:605–610.
30. Wang DD, Buerkel DM, Corbett JR, Gurm HS. Fragmented QRS complex has poor sensitivity in detecting myocardial scar. Ann Noninvasive Electrocardiol 2010;15:308–314.
31. Ahn MS, Kim JB, Yoo BS, et al. Fragmented QRS complexes are not hallmarks of myocardial injury as detected by cardiac magnetic resonance imaging in patients with acute myocardial infarction. Int J Cardiol 2013;168:2008–2013.
32. MacAlpin RN. The fragmented QRS: does it really indicate a ventricular abnormality? J Cardiovasc Med (Hagerstown) 2010;11:801–809.
33. Lee TH, Goldman L. Evaluation of the patient with acute chest pain. N Engl J Med 2000;342:1187–1195.
34. Goldberger AL. Goldberger's Clinical Electrocardiography. 8th ed. Philadelphia: Elsevier; 2012.
35. Lekakis J, Katsoyanni K, Trichopoulos D, Tsitouris G. Q-wave versus non-Q-wave myocardial infarction: clinical characteristics and 6-month prognosis. Clin Cardiol 1984;7:283–288.
36. Juul-Moller S. Corrected QT-interval during one year follow-up after an acute myocardial infarction. Eur Heart J 1986;7:299–304.
37. Choi WS, Lee JH, Park SH, et al. Prognostic value of standard electrocardiographic parameters for predicting major adverse cardiac events after acute myocardial infarction. Ann Noninvasive Electrocardiol 2011;16:56–63.
38.
Das MK, Khan B, Jacob S, Kumar A, Mahenthiran J. Significance of a fragmented QRS complex versus a Q wave in patients with coronary artery disease. Circulation 2006;113:2495–2501.
39. Bybee KA, Prasad A. Stress-related cardiomyopathy syndromes. Circulation 2008;118:397–409.

[Table 1. Baseline clinical, laboratory, and electrocardiographic characteristics: CAD (n = 45) vs. normal CAG (n = 97). Values are presented as number (%) or median (interquartile range); p values by Fisher exact test and Mann-Whitney U test. CAD, coronary artery disease; CAG, coronary angiography; MI, myocardial infarction; CK-MB, creatine kinase-MB; BNP, brain natriuretic peptide; CRP, C-reactive protein; EF, ejection fraction; RWMA, regional wall motion abnormality.]

[Table 2. Univariable and multivariable analysis of predictors of CAD, reported as odds ratios with 95% confidence intervals (e.g., male sex: p = 0.023, 95% CI 1.169–7.971 in univariable analysis). CI, confidence interval; MI, myocardial infarction; CK-MB, creatine kinase-MB; RWMA, regional wall motion abnormality.]
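The univariable result for male sex (86.7% vs. 68%, p = 0.023, 95% CI 1.169–7.971) can be reproduced with a standard 2×2 odds-ratio calculation. A minimal Python sketch, assuming the integer counts implied by the reported percentages (39 of 45 males in the CAD group, 66 of 97 in the normal-CAG group):

```python
# Odds ratio for male sex (CAD vs. normal CAG), assuming the counts implied
# by the reported percentages: 86.7% of 45 = 39 males in the CAD group,
# 68% of 97 = 66 males in the normal-CAG group.
from math import exp, log, sqrt

cad_male, cad_total = 39, 45
ctl_male, ctl_total = 66, 97

a, b = cad_male, cad_total - cad_male  # CAD group: male / not male
c, d = ctl_male, ctl_total - ctl_male  # normal-CAG group: male / not male

odds_ratio = (a / b) / (c / d)

# Wald 95% confidence interval computed on the log-odds scale
se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_lo = exp(log(odds_ratio) - 1.96 * se)
ci_hi = exp(log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_lo:.2f}-{ci_hi:.2f}")
# → OR = 3.05, 95% CI 1.17-7.97
```

The Wald interval recovers the reported 95% CI of 1.169–7.971, which supports the assumed counts; the paper's multivariable estimates cannot be reproduced this way, since they adjust for the other covariates jointly.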
Public Release: 12-Dec-2013

NUS researchers develop novel bio-inspired method to grow high-quality graphene for high-end electronic devices

Drawing inspiration from how beetles and tree frogs keep their feet attached to submerged leaves, the study breaks a current technology bottleneck and enables wide-ranging applications for graphene

IMAGE: Researchers at NUS' Graphene Research Centre working on wafer-scale graphene. Credit: NUS Faculty of Science

Singapore, 12 December 2013 - A team of researchers from the National University of Singapore (NUS), led by Professor Loh Kian Ping, who heads the Department of Chemistry at the NUS Faculty of Science, has successfully developed an innovative one-step method to grow and transfer high-quality graphene on silicon and other stiff substrates, opening up opportunities for graphene to be used in high-value applications that are currently not technologically feasible. This breakthrough, inspired by how beetles and tree frogs keep their feet attached to submerged leaves, is the first published technique that accomplishes both the growth and transfer steps of graphene on a silicon wafer. The technique enables the technological application of graphene in photonics and electronics, for devices such as optoelectronic modulators, transistors, on-chip biosensors and tunneling barriers. The innovation was first published online in the prestigious scientific journal Nature on 11 December 2013.

Demand for graphene in silicon-based industries

Graphene has attracted a lot of attention in recent years because of its outstanding electronic, optical and mechanical properties, as well as its use as transparent conductive films for touch screen panels and electrodes. However, the production of high-quality wafer-scale graphene films is beset by many challenges, among which is the absence of a technique to grow and transfer graphene with minimal defects for use in semiconductor industries.
Said Prof Loh, who is also a Principal Investigator with the Graphene Research Centre at NUS Faculty of Science, "Although there are many potential applications for flexible graphene, it must be remembered that to date, most semiconductors operate on 'stiff' substrates such as silicon and quartz."

"The direct growth of graphene film on silicon wafer is useful for enabling multiple optoelectronic applications, but current research efforts remain grounded at the proof-of-concept stage. A transfer method serving this market segment is definitely needed, and has been neglected in the hype for flexible devices," Prof Loh added.

Drawing inspiration from beetles and tree frogs

To address the current technological gap, the NUS team led by Prof Loh drew their cues from how beetles and tree frogs keep their feet attached to fully submerged leaves, and developed a new process called "face-to-face transfer". Dr Gao Libo, the first author of the paper and a researcher with the Graphene Research Centre at NUS Faculty of Science, grew graphene on a copper catalyst layer coating a silicon substrate. After growth, the copper is etched away while the graphene is held in place by bubbles that form capillary bridges, similar to those seen around the feet of beetles and tree frogs attached to submerged leaves. The capillary bridges help to keep the graphene on the silicon surface and prevent its delamination during the etching of the copper catalyst. The graphene then attaches to the silicon layer. To facilitate the formation of capillary bridges, a pre-treatment step involving the injection of gases into the wafer was applied by Dr Gao. This helps to modify the properties of the interface and facilitates the formation of capillary bridges during the infiltration of a catalyst-removal liquid. The co-addition of surfactant helps to iron out any folds and creases that may be created during the transfer process.
Industrial applications and new insights

This novel technique of growing graphene directly on silicon wafers and other stiff substrates will be very useful for the development of rapidly emerging graphene-on-silicon platforms, which have shown a promising range of applications. The "face-to-face transfer" method developed by the NUS team is also amenable to batch-processed semiconductor production lines, such as the fabrication of large-scale integrated circuits on silicon wafers. To further their research, Prof Loh and his team will optimise the process in order to achieve high-throughput production of large-diameter graphene on silicon, as well as target specific graphene-enabled applications on silicon. The team is also applying the techniques to other two-dimensional films. Talks are now underway with potential industry partners.

###

Carolyn Fong
Manager, Media Relations
Office of Corporate Relations
DID: (65) 6516 5399
Email: carolyn@nus.edu.sg

About National University of Singapore (NUS)

A leading global university centred in Asia, the National University of Singapore (NUS) is Singapore's flagship university, which offers a global approach to education and research, with a focus on Asian perspectives and expertise. NUS has 16 faculties and schools across three campuses. Its transformative education includes a broad-based curriculum underscored by multi-disciplinary courses and cross-faculty enrichment. Over 37,000 students from 100 countries enrich the community with their diverse social and cultural perspectives. NUS has three Research Centres of Excellence (RCE) and 23 university-level research institutes and centres. It is also a partner in Singapore's fifth RCE. NUS shares a close affiliation with 16 national-level research institutes and centres. Research activities are strategic and robust, and NUS is well-known for its research strengths in engineering, life sciences and biomedicine, social sciences and natural sciences.
It also strives to create a supportive and innovative environment to promote creative enterprise within its community. For more information, please visit http://www.nus.edu.sg.
What is Wrongful Death Law?

In wrongful death law, individuals are able to sue for monetary damages.

Written By: Alexis W.
Edited By: Heather Bailey

Wrongful death law is a branch of tort law that allows a victim's estate to recover when someone is killed. Tort law governs both intentional and negligent behavior that causes damage. It is a form of civil law, which means that individuals bring these suits against each other. It also means penalties are monetary, rather than criminal penalties such as jail time. Under the law in the United States and many other countries, every person owes a legal duty to every other person not to cause harm. If someone breaches that legal duty, either by behaving very carelessly or by doing something intentional to harm another, then the person who is harmed by the negligence or the intentional bad action has a legal right to be made whole. Tort law dictates what the person has to prove to show that the duty was breached, and what types of damages the injured victim is entitled to recover. When someone dies as a result of a negligent or intentional wrong, that person obviously cannot sue the individual responsible for his injury. The law, however, dictates that someone should still recover and that the person who caused the injury should still have to pay. Wrongful death law is thus in place to allow the estate of the deceased person to sue. To bring a wrongful death suit under wrongful death law, the plaintiff must have standing. This means he must be in some way related to the deceased victim and he must have an appropriate connection to the victim so as to represent the estate. Husbands, wives, children, or even parents are allowed to bring a wrongful death suit under wrongful death law. Neighbors and casual acquaintances, for example, are generally not permitted to bring a wrongful death lawsuit.
The plaintiff bringing the wrongful death suit must generally also prove the other elements of the case that the victim would have had to prove, had he been alive. For example, if a person was suing based on negligence, he would have to prove that the defendant actually was negligent, that the negligence was the proximate or direct cause of injury, and that injury and damages actually occurred. Wrongful death law thus dictates that the plaintiff suing on behalf of the deceased victim would have to prove these same elements in order to recover for the death. The law also dictates that the appropriate settlement for wrongful death is based on the person's life expectancy, the amount of money he made, and other related factors such as the closeness of the deceased person with the plaintiff.
Scientists target newly-discovered enzyme in fight against malaria

A mosquito is bloated with blood as it inserts its stinger into human flesh in this undated file photo obtained from the US Department of Agriculture. [AFP]

Scientists on Wednesday said they had identified a new target in the parasite that causes malaria, a disease that causes more than half a million deaths annually. Potential drugs can aim at a newly-discovered enzyme that the parasite uses to metabolise energy at every stage of its infection in humans, they said. The finding, published in the journal Nature, is important because only a tiny handful of weaknesses have been found that apply to every stage of the complex process by which the Plasmodium parasite grows and multiplies in the body. Most drugs aim at specific stages in the parasite's cycle, not all. They notably fail to wipe out early forms of the parasite called hypnozoites that remain dormant in the liver and then revive, triggering a malarial relapse. The new target, called phosphatidylinositol-4-kinase, or PI4K, is an enzyme that the parasite needs to survive in host cells. "Most drugs selectively work on certain stages of the (parasite's) life cycle, but not all stages," said Case McNamara, a genomics specialist at the Novartis Research Foundation in San Diego, California. "Inhibitors of this drug target have the potential to not only cure individuals of a malaria infection, but also to prevent infections and even block transmission of the parasite back to the mosquito." The only drug currently licensed to wipe out hypnozoites is primaquine. Licensed more than half a century ago, the formula is considered a last-throw-of-the-dice option, as it can cause potentially life-threatening anaemia in people with an inherited genetic mutation. According to the UN's World Health Organisation (WHO), 219 million people became infected with malaria in 2010, of whom 660,000 died, most of them African children under the age of five.
[Image via Agence France-Presse]
Costa Rican lab to test plasma space rocket

By John McPhaul

LIBERIA, Costa Rica (Reuters) - Better known for coffee, surfing and jungles, tiny tropical Costa Rica is now home to scientists working on a plasma rocket engine they hope will slash travel times to the moon and beyond. Led by Costa Rican-born former NASA space shuttle astronaut Franklin Chang-Diaz, the Houston-based Ad Astra Rocket Company inaugurated a site last weekend in the Central American nation to test rocket components. The company hopes to sell the finished rocket engine, propelled by super-hot plasma, to NASA for moon trips planned for the next decade and an eventual lunar space station. Scientists believe rockets that run on plasma, the stuff that makes stars shine, will be faster than rockets currently used in space travel. Considered the fourth state of matter because it is neither a solid, a liquid, nor a gas, plasma is a high-energy form of matter that can reach millions of degrees, making it a potentially powerful fuel. Closer to home, plasma is found in lightning bolts and neon signs. Chang-Diaz said he located the laboratory in the Costa Rican town of Liberia with the hope it will plant the seeds of space-age industry in a developing country that depends on tourism for much of its income. "Eventually as our people learn from experience they could design components, and that would become intellectual property of Costa Rica," Chang-Diaz said in a recent interview. The extreme power of his proposed rocket, which uses Variable Specific-Impulse Magnetoplasma Rocket, or VASIMR, technology, conceived in the 1970s, could eventually cut travel time to Mars by about a third, he said. Chang-Diaz, a physicist, helped develop VASIMR during several space shuttle missions after he joined NASA. A prototype of the rocket, to be built in Ad Astra's Houston laboratory, should be completed by the end of 2007 with a price tag of $10 million.
Ad Astra hopes to unveil two operational rockets by the end of 2010 and 2011 at a cost of $150 million.
Cocaine users have 45 percent increased risk of glaucoma

A study of the 5.3 million men and women seen in Department of Veterans Affairs outpatient clinics in a one-year period found that use of cocaine is predictive of open-angle glaucoma, the most common type of glaucoma. The study revealed that after adjustments for race and age, current and former cocaine users had a 45 percent increased risk of glaucoma. Men with open-angle glaucoma also had significant exposures to amphetamines and marijuana, although less than cocaine. Patients with open-angle glaucoma and a history of exposure to illegal drugs were nearly 20 years younger than glaucoma patients without a drug exposure history (54 years old versus 73 years old). Study results appear in the September issue of the Journal of Glaucoma. "The association of illegal drug use with open-angle glaucoma requires further study, but if the relationship is confirmed, this understanding could lead to new strategies to prevent vision loss," said study first author Regenstrief Institute investigator Dustin French, Ph.D., a research scientist with the Center of Excellence on Implementing Evidence-Based Practice, Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service in Indianapolis. A health economist who studies health outcomes, he is also an assistant professor of medicine at the Indiana University School of Medicine. Glaucoma is the second most common cause of blindness in the United States. Although the mechanism of vision loss in glaucoma is not fully understood, most research has focused on an increase in eye pressure gradually injuring the optic nerve.
Most individuals who develop open-angle glaucoma have no symptoms until late in the disease process, when substantial peripheral vision has been lost. Dr. French and colleagues found that among the 5.3 million veterans (91 percent of whom were male) who used VA outpatient clinics in fiscal year 2009, nearly 83,000 (about 1.5 percent) had glaucoma. During the same fiscal year, nearly 178,000 (about 3.3 percent) of all those seen in the outpatient clinics had a diagnosis of cocaine abuse or dependency. Although this study determined significantly increased risk for glaucoma in those with a history of drug use, it does not prove a causal relationship. It is unlikely that glaucoma preceded the use of illegal drugs, since substance use typically begins in the teens or twenties. "The Veterans Health Administration substance use disorder treatment program is the largest and most comprehensive program of its kind in the country," said Dr. French. He believes that the reliability of the data used in the glaucoma study reflects the overall scope and high quality of the VHA substance use program. The long-term effects of cocaine use on intraocular pressure, the only modifiable risk factor for glaucoma, require further study. Should the association of cocaine use and glaucoma be confirmed in other studies, substance abuse would present another modifiable risk factor for this blinding disease. This study, "Substance Use Disorder and the Risk of Open-Angle Glaucoma," was funded by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service. In addition to Dr. French, co-authors are Curtis E. Margo, M.D., of the University of South Florida College of Medicine and Lynn E. Harman, M.D., of the James Haley VA Hospital in Tampa. The above post is reprinted from materials provided by Indiana University School of Medicine. Note: Content may be edited for style and length. Dustin D. French, Curtis E. Margo, Lynn E. Harman.
Substance Use Disorder and the Risk of Open-angle Glaucoma. Journal of Glaucoma, 2011; DOI: 10.1097/IJG.0b013e3181f7b134

Indiana University School of Medicine. "Cocaine users have 45 percent increased risk of glaucoma." ScienceDaily, 30 September 2011. <www.sciencedaily.com/releases/2011/09/110929122934.htm>.
miRagen Therapeutics Signs Research and Licensing Agreements for microRNA Profiling of Human Heart Failure Study

(Nanowerk News) miRagen Therapeutics, Inc., a biopharmaceutical company focused on improving patients' lives by developing innovative microRNA (miRNA)-based therapeutics for cardiovascular and muscle disease, and the University of Colorado (CU) announced today that they have entered into sponsored research and licensing agreements to collaborate on miRNA therapeutics discovery and development. The sponsored research agreement will support the analysis of miRNA and gene expression changes from a study conducted at the University of Colorado Cardiovascular Institute at the UC Denver School of Medicine, "Beta Blocker Effects on Remodeling and Gene Expression (BORG)," while the licensing agreement will enable the company to commercialize intellectual property associated with discoveries made during the research project. Further analysis of the completed study, funded by miRagen, will provide the Company with data on miRNA changes in human heart failure patients followed over two years with associated disease outcomes. Financial details of the agreements were not disclosed. The BORG study was led by Michael R. Bristow, M.D., Ph.D., Professor of Medicine and Co-Director of the Cardiovascular Institute at CU and a co-founder of miRagen, and Brian Lowes, M.D., Associate Professor of Medicine. CU investigators in laboratories led by David Port, Ph.D., and Carmen Sucharov, Ph.D., will also be contributing to the study. The study, which is the next-generation version of a landmark serial myocardial gene expression study published in the New England Journal of Medicine in 2002 (Lowes et al., NEJM 2002;346:1357-1365), was conducted in 63 chronic heart failure/non-ischemic dilated cardiomyopathy patients followed for an 18-month period, with measurements of chamber remodeling and messenger RNA as well as miRNA expression at baseline, three months, and 12 months.
"We are extremely pleased to work closely with Dr. Bristow and the University of Colorado and to gain access to these unique data in human patients with heart failure," said William S. Marshall, Ph.D., President and CEO of miRagen Therapeutics, Inc. "This provides us with the ability to analyze miRNA levels, as well as gene expression changes, in a given patient at specific points in time in their disease progression. We believe this will provide a very powerful tool in stratifying our miRNA targets and support our mission of developing groundbreaking miRNA-based therapeutics to treat patients with cardiovascular and muscle disease." "The BORG study performed at the University of Colorado Health Sciences Center contains novel information on miRNAs and their relationships to myocardial remodeling and messenger RNA (mRNA) behavior, which will be very useful to miRagen in target selection for their therapeutic miRNA approaches," said Dr. Bristow. "In drug development, animal models are of course very valuable, but for target validation as well as novel target discovery, human data are vitally important." "The University is very pleased with closing this deal," said David Poticha of the CU Technology Transfer Office. "The team that has been assembled by miRagen has a strong history of successfully developing Colorado-based biotechnology companies, and we firmly believe miRagen is the right and best partner to help commercialize the microRNA technologies developed by Drs. Port, Sucharov and Bristow." About microRNAs MicroRNAs have emerged as an important class of small RNAs encoded in the genome. They act to control the expression of sets of genes and entire pathways and are thus thought of as master regulators of gene expression. Recent studies have demonstrated that microRNAs are associated with many disease processes. 
Because they are single molecular entities that dictate the expression of fundamental regulatory pathways, microRNAs represent potential drug targets for controlling many biologic and disease processes. About miRagen Therapeutics miRagen Therapeutics, Inc., was founded in 2007 to develop innovative microRNA-based therapeutics for cardiovascular and muscle disease. Only recently discovered, microRNAs are short, single-stranded RNA molecules encoded in the genome that regulate gene expression and play a vital role in influencing cardiovascular and muscle disease. Cardiovascular disease is the leading cause of death globally and represents an enormous burden on global healthcare systems. Principally funded through venture capital investments, miRagen combines world recognized leadership in cardiovascular medicine with unprecedented in-house expertise in microRNA biology and chemistry. For more information, please visit www.miragentherapeutics.com. Source: miRagen Therapeutics (press release)
The Struggle for Democracy: Activists Take the Offense By Virginia Rasmussen Remarks at the Empowering Democracy Conference, New York City, April 13, 2002 by Virginia Rasmussen, Women's International League for Peace and Freedom (WILPF), Program on Corporations, Law and Democracy (POCLAD) Empowering democracy. This phrase reaches the heart of every social justice activist's work. What does it mean to give power to democracy? It relates to making real the people's legal authority to govern. Whatever the focus of our particular struggle, success hinges fundamentally on our having the power to bring the change we envision. Every issue is anchored in the struggle for that legal authority. In his book, The First American Revolution: Before Lexington and Concord, Ray Raphael tells us about a democratic moment in Massachusetts history. In 1774, six months before the shot heard 'round the world,' crowds of men numbering in the thousands deposed every Crown-appointed official in rural Massachusetts. This was in response to Parliament's Massachusetts Government Act, which virtually withdrew the considerable self-governance granted to the colonists by the 1691 Massachusetts Charter. In Worcester, 4,622 militiamen lined Main Street and instructed the British-appointed officials to walk the gauntlet, hats in hand, as they recited their resignations 30 times so all could hear. In every county outside Boston, the British lost control and never regained it. Raphael claims that, 'Through it all, the revolutionaries engaged in a participatory democracy which far outreached the intentions of the so-called "Founding Fathers." 1 What is it about this glimpse of times past that's important for us today? Those colonists possessed some critical characteristics that we, despite all our material and technological pizzazz, now have in small measure. 
They assumed themselves capable of self-governing; they displayed the attitudes and behaviors of people who took for themselves the authority to be in charge. This story reveals the essence of democratic culture and helps us grasp what the work of activists struggling to empower democracy must be about: building a culture of communities with the assumptions, attitudes, and authority of sovereign citizens. This is a challenging task. In The Populist Moment, Lawrence Goodwyn describes us as 'not only culturally confused, our confusion makes it difficult for us even to imagine our confusion.' 2 But more and more people are cutting through the fog; our confusion is lifting. The right to assume that our basic nature just might be decent, cooperative, and compatible with self-governing has been stolen by the few who rule over us. And we're figuring it out. Our right to learn and live by the attitudes and behaviors of self-governance has been denied to us by the few who are in charge. And we're figuring it out. Our authority to be a nation of self-governing people was given away to the corporation, a 'legal fiction' created to serve us. We intended the corporation to concern itself with business and commerce, but it now dominates our politics and government. It was redesigned and legally empowered over the last 150 years to scoop up wealth and power. It has amassed so much legal authority in the USA that a propertied few, shielded by corporate 'rights,' now govern the many. And having seized most power and wealth in this country, those few now write international agreements they would have us believe are about 'trade,' but which, in fact, foist corporate governing rights on every nation of the world. What's an activist to do? WE'RE MAD AS HELL AND WE'RE NOT TAKING IT ANY MORE! 
What was done in the name of the Enron Corporation has made people furious — not only because it engaged in criminal activity like financial fraud and insider trading, but because most of what the Enron Corporation did was perfectly legal. Even worse, the laws condoning those actions were essentially written by Enron operatives and their cohorts: laws that allow them to pick candidates and bankroll them into office; make energy policy and define energy debate; hide debt in ghost entities called partnerships; buy and sell fictional 'derivatives'; put profits in tax-free, off-shore banks, eliminating Enron Corporation's tax burden in four of the last five years... all quite legal. It's legal for corporations to fund think tanks that tell us how to think and what to believe; to endow university chairs, write textbooks, control research. In a nation of self-governing people, these are our debates to define and decisions to make, and more and more activists are figuring it out. We're fed up with behaving like subordinates content to influence the decisions of corporate boards and the corporate class. Having influence is valuable, but influencing is not deciding. We're weary of waging long, hard battles simply for the 'right to know.' Knowing is critical, but knowing is not deciding. We're tired of exercising our right to dissent as the be-all and end-all. Dissent is vital, but dissenting is not deciding. Influencing, knowing, dissenting, participating — all are important to a democratic life, but not one of them carries with it the authority to decide, the power to be in charge. LAUNCHING THE OFFENSIVE More and more people are taking this power, shifting goals and strategies in order to defy corporate authority over our lives, work, communities, values, law and politics, culture and future. These initiatives are directed toward public officials, attorneys general, elected boards, and legislatures. 
We're not taking the subordinate role of asking the Enron Corporation to behave a little better. We're not content with putting a corporate-designed and -controlled regulatory agency on Enron's trail. Regulatory law protects corporations from pesky people. It enables and protects the corporate agenda as it was intended to do. We're catching on that the language and strategy, actions and arenas that frame our work determine its outcome. If we seek democratic outcomes, we must frame activism in the people's sovereign authority to rule. Coalitions of citizens and activist organizations around the country are conducting community-based study groups, learning how corporations acquired legal powers way beyond those possessed by human beings. We are getting clear that corporate lawyers relied on judges to turn into law whatever business practices gave corporate actors power over people and natural resources. They interpreted state-granted corporate charters to be contracts over which states were no longer sovereign; they made gifts of private property to corporate claimants that transformed We the People into trespassers. They saw to it that a corporation's future profits and the decision-making in its name are constitutionally protected from us -- beyond the people's authority. We are learning that the commerce clause, prohibiting states from interfering with interstate commerce, was the first incarnation of a free-trade agreement. Corporate insiders and their judge advocates used it to declare that laws protecting workers, communities, children, and the environment are unconstitutional impediments to free-flowing commerce. We are finding an early model for powerful international trade tribunals in the unelected, unaccountable Supreme Court. Where is the people's authority in this picture? Why do corporate entities have rights at all? Rights are for people. Corporations should have privileges only, to do what we ask of them. 
This was once obvious to people, until corporations were declared 'persons' under the law by the Supreme Court in 1886. The court extended 14th Amendment protections of due process of law to the corporate form, protections intended for recently freed slaves. From the day of that decision, corporate lawyers have not stopped seeking and winning protection after protection for corporations while African Americans have struggled to realize the promise of the 14th Amendment in their lives. Endowed with legal personhood status, the corporate form then acquired the protections of the Bill of Rights. First Amendment free speech rights for 'corporate persons' leave real people in the electoral dust; Fourth Amendment protections from search and seizure for 'corporate persons' trump workplace safety and health law. Now corporate lawyers say that the Fifth Amendment protects corporations from any government 'taking' without 'just compensation.' They are making the case that any environmental regulation encroaches on corporate property 'rights.' Some federal judges are agreeing, awarding compensation based on alleged lost future profits. The final curtain on environmental regulation may well be coming down. Indeed, corporate rights of private property give them power over the people, and their personhood rights bring them protection from the people. Unless we challenge corporatized law and culture, activists will be waging defensive battles against harm after endless harm forevermore. Where do we take action to oppose corporate rule? To our communities for conversation and learning, to the culture for reflection and rethinking, to town boards, public officials, and state legislators. This is where we have legal standing. In these arenas we have the opportunity to empower democracy, to write true democratic law. Such law can only arise from the will of the people and the vision of a democratic culture. 
It will never arise in the arenas of oppression: corporate boardrooms, courts of law, or regulatory bodies. The people in ten townships of south central Pennsylvania passed ordinances to protect family farms that are locally owned and managed. They wanted to prevent corporate hog farms from invading their communities. They could see that fighting over parts per million of hog pollution in their creeks, or square feet of stinking hog waste in lagoons, meant waging a fruitless battle on the corporation's terms. Like the 18th-century Massachusetts democrats before them, they sought to define their own lives and work, economies and communities. In response to this assertion of people's authority, lawyers for the farm bureau and agribusiness corporations filed a lawsuit declaring that Belfast Township has no constitutional authority to pass such an ordinance. They state that the Constitution's equal protection and due process clauses, its takings clause, its commerce clause, its contracts clause, its privacy protections, its 14th Amendment protections are all stacked against the people and for the corporations. This action strengthened the people's and township supervisors' resolve, convinced as they are that the Constitution should be in service to people and not to property organized in the corporate form. At a recent meeting of Pennsylvania municipalities, 350 township governments voted to oppose the stripping away of local governmental control over corporate farming and sewage sludge management. This is forceful evidence of a growing determination to drive self-governance into the Constitution, which is what our activist labors must be about. This is not anti-corporate work. This is the work of healing our body politic, of coming to the defense of our common good. It's the work of empowering democracy. We are among generations of people who've struggled for the right to be self-governing.
There were always those who understood, who pulled themselves together, took the offense, organized resistance, demanded democratic alternatives, established some of their own. And while their efforts were often ridiculed, crushed, or coopted, they offered lessons to inform this generation's work. Knowing their stories is essential if we are to create our own. Like our activist forebears, we are pulling ourselves together and pushing into the Constitution and the rule of law that was asserted by those in Massachusetts who tossed out British rule in 1774, and by our Declaration of Independence and the American Revolution: the right of the people to govern. It's a radical task, a large and long one. Whom do we summon to this assignment? Poet and author Annie Dillard has this to say: There is no one but us. There is no one to send, nor a clean hand nor a pure heart on the face of the earth, nor in the earth, but only us, a generation comforting ourselves with the notion that we have come at an awkward time, that our innocent fathers are all dead -- as if innocence had ever been -- and our children unfit, not yet ready, having each of us chosen wrongly, made a false start, failed, yielded to impulse and the tangled comfort of pleasures and grown exhausted, unable to seek the thread, weak, and involved. But there is no one but us. There never has been. 3

1. Raphael, Ray, The First American Revolution: Before Lexington and Concord, The New Press, New York, 2002.
2. Goodwyn, Lawrence, The Populist Moment: A Short History of the Agrarian Revolt in America, Oxford University Press, New York, 1978, p. ix.
3. Dillard, Annie, Holy the Firm, Harper and Row, New York, 1977, p. 56.

A picture is not only worth a thousand words... ...It communicates to different parts of the brain -- especially those parts that may be unable to follow or unwilling to delve into complicated constitutional and legal issues.
Effective graphic images support and reinforce messages so that they can be retained more easily and recalled far longer than text alone. Peter Kellman used the images below in the book Building Unions to make the text more accessible and entertaining. These images were created by freelance cartoonist Matt Wuerker and are available free of charge to activist groups and organizers working on these issues. Feel free to use them in your newsletters, pamphlets, or on the sides of blimps -- any way you can think of to promote your work. You can download whatever might be useful from www.poclad.org. Print-quality resolution versions are also available. Publishers who normally pay reprint fees are encouraged to contact Matt Wuerker directly at mcwuerker@yahoo.com. The goals of the 1830s labor movement -- the ten-hour day and public education -- focused on democracy. Labor people argued that if they were to build a democracy, they had to be educated, and to be educated they needed time to go to school. So they fought for the ten-hour day and free public education, not as benefits in and of themselves, but as conditions necessary to bring about a republican form of government. Because we understand that there is more to democracy than just voting every few years, we need to have the time to participate in the functioning of government. So taking our cue from the 1830s, we should advocate for a 32-hour work week with one day set aside for the 'common people' to study and participate in the functioning of a democratic government. On that day every week, we would participate in public meetings and instruct our elected representatives. This illustration suggests how corporate lobbyists and bought-off public officials might quake and shake if millions of working-class people actually had the time to participate in the legislative process. 
A boycott is a political action in which people ask others not to do something, such as patronize a store that sells a product made by scab workers. A boycott is an exercise of the First Amendment right of free speech. Although this right is not fully extended to workers, the law goes to great lengths to protect so-called 'commercial free speech,' a perversion of the principle of free speech. As Alexander Meiklejohn explains, the First Amendment 'does not intend to guarantee men to say what some private interest pays them to say for its own advantage. It intends only to make men free to say what, as citizens, they think.' This illustration demonstrates the absurdity of the imbalanced way the First Amendment is protected in this country. The illustration above shows how the Constitution was written by rich white men in a closed meeting. This elite group wrote a Constitution to protect the propertied class they represented. As the two illustrations below indicate, for more than two centuries the Constitution has continued to protect minority rule while the power-holders in our society maintain the illusion that we live in a democracy. Two ways in which this happens are free speech rights for 'corporate persons' and judicial bias toward the propertied class. There is a great disparity between our image of ourselves living in a democracy and the reality. The 'Abandon All Rights' illustration shatters that image by pointing out that at work, where the majority of us spends most of our waking hours, democracy is not spoken and the Bill of Rights does not apply to workers. This reality prompts us to ask questions: if we are prevented from practicing democracy at work, is it possible to practice democracy in our politics? If the corporate boss in our society has free speech but workers don't, can any of us say we live in a democratic society? 
Our Corporate Elite and the Constitution Richard Grossman and Ward Morehouse An excerpt from the foreword to The Elite Consensus: When Corporations Wield the Constitution 'Over the past 200 years, all over the world but especially in the United States, legal systems have been changed to accomplish two things: limit the legal liabilities of corporations, and give corporations the rights and protections of citizens' by extending 'constitutional rights to corporations.' So writes George Draffan in The Elite Consensus, a concise volume about techniques employed by the few to govern the many. What has this meant for people seeking justice and peace? Time and again we have come together to assert in the face of insane corporate plans: Not In Our Names. Not Here. Not There. Not Anywhere. Millions have devoted their lives to organizing against one corporate assault after another. This civic work has been vital — to save life and land, to lift the human spirit, to teach children. But more and more people are seeing that resistance to corporate assaults — while necessary — will not end corporate rule. So like many who lived under monarchy rule in this continent's English colonies, people today are evolving from asking our rulers to be a little less bad to organizing for independence and self-governance. George advances this exciting evolution as he dissects the elite consensus — 'larger than any industry' — pitching its manufactured histories, destructive values, false choices, American Empire; selling its mantra of endlessly increasing production as the source of liberty and security. He unveils this consensus forged in every generation by the corporate class: it is 'to build and maintain power itself.' To thwart democracy. To govern the Earth. The Revolutionary Era's propertied and slave-owning gentlemen denied rights to the people living all around them who created their comfort and wealth, who did their work. 
They wrote law to keep the histories, experiences, needs, and aspirations of the denied from being represented in the halls of government. And they labeled their own stolen powers as 'constitutional rights.' After the Civil War, men of property used the corporation to consolidate their grip over the nation's investment, labor, resources, and role in the world. Using the wealth that their Constitution had helped them amass, they redesigned the corporation to serve as their political - their governing - institution. And as they had previously wrapped themselves, they wrapped the corporation in the nation's sacred text. Today, corporate lobby and propaganda associations, think tanks, charities, foundations, and other juridical clones masquerade as We the People. They sport goodness and mercy monikers like 'Patriotic Citizens for Secure Jobs and All-American Energy' and 'Good Neighbors for Fair Chemicals.' In public offices, on talk shows, in op-ed pages, in séances with elected officials, at tribunals of global multilateral agencies, and in advertisements everywhere, their spokespeople are perpetually saying what they are paid to say. In this book George examines the full range of such institutions wielding the Constitution — from the World Bank, the International Monetary Fund, and the World Trade Organization to the Council on Foreign Relations, the Cato Institute, NATO, and the United Nations. He includes the public relations and advertising corporations into which elites pour hundreds of billions of tax-deductible dollars and identifies corporate propagandists posing as journalists. George's first chapter, 'Cultural Power: The Colonization of our Minds,' examines how mass media, PR, and other corporations shape people's understanding of the way things are supposed to be. George also describes how well-endowed corporate foundations, think tanks, and lobby groups do their daily work.
Chapter two probes the judicial system, along with corporate use of the rule of law as a means of leveraging authority. Chapter three focuses on the reality that many industries and services are oligopolies dominated by a few corporate conglomerates wealthier than most nations. Next, George looks at the iron fist inside the PR-camouflaged corporate glove. He helps us remember that when people challenged governance by the corporate few, public officials have responded with violence. Abolitionists, suffragists, Knights of Labor, Populists, Socialists, and Wobblies of the past; war resisters, civil and human rights advocates, labor and environmental activists of past and present — all have experienced the nation's police, courts, national guards, militias, and jails compelling obedience to the elite's corporations. The second part of The Elite Consensus profiles leading terrorist corporations, such as the Chamber of Commerce, the Trilateral Commission, and the Council on Foreign Relations. George provides useful information about the origins, budgets, directors, and work of each. We learn, for instance, that Defense Secretary Donald Rumsfeld was a director of the Hoover Institution (which had placed many of its members in the Reagan administration). So was David Packard of military and electronics giant Hewlett-Packard Corporation. We see that in the mid-1990s, National Public Radio correspondent Anne Garrels spent two years in Russia as a fellow with the Council on Foreign Relations. As icing on the cake, George ranks corporate expenditures for writing laws, links top lobbyists with their corporate clients, follows corporate money as it violates the body politic, and summarizes studies examining creative corporate extractions of public funds. George Draffan's profiles bring our country's elite consensus to life! 
The Elite Consensus reveals how a propertied class that long ago figured out how to write — and keep on writing — the Constitution thwarts democratic impulses and public actions over and over again. Because George's analysis complements the publications of the Program on Corporations, Law and Democracy, POCLAD is pleased to join The Apex Press in creating this new edition of George's book. Whether you are contesting corporate-manufactured news of the day; charging a politician, judge, or corporate executive with usurpation; wringing single-issue struggles from regulatory agencies and driving them into the Constitution; or otherwise asserting the people's sovereign authority to govern: we urge you to keep The Elite Consensus by your side. The Elite Consensus: When Corporations Wield the Constitution, by George Draffan.* Published for POCLAD by The Apex Press, 2002. Prices: $14.95 for one copy (plus $4.00 S&H); $12.95 each for two to five copies (plus $4 S&H for the first book and $1 S&H for each additional book). Make checks payable to POCLAD and mail to: P.O. Box 246, South Yarmouth, MA 02664-0246. For greater quantities or for credit card orders call 800.316.2739. *This is an updated and retitled version of The Corporate Consensus: A Guide to the Institutions of Global Power, published by the Blue Mountains Biodiversity Project, 2000.
SACFS provides Head Start eligible three- to five-year-old children with a comprehensive early childhood development program focusing on education, health, nutrition, safety, and mental health services. SACFS also offers social services to the families of the children it serves and promotes parent involvement in the program. SACFS operates two centers: the Bronx Site, located on Montgomery Avenue in the South Bronx, and the Lenox Site, located in Central Harlem on Lenox Avenue. SACFS Head Start funded enrollment is for 128 three- to five-year-olds. The Lenox Site collaborates with the Universal Pre-Kindergarten program, which enables this site to offer an extended-day program. SACFS operates year-round except for an annual closing for three weeks in August and one week for the winter holiday (December). SACFS is staffed by a highly qualified and dedicated team of 25 trained professionals, which includes former Head Start parents. Child health and development services: Each child enters the program with a current physical examination, including immunization and TB test. Social Service Staff work with the family to identify and access health providers and other services as needed. Vision, speech, and hearing screenings and dental examinations are provided on-site. Education and early childhood development: SACFS Head Start uses the Creative Curriculum, which follows the philosophy that children learn best by doing. The Curriculum enables children’s awareness of the basic processes and incorporates opportunities in the classroom which enrich and extend their developmental skills. The Creative Curriculum readily aligns itself with the Head Start Outcomes framework. Utilizing themes and interest areas, children develop language and literacy, logical reasoning, and an understanding of spatial relations. Children’s imagination and physical growth are enhanced through movement, music, and dramatic play.
They are encouraged to develop their own sense of self through creative expression, initiative, and social relationships. Child health and safety: To ensure children’s health and safety, SACFS requires parents to inform SACFS of any health or safety needs of their child, such as allergies, asthma, high lead levels, and other health concerns. The staff makes it possible for all enrolled children to receive appropriate health care services by consulting with parents immediately when health or developmental problems are suspected or identified. Fire drills are conducted each month and emergency procedures are posted in each classroom. All classrooms and other public areas are child-proofed. Staff members are trained on the prevention of child abuse and certified in First Aid and CPR. Each classroom is supplied with a First Aid Kit, and emergency contact numbers are posted in each center. Teachers incorporate health and safety information in class curriculum. A number of staff members have been trained to administer medication. Child nutrition: A well-balanced lunch and snack are provided daily, free of charge, to every child in the program. Parents share in the child’s nutritional experience by bringing cultural dishes to class, participating in menu planning, and taking part in the Nutrition Committee. A licensed Nutritionist Consultant assists in providing counseling and workshops for parents, prepares menus, and designs child-appropriate nutrition activities. Child mental health: The Brigance Preschool Screening is used upon enrollment to identify children who may be in need of early intervention. Children identified with special needs receive on-site services from Special Needs Providers, as well as referrals to appropriate agencies in the community. Volunteers are always appreciated and welcomed at SACFS.
There are many opportunities to assist the program, such as: volunteering in the classroom, the kitchen, or on trips; participating in the planning and implementation of center-related activities; serving as a Board member or as part of the Policy Committee; providing special expertise to the management team; teaching a special skill to parents (sewing, typing, computer training, cooking); providing music, art, or dance classes to the children in the program; reading a story to the children. If you are interested in volunteering, contact us at info@7thavenuecenter.org. We look forward to hearing from you! © 2012 7th Avenue Center for Family Services
EPiServer & SmartLogic: Implementing Semantic Technology for Content Management By Josette Rigsby FILED UNDER: Semantic Web Web 3.0 is supposed to usher in the widespread use of semantic technologies. However, few web content management (WCM) and enterprise content management (ECM) platforms currently support these tools. A recent case study describes an integration between EPiServer 6 and SmartLogic's Semaphore that provides a model for other organizations that need to introduce semantic capabilities into their WCM and ECM platforms. The Value of Semantic Technology for Content: Semantic technology is a set of standards administered by the World Wide Web Consortium (W3C) that allows machines to better understand the meaning of content and its relationships without human intervention. In the case study, a client wanted a semantic website that allowed users to more easily locate unstructured content within their WCM platform. The key goals of the project were to: leverage semantic technology to allow users to classify content and relate it to other content; allow users to tag content with terms in the ontology so that the search engine would index WCM content based on preferred terms; provide query-able services that could be integrated into search results; and create reusable components that the organization could configure for future developments. EPiServer 6 supports search but does not take into account any semantic enhancement. Since EPiServer supports an extension API, the team was able to integrate Semaphore with the WCM without reducing usability. What did Semaphore bring to the table? The semantic tool provides: Ontology Manager/Server -- Allows users to leverage controlled vocabularies -- thesauri, taxonomies, and ontologies -- to create relationships between terms. Classification Server -- Extracts keywords or terms from documents to best describe the document.
It does this by making use of the information created by the Ontology Manager and Rulebase Generator. Search Enhancement Server -- A server-based application that provides an XML API to the model stored in Semaphore. A Custom Solution for a Semantic Website: The team integrated the tools using a custom-developed solution between Semaphore and EPiServer. The solution allows users to classify documents and post the results to the search engine so that the content is indexed. This allows the search engine to return related content and recommend documents based on the semantic description and relationships between the information. While the solution described in the case study was specific to EPiServer 6 and a specific search provider, the Google Search Appliance, a similar technique could be leveraged to implement semantic capabilities in other content management solutions. Additional details regarding the implementation are available on the EPiServer blog.
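The classify-then-index flow the case study describes can be sketched in outline. Everything below is hypothetical: neither EPiServer's extension API nor Semaphore's Classification Server exposes these exact names. The `classify()` stub merely stands in for a call to the classification service, and `index_document()` for the feed to the search engine.

```python
# Hypothetical sketch of the classify-then-index flow, not real
# Semaphore or EPiServer API calls.

def classify(text, ontology):
    """Stand-in for the Classification Server: return preferred terms
    whose label or any synonym appears in the text."""
    terms = []
    for preferred, synonyms in ontology.items():
        if any(s.lower() in text.lower() for s in [preferred] + synonyms):
            terms.append(preferred)
    return terms

def index_document(doc, ontology):
    """Attach preferred terms as metadata so the search engine indexes
    content under the controlled vocabulary, not just raw keywords."""
    doc = dict(doc)  # leave the original untouched
    doc["preferred_terms"] = classify(doc["body"], ontology)
    return doc

# Toy controlled vocabulary: preferred term -> synonyms
ontology = {"Semantic Web": ["RDF", "ontology", "linked data"]}
page = {"id": 42, "body": "An intro to RDF and linked data."}
indexed = index_document(page, ontology)
# indexed["preferred_terms"] == ["Semantic Web"]
```

In the real integration, the returned preferred terms would be posted alongside the document to the search appliance, which is what lets search return related content by shared vocabulary rather than by literal keyword overlap.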
Climate: New study projects major habitat losses for birds, reptiles in Southwest Posted on April 8, 2014 by Bob Berwyn A gray jay searches for bugs in a stand of lodgepole pines near Frisco, Colorado. A few bird species may gain some ground By Summit Voice FRISCO — Reptile species like the iconic chuckwalla will probably experience significant habitat loss as global temperatures climb during the next few decades, scientists said this week in a new study projecting climate change impacts to southwestern birds and reptiles. The study was done by scientists with the U.S. Geological Survey, University of New Mexico, and Northern Arizona University. Overall, the findings suggest many reptile species will lose ground as conditions get warmer and drier. The picture is a little more complex with breeding birds. Some species, including black-throated sparrows and gray vireos, could expand their range, but others, including pygmy nuthatches, sage thrashers and Williamson’s sapsuckers, will likely lose breeding habitat and populations will decline. Pinyon jays could lose as much as 25 to 33 percent of their breeding habitat as pinyon pines give way to warmer conditions. “Not surprisingly, whether a species is projected to be a winner or a loser depends primarily on its natural history and habitat needs and requirements,” said USGS scientist Charles van Riper III, the lead author on the study. “Land managers should be aware of these potential changes so that they can adjust their management practices accordingly.” To conduct the study, scientists coupled existing global climate change models with newly developed species distribution models to estimate future losses and gains for 7 southwestern upland bird species and 5 reptile species. The study area focused on the Sonoran Desert and Colorado Plateau ecosystems within Arizona, western New Mexico, Utah, southwestern Colorado and southeastern California, but also included the rest of the Western United States.
Focal wildlife species included resident and migratory birds and reptiles, which make up most of the vertebrate biodiversity in the region. Temperatures in this region are projected to increase 6.3–7.2°F (3.5–4°C) within the next 60–90 years, while precipitation is projected to decline by 5–20 percent. “Changes of this magnitude may have profound effects on distribution and viability of many species,” said Stephen T. Jackson, director of the Interior Department’s Southwest Climate Science Center. “Temperature matters a lot, biologically, in arid and semi-arid regions.” One very practical result of the project is a website with a series of range maps projecting the potential effects of climate change on bird and reptile distributions in the Western United States for three different time periods in the next 90 years. These predictions can help managers and policy makers better prioritize conservation efforts, van Riper said. “Wildlife resource managers need regionally specific information about climate change consequences so they can better identify tools and strategies to conserve and sustain habitats in their region,” said Doug Beard, director of the USGS National Climate Change and Wildlife Science Center, which supported the project. “Managers can use these results to help plan for ways to offset projected effects of climate change on these species.” Detailed Bird Species Projections: Black-throated sparrow: breeding range projected to increase by 34-47 percent between 2010 and 2099. Gray vireo: breeding range projected to increase by 58-71 percent between 2010 and 2099. Virginia’s warbler: breeding range projected to decrease slightly, by 1.5-7 percent between 2010 and 2099. Sage thrasher: breeding range projected to decrease by 78 percent between 2010 and 2099. Pinyon jay: breeding range projected to decrease by 25-31 percent between 2010 and 2099. Pygmy nuthatch: breeding range projected to decrease by 75-81 percent between 2010 and 2099.
Williamson’s sapsucker: breeding range projected to decrease by 73-78 percent between 2010 and 2099. Overall: Future climate change will negatively affect the distributions of reptiles in the Western and Southwestern U.S. The one exception is the Sonoran desert tortoise, whose range is forecast to expand under unlimited dispersal; if a decrease happens, it is only about one percent. Reptiles can’t move as easily as birds, nor can they regulate their body temperature, so they can only move minimally in response to changing climates. The authors found that the projected distributional gain or loss in a reptile species was directly tied to the warmth of its current range. Thus, the less mobile reptiles will be more greatly affected by increasing temperatures. Plateau striped whiptail: range projected to decrease by 42 percent, assuming no dispersal, or by 17 percent, with unlimited dispersal, between 2010 and 2099. Arizona black rattlesnake: range projected to decrease between 32 and 46 percent between 2010 and 2099. Sonoran desert tortoise: The Sonoran (Morafka’s) desert tortoise is the only reptile species for which projections do not include a decrease in suitable habitat by 2099, and only when unlimited dispersal is assumed. When assuming no dispersal, a slight one percent decrease is forecast in the extent of suitable habitat. Common lesser earless lizard: range projected to decrease by 22-49 percent from 2010 to 2099. Common chuckwalla: projected range is likely to decrease by between 13 and 23 percent between 2010 and 2099. The report, Projecting climate effects on birds and reptiles of the southwestern United States, is authored by Charles van Riper III, USGS; James Hatten, USGS; J. Tom Giermakowski, University of New Mexico; Jennifer A. Holmes and Matthew J. Johnson, Northern Arizona University; and others. For more information about the USGS National Climate Change and Wildlife Science Center, please visit this website.
Humans Would Be Better Off If They Monkeyed Around Like the Muriquis Biologist Karen Strier has been studying these peace-loving Brazilian primates and their egalitarian lifestyle for decades Unlike the chest-beating primates of popular imagination, Brazil’s northern muriquis are easygoing and highly cooperative. (Mark Moffett / Minden Pictures) Steve Kemper It’s 9 o’clock on a June morning in a muggy tropical forest not far from Brazil’s Atlantic coast and brown howler monkeys have been roaring for an hour. But the muriquis—the largest primates in the Americas after human beings, and the animals that the anthropologist Karen Strier and I have huffed uphill to see—are still curled high in the crooks of trees, waiting for the morning sun to warm them. As they begin to stir, the adults scratch, stretch and watch the suddenly frisky youngsters without moving much themselves. A few languidly grab leaves for breakfast. They are striking figures, with fur that varies between gray, light brown and russet. Their black faces inspired the Brazilian nickname “charcoal monkey,” after the sooty features of charcoal makers. Strier knows these faces well. At age 54, the University of Wisconsin-Madison professor has been observing muriquis here for three decades. One of the longest-running studies of its kind, it has upended conventional wisdom about primates and may have a surprising thing or two to say about human nature. “Louise!” Strier says, spotting one of her old familiars.
Louise belongs to Strier’s original study group of 23—clássicos, Strier’s Brazilian students call them. “She’s the only female who’s never had a baby,” says Strier. “Her friends are some of the old girls.” Above us, two youngsters frolic near their mother. “That’s Barbara,” says Strier, “and her 3-year-old twins Bamba and Beleco.” Female muriquis typically emigrate out of their natal group at about age 6, but Barbara has never left hers, the Matão study group, named after a valley that bisects this part of the forest. Even today, more than two years after I visited Brazil, Barbara remains in the group. Strier first came to this federally protected reserve in 1982, at the invitation of Russell Mittermeier, now president of Conservation International and chairman of the primate specialist group of the International Union for Conservation of Nature’s Species Survival Commission, who had been conducting a survey of primates in eastern Brazil. The reserve at the time held only about 50 muriquis, and Strier, a Harvard graduate student, was smitten with the lanky creatures cavorting in the canopy. “As soon as I saw the muriquis,” says Strier, “I said, ‘This is it.’” She stayed for two months and then returned for 14 more. In those days, to reach this patch of forest she rode a bus almost 40 miles from the nearest town and walked the last mile to a simple house without electricity. Often alone, she rose before dawn to look for the monkeys and didn’t leave the forest until they had settled down at dusk. She cut her own network of footpaths, collecting data on births, relationships, diets, dispositions, daily locations and emigrations. At night, she sorted the data by the light of gas lanterns. “As my contact with the animals increased, they introduced me to new species of food that they ate, and allowed me to witness new behaviors,” Strier wrote in her 1992 book Faces in the Forest, now a classic of primatology. 
As a personal account of a field biologist’s extraordinary, often lonely efforts to become acquainted with a wild primate, Strier’s work has been compared to Jane Goodall’s In the Shadow of Man and Dian Fossey’s Gorillas in the Mist. When Strier was first getting to know the muriquis, primatology was still largely focused on just a handful of species that had adapted to life on the ground, including baboons, or that had close evolutionary relationships with humans, such as apes. This emphasis came to shape public perception of primates as essentially aggressive. We picture chest-beating, teeth-flashing dominant male gorillas competing to mate with any female they choose. We picture, as Goodall had witnessed beginning in 1974, chimpanzees invading other territories, biting and beating other chimps to death. Primates, including possibly the most violent one of all—us—seemed to be born ruffians.
Building Resilience in Children The world can be a frightening place. As a parent, I am constantly aware of choices that I make to minimize my perception of fear and uncertainty. Death, illness, divorce, crime, war, child abductions, tsunamis, and terrorism — both here and abroad — have defined an evolving landscape for raising our families. How do we manage to parent from a place of love and understanding, not fear and paranoia? It’s not possible to protect our children from the ups and downs of life. Raising resilient children, however, is possible and can provide them with the tools they need to respond to the challenges of adolescence and young adulthood and to navigate successfully in adulthood. Despite our best efforts, we cannot prevent adversity and daily stress; but we can learn to be more resilient by changing how we think about challenges and adversities. Today’s families, especially our children, are under tremendous stress with the potential to damage both physical health and psychological well-being. The stress comes from family life that is always on the go, schedules packed with extracurricular activities, and ever-present peer pressure. In the teen years, the anxiety and pressure are related to getting into “the” college. In today’s environment, children and teens need to develop strengths, acquire skills to cope, recover from hardships, and be prepared for future challenges. They need to be resilient in order to succeed in life. That is why Kenneth Ginsburg, M.D., MS Ed, FAAP, a pediatrician specializing in adolescent medicine at The Children’s Hospital of Philadelphia (CHOP), has joined forces with the American Academy of Pediatrics (AAP) to author A Parent’s Guide to Building Resilience in Children and Teens: Giving Your Child Roots and Wings. The new book provides a dynamic resource to help parents and caregivers build resilience in children, teens, and young adults. Dr.
Ginsburg has identified seven “C”s of resilience, recognizing that “resilience isn’t a simple, one-part entity.” Parents can use these guidelines to help their children recognize their abilities and inner resources. Competence describes the feeling of knowing that you can handle a situation effectively. We can help the development of competence by:
- Helping children focus on individual strengths
- Focusing any identified mistakes on specific incidents
- Empowering children to make decisions
- Being careful that your desire to protect your child doesn’t mistakenly send a message that you don’t think he or she is competent to handle things
- Recognizing the competencies of siblings individually and avoiding comparisons
A child’s belief in his own abilities is derived from competence. Build confidence by:
- Focusing on the best in each child so that he or she can see that, as well
- Clearly expressing the best qualities, such as fairness, integrity, persistence, and kindness
- Recognizing when he or she has done well
- Praising honestly about specific achievements; not diffusing praise that may lack authenticity
- Not pushing the child to take on more than he or she can realistically handle
Developing close ties to family and community creates a solid sense of security that helps lead to strong values and prevents alternative destructive paths to love and attention. You can help your child connect with others by:
- Building a sense of physical safety and emotional security within your home
- Allowing the expression of all emotions, so that kids will feel comfortable reaching out during difficult times
- Addressing conflict openly in the family to resolve problems
- Creating a common area where the family can share time (not necessarily TV time)
- Fostering healthy relationships that will reinforce positive messages
To continue reading follow this link: https://www.healthychildren.org/English/healthy-living/emotional-wellness/Building-Resilience/Pages/Building-Resilience-in-Children.aspx
Loss of CCDC6, the first identified RET partner gene, affects pH2AX S139 levels and accelerates mitotic entry upon DNA damage. Francesco Merolla, Chiara Luise, Mark T Muller, Roberto Pacelli, Alfredo Fusco, Angela Celetti. CCDC6 was originally identified in chimeric genes caused by chromosomal translocation involving the RET proto-oncogene in some thyroid tumors, mostly upon ionizing radiation exposure. Recognised as a pro-apoptotic phosphoprotein that negatively regulates CREB1-dependent transcription, CCDC6 is an ATM substrate that is responsive to genotoxic stress. Here we report that, following genotoxic stress, loss or inactivation of CCDC6 in cancers that carry the CCDC6 fusion accelerates the dephosphorylation of pH2AX S139, resulting in defective G2 arrest and premature mitotic entry. Moreover, we show that CCDC6-depleted cells appear to repair DNA damage in a shorter time compared to controls, based on reporter assays in cells. High-throughput proteomic screening predicted the interaction between the CCDC6 gene product and the catalytic subunit of Serine-Threonine Protein Phosphatase 4 (PP4c), recently identified as the evolutionarily conserved pH2AX S139 phosphatase that is activated upon DNA damage. We describe the interaction between CCDC6 and PP4c and we report the modulation of PP4c enzymatic activity in CCDC6-depleted cells. We discuss the functional significance of CCDC6-PP4c interactions and hypothesize that CCDC6 may act in the DNA Damage Response by negatively modulating PP4c activity. Overall, our data suggest that in primary tumours the loss of CCDC6 function could influence genome stability and thereby contribute to carcinogenesis.

Assessing Cell Cycle Progression of Neural Stem and Progenitor Cells in the Mouse Developing Brain after Genotoxic Stress Authors: Olivier Etienne, Amandine Bery, Telma Roque, Chantal Desmaze, François D. Boussin.
JoVE Neuroscience Neurons of the cerebral cortex are generated during brain development from different types of neural stem and progenitor cells (NSPC), which form a pseudostratified epithelium lining the lateral ventricles of the embryonic brain. Genotoxic stresses, such as ionizing radiation, have highly deleterious effects on the developing brain related to the high sensitivity of NSPC. Elucidation of the cellular and molecular mechanisms involved depends on the characterization of the DNA damage response of these particular types of cells, which requires an accurate method to determine NSPC progression through the cell cycle in the damaged tissue. Here is shown a method based on successive intraperitoneal injections of EdU and BrdU in pregnant mice and further detection of these two thymidine analogues in coronal sections of the embryonic brain. EdU and BrdU are both incorporated in the DNA of replicating cells during S phase and are detected by two different techniques (azide or a specific antibody, respectively), which facilitates their simultaneous detection. EdU and BrdU staining are then determined for each NSPC nucleus as a function of its distance from the ventricular margin in a standard region of the dorsal telencephalon. Thus this dual labeling technique allows distinguishing cells that progressed through the cell cycle from those that have activated a cell cycle checkpoint leading to cell cycle arrest in response to DNA damage. An example experiment is presented, in which EdU was injected before irradiation and BrdU immediately after, and analyses were performed within the 4 hr following irradiation. This protocol provides an accurate analysis of the acute DNA damage response of NSPC as a function of the phase of the cell cycle at which they were irradiated. This method is easily transposable to many other systems in order to determine the impact of a particular treatment on cell cycle progression in living tissues.
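The dual-label readout described above reduces to a simple decision rule per nucleus. The sketch below is an illustrative interpretation only, with category names of my own choosing rather than the authors' scoring scheme; the premise it encodes is the one stated in the abstract, that EdU marks cells in S phase before irradiation and BrdU marks cells in S phase after it.

```python
def classify_nucleus(edu_positive, brdu_positive):
    """Decision rule for one NSPC nucleus from the dual-label protocol.

    Labels are illustrative (my terms, not the authors'). EdU was
    injected before irradiation and BrdU immediately after, so each
    flag records whether the cell was replicating DNA at that time.
    """
    if edu_positive and brdu_positive:
        return "progressed"            # still replicating after damage
    if edu_positive:
        return "exited_or_arrested"    # was in S phase, stopped labeling
    if brdu_positive:
        return "entered_S_after_IR"    # began S phase post-irradiation
    return "outside_S"                 # not in S phase at either pulse

# A checkpoint-arrested cell shows up as EdU+ / BrdU-:
# classify_nucleus(True, False) -> "exited_or_arrested"
```

In the actual protocol each classification is additionally recorded against the nucleus's distance from the ventricular margin, which is what ties the readout to cell-cycle phase at the time of irradiation.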
Quantitation and Analysis of the Formation of HO-Endonuclease Stimulated Chromosomal Translocations by Single-Strand Annealing in Saccharomyces cerevisiae Authors: Lauren Liddell, Glenn Manthey, Nicholas Pannunzio, Adam Bailis. Institutions: Irell & Manella Graduate School of Biological Sciences, City of Hope Comprehensive Cancer Center and Beckman Research Institute, University of Southern California, Norris Comprehensive Cancer Center. Genetic variation is frequently mediated by genomic rearrangements that arise through interaction between dispersed repetitive elements present in every eukaryotic genome. This process is an important mechanism for generating diversity between and within organisms [1-3]. The human genome consists of approximately 40% repetitive sequence of retrotransposon origin, including a variety of LINEs and SINEs [4]. Exchange events between these repetitive elements can lead to genome rearrangements, including translocations, that can disrupt gene dosage and expression, which can result in autoimmune and cardiovascular diseases [5], as well as cancer in humans [6-9]. Exchange between repetitive elements occurs in a variety of ways. Exchange between sequences that share perfect (or near-perfect) homology occurs by a process called homologous recombination (HR). By contrast, non-homologous end joining (NHEJ) uses little or no sequence homology for exchange [10,11]. The primary purpose of HR, in mitotic cells, is to repair double-strand breaks (DSBs) generated endogenously by aberrant DNA replication and oxidative lesions, or by exposure to ionizing radiation (IR) and other exogenous DNA damaging agents. In the assay described here, DSBs are simultaneously created bordering recombination substrates at two different chromosomal loci in diploid cells by a galactose-inducible HO-endonuclease (Figure 1).
The repair of the broken chromosomes generates chromosomal translocations by single strand annealing (SSA), a process where homologous sequences adjacent to the chromosome ends are covalently joined subsequent to annealing. One of the substrates, his3-Δ3', contains a 3' truncated HIS3 allele and is located on one copy of chromosome XV at the native HIS3 locus. The second substrate, his3-Δ5', is located at the LEU2 locus on one copy of chromosome III, and contains a 5' truncated HIS3 allele. Both substrates are flanked by a HO endonuclease recognition site that can be targeted for incision by HO-endonuclease. HO endonuclease recognition sites native to the MAT locus, on both copies of chromosome III, have been deleted in all strains. This prevents interaction between the recombination substrates and other broken chromosome ends from interfering in the assay. The KAN-MX-marked galactose-inducible HO endonuclease expression cassette is inserted at the TRP1 locus on chromosome IV. The substrates share 311 bp or 60 bp of the HIS3 coding sequence that can be used by the HR machinery for repair by SSA. Cells that use these substrates to repair broken chromosomes by HR form an intact HIS3 allele and a tXV::III chromosomal translocation that can be selected for by the ability to grow on medium lacking histidine (Figure 2A). Translocation frequency by HR is calculated by dividing the number of histidine prototrophic colonies that arise on selective medium by the total number of viable cells that arise after plating appropriate dilutions onto non-selective medium (Figure 2B). A variety of DNA repair mutants have been used to study the genetic control of translocation formation by SSA using this system [12-14].
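The frequency calculation in the last step can be written out explicitly. The helper below is a hypothetical sketch: the function name and the dilution-correction convention are assumptions for illustration, not taken from the published protocol, and it assumes equal volumes plated on both media.

```python
def translocation_frequency(his_plus_colonies, his_plus_dilution,
                            viable_colonies, viable_dilution):
    """Translocation frequency = His+ prototrophs per viable cell.

    Each plate count is corrected back to the undiluted culture by
    dividing by its dilution factor (e.g. 1e-5 for a 10^-5 dilution).
    Hypothetical helper; conventions are assumptions, not the paper's.
    """
    his_plus_total = his_plus_colonies / his_plus_dilution
    viable_total = viable_colonies / viable_dilution
    return his_plus_total / viable_total

# e.g. 12 His+ colonies from undiluted cells on selective medium and
# 250 colonies from a 10^-5 dilution on non-selective medium:
freq = translocation_frequency(12, 1.0, 250, 1e-5)
# freq is approximately 4.8e-7
```

The selective plate is typically counted at a much lower dilution than the viability plate precisely because translocation events are this rare.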
Genetics, Issue 55, translocation formation, HO-endonuclease, Genomic Southern blot, Chromosome blot, Pulsed-field gel electrophoresis, Homologous recombination, DNA double-strand breaks, Single-strand annealing

Study of the DNA Damage Checkpoint using Xenopus Egg Extracts Authors: Jeremy Willis, Darla DeStephanis, Yogin Patel, Vrushab Gowda, Shan Yan. Institutions: University of North Carolina at Charlotte. On a daily basis, cells are subjected to a variety of endogenous and environmental insults. To combat these insults, cells have evolved DNA damage checkpoint signaling as a surveillance mechanism to sense DNA damage and direct cellular responses to DNA damage. There are several groups of proteins called sensors, transducers and effectors involved in DNA damage checkpoint signaling (Figure 1). In this complex signaling pathway, ATR (ATM and Rad3-related) is one of the major kinases that can respond to DNA damage and replication stress. Activated ATR can phosphorylate its downstream substrates such as Chk1 (Checkpoint kinase 1). Consequently, phosphorylated and activated Chk1 leads to many downstream effects in the DNA damage checkpoint including cell cycle arrest, transcription activation, DNA damage repair, and apoptosis or senescence (Figure 1). When DNA is damaged, failing to activate the DNA damage checkpoint results in unrepaired damage and, subsequently, genomic instability. The study of the DNA damage checkpoint will elucidate how cells maintain genomic integrity and provide a better understanding of how human diseases, such as cancer, develop. Xenopus laevis egg extracts are emerging as a powerful cell-free extract model system in DNA damage checkpoint research. Low-speed extract (LSE) was initially described by the Masui group [1]. The addition of demembranated sperm chromatin to LSE results in nuclei formation where DNA is replicated in a semiconservative fashion once per cell cycle.
The ATR/Chk1-mediated checkpoint signaling pathway is triggered by DNA damage or replication stress [2]. Two methods are currently used to induce the DNA damage checkpoint: DNA damaging approaches and DNA damage-mimicking structures [3]. DNA damage can be induced by ultraviolet (UV) irradiation, γ-irradiation, methyl methanesulfonate (MMS), mitomycin C (MMC), 4-nitroquinoline-1-oxide (4-NQO), or aphidicolin [3,4]. MMS is an alkylating agent that inhibits DNA replication and activates the ATR/Chk1-mediated DNA damage checkpoint [4-7]. UV irradiation also triggers the ATR/Chk1-dependent DNA damage checkpoint [8]. The DNA damage-mimicking structure AT70 is an annealed complex of two oligonucleotides, poly-(dA)70 and poly-(dT)70. The AT70 system was developed in Bill Dunphy's laboratory and is widely used to induce ATR/Chk1 checkpoint signaling [9-12]. Here, we describe protocols (1) to prepare cell-free egg extracts (LSE), (2) to treat Xenopus sperm chromatin with two different DNA damaging approaches (MMS and UV), (3) to prepare the DNA damage-mimicking structure AT70, and (4) to trigger the ATR/Chk1-mediated DNA damage checkpoint in LSE with damaged sperm chromatin or a DNA damage-mimicking structure. Genetics, Issue 69, Molecular Biology, Cellular Biology, Developmental Biology, DNA damage checkpoint, Xenopus egg extracts, Xenopus laevis, Chk1 phosphorylation, ATR, AT70, MMS, UV, immunoblotting

A Quantitative Assay to Study Protein:DNA Interactions, Discover Transcriptional Regulators of Gene Expression, and Identify Novel Anti-tumor Agents Authors: Karen F. Underwood, Maria T. Mochin, Jessica L. Brusgard, Moran Choe, Avi Gnatt, Antonino Passaniti. Institutions: University of Maryland School of Medicine.
Many DNA-binding assays such as electrophoretic mobility shift assays (EMSA), chemiluminescent assays, chromatin immunoprecipitation (ChIP)-based assays, and multiwell-based assays are used to measure transcription factor activity. However, these assays are nonquantitative, lack specificity, may involve the use of radiolabeled oligonucleotides, and may not be adaptable for the screening of inhibitors of DNA binding. On the other hand, using a quantitative DNA-binding enzyme-linked immunosorbent assay (D-ELISA), we demonstrate nuclear protein interactions with DNA using the RUNX2 transcription factor that depend on specific association with consensus DNA-binding sequences present on biotin-labeled oligonucleotides. Preparation of cells, extraction of nuclear protein, and design of double-stranded oligonucleotides are described. Avidin-coated 96-well plates are fixed with alkaline buffer and incubated with nuclear proteins in nucleotide blocking buffer. Following extensive washing of the plates, specific primary antibody and secondary antibody incubations are followed by the addition of horseradish peroxidase substrate and development of the colorimetric reaction. Stop-reaction mode or continuous kinetic monitoring was used to quantitatively measure protein interaction with DNA. We discuss appropriate specificity controls, including treatment with non-specific IgG or without protein or primary antibody. Applications of the assay are described, including its utility in drug screening, and representative positive and negative results are discussed. Cellular Biology, Issue 78, Transcription Factors, Vitamin D, Drug Discovery, Enzyme-Linked Immunosorbent Assay (ELISA), DNA-binding, transcription factor, drug screening, antibody

Two- and Three-Dimensional Live Cell Imaging of DNA Damage Response Proteins Authors: Jason M. Beckta, Scott C. Henderson, Kristoffer Valerie.
Institutions: Virginia Commonwealth University. Double-strand breaks (DSBs) are the most deleterious DNA lesions a cell can encounter. If left unrepaired, DSBs harbor great potential to generate mutations and chromosomal aberrations [1]. To prevent this trauma from catalyzing genomic instability, it is crucial for cells to detect DSBs, activate the DNA damage response (DDR), and repair the DNA. When stimulated, the DDR works to preserve genomic integrity by triggering cell cycle arrest to allow for repair to take place or force the cell to undergo apoptosis. The predominant mechanisms of DSB repair occur through nonhomologous end-joining (NHEJ) and homologous recombination repair (HRR) (reviewed in [2]). There are many proteins whose activities must be precisely orchestrated for the DDR to function properly. Herein, we describe a method for 2- and 3-dimensional (D) visualization of one of these proteins, 53BP1. The p53-binding protein 1 (53BP1) localizes to areas of DSBs by binding to modified histones [3,4], forming foci within 5-15 minutes [5]. The histone modifications and recruitment of 53BP1 and other DDR proteins to DSB sites are believed to facilitate the structural rearrangement of chromatin around areas of damage and contribute to DNA repair [6]. Beyond direct participation in repair, additional roles have been described for 53BP1 in the DDR, such as regulating an intra-S checkpoint, a G2/M checkpoint, and activating downstream DDR proteins [7-9]. Recently, it was discovered that 53BP1 does not form foci in response to DNA damage induced during mitosis, instead waiting for cells to enter G1 before localizing to the vicinity of DSBs [6]. DDR proteins such as 53BP1 have been found to associate with mitotic structures (such as kinetochores) during the progression through mitosis [10].
In this protocol we describe the use of 2- and 3-D live cell imaging to visualize the formation of 53BP1 foci in response to the DNA damaging agent camptothecin (CPT), as well as 53BP1's behavior during mitosis. Camptothecin is a topoisomerase I inhibitor that primarily causes DSBs during DNA replication. To accomplish this, we used a previously described 53BP1-mCherry fluorescent fusion protein construct consisting of a 53BP1 protein domain able to bind DSBs11. In addition, we used a histone H2B-GFP fluorescent fusion protein construct able to monitor chromatin dynamics throughout the cell cycle but in particular during mitosis12. Live cell imaging in multiple dimensions is an excellent tool to deepen our understanding of the function of DDR proteins in eukaryotic cells. Genetics, Issue 67, Molecular Biology, Cellular Biology, Biochemistry, DNA, Double-strand breaks, DNA damage response, proteins, live cell imaging, 3D cell imaging, confocal microscopy

High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli Authors: Natalie J. Saez, Hervé Nozach, Marilyne Blemont, Renaud Vincentelli. Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging since many proteins are expressed in an insoluble form. When working with difficult or multiple targets it is therefore recommended to use high throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression.
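The outcome of such a small-scale screen is a table of soluble yields per condition, from which conditions for scale-up are chosen. A minimal sketch of that selection step, with entirely hypothetical condition names and yields (not data from the protocol), within the 0.1-100 mg/L range the quantitative screen is designed to resolve:

```python
# Minimal sketch (hypothetical data, not the authors' pipeline): ranking
# small-scale expression conditions by soluble yield to pick one for scale-up.

def best_condition(soluble_yields_mg_per_l, minimum=0.1):
    """Return the highest-yielding condition at or above the detection floor."""
    usable = {cond: y for cond, y in soluble_yields_mg_per_l.items() if y >= minimum}
    if not usable:
        return None
    return max(usable, key=usable.get)

# Hypothetical screen of one venom peptide across strain/temperature conditions.
screen = {
    "37C_BL21": 0.05,          # below detection floor: essentially insoluble
    "18C_BL21": 2.5,
    "18C_DsbC_fusion": 45.0,   # DsbC fusion rescues folding of the disulfide-rich target
}
winner = best_condition(screen)   # "18C_DsbC_fusion"
```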
To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment. Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project (www.venomics.eu), our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein it is likely that soluble disulfide-rich proteins will be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays. Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides

Live Imaging of Mitosis in the Developing Mouse Embryonic Cortex Authors: Louis-Jan Pilaz, Debra L. Silver.
Institutions: Duke University Medical Center. Although of short duration, mitosis is a complex and dynamic multi-step process fundamental for development of organs including the brain. In the developing cerebral cortex, abnormal mitosis of neural progenitors can cause defects in brain size and function. Hence, there is a critical need for tools to understand the mechanisms of neural progenitor mitosis. Cortical development in rodents is an outstanding model for studying this process. Neural progenitor mitosis is commonly examined in fixed brain sections. This protocol will describe in detail an approach for live imaging of mitosis in ex vivo embryonic brain slices. We will describe the critical steps for this procedure, which include: brain extraction, brain embedding, vibratome sectioning of brain slices, staining and culturing of slices, and time-lapse imaging. We will then demonstrate and describe in detail how to perform post-acquisition analysis of mitosis. We include representative results from this assay using the vital dye Syto11, transgenic mice (histone H2B-EGFP and centrin-EGFP), and in utero electroporation (mCherry-α-tubulin). We will discuss how this procedure can be best optimized and how it can be modified for study of genetic regulation of mitosis. Live imaging of mitosis in brain slices is a flexible approach to assess the impact of age, anatomy, and genetic perturbation in a controlled environment, and to generate a large amount of data with high temporal and spatial resolution. Hence this protocol will complement existing tools for analysis of neural progenitor mitosis. Neuroscience, Issue 88, mitosis, radial glial cells, developing cortex, neural progenitors, brain slice, live imaging

Chromosome Replication Timing Combined with Fluorescent In situ Hybridization Authors: Leslie Smith, Mathew Thayer. Institutions: Oregon Health & Science University.
Mammalian DNA replication initiates at multiple sites along chromosomes at different times during S phase, following a temporal replication program. The specification of replication timing is thought to be a dynamic process regulated by tissue-specific and developmental cues that are responsive to epigenetic modifications. However, the mechanisms regulating where and when DNA replication initiates along chromosomes remain poorly understood. Homologous chromosomes usually replicate synchronously; however, there are notable exceptions to this rule. For example, in female mammalian cells one of the two X chromosomes becomes late replicating through a process known as X inactivation1. Along with this delay in replication timing, estimated to be 2-3 hr, the majority of genes become transcriptionally silenced on one X chromosome. In addition, a discrete cis-acting locus, known as the X inactivation center, regulates this X inactivation process, including the induction of delayed replication timing on the entire inactive X chromosome. Furthermore, certain chromosome rearrangements found in cancer cells and in cells exposed to ionizing radiation display a significant delay in replication timing of >3 hours that affects the entire chromosome2,3. Recent work from our lab indicates that disruption of discrete cis-acting autosomal loci results in an extremely late replicating phenotype that affects the entire chromosome4. Additional 'chromosome engineering' studies indicate that certain chromosome rearrangements affecting many different chromosomes result in this abnormal replication-timing phenotype, suggesting that all mammalian chromosomes contain discrete cis-acting loci that control proper replication timing of individual chromosomes5. Here, we present a method for the quantitative analysis of chromosome replication timing combined with fluorescent in situ hybridization.
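The comparison at the heart of this assay, whether one homolog lags its partner by more than the ~3 hr threshold described above, can be sketched numerically. All values below are hypothetical illustrations (the S-phase length and the conversion from replicated fraction to hours are assumptions, not parameters from the protocol):

```python
# Minimal sketch (illustrative, hypothetical values): flagging asynchronous
# replication between homologous chromosomes from how far through S phase
# each homolog has replicated.

def replication_delay_hr(fraction_replicated_a, fraction_replicated_b, s_phase_hr=8.0):
    """Convert the difference in replicated fraction into an approximate delay in hours."""
    return abs(fraction_replicated_a - fraction_replicated_b) * s_phase_hr

def is_asynchronous(delay_hr, threshold_hr=3.0):
    """Apply the >3 hr criterion described for rearranged chromosomes."""
    return delay_hr > threshold_hr

normal_pair = replication_delay_hr(0.55, 0.50)   # 0.4 hr: synchronous homologs
rearranged = replication_delay_hr(0.80, 0.30)    # 4.0 hr: one delayed homolog
```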
This method allows for a direct comparison of replication timing between homologous chromosomes within the same cell, and was adapted from6. In addition, this method allows for the unambiguous identification of chromosomal rearrangements that correlate with changes in replication timing that affect the entire chromosome. This method has advantages over recently developed high throughput micro-array or sequencing protocols that cannot distinguish between homologous alleles present on rearranged and un-rearranged chromosomes. In addition, because the method described here evaluates single cells, it can detect changes in chromosome replication timing on chromosomal rearrangements that are present in only a fraction of the cells in a population. Genetics, Issue 70, Biochemistry, Molecular Biology, Cellular Biology, Chromosome replication timing, fluorescent in situ hybridization, FISH, BrdU, cytogenetics, chromosome rearrangements, fluorescence microscopy

Ex vivo Culture of Drosophila Pupal Testis and Single Male Germ-line Cysts: Dissection, Imaging, and Pharmacological Treatment Authors: Stefanie M. K. Gärtner, Christina Rathke, Renate Renkawitz-Pohl, Stephan Awe. Institutions: Philipps-Universität Marburg. During spermatogenesis in mammals and in Drosophila melanogaster, male germ cells develop in a series of essential developmental processes. This includes differentiation from a stem cell population, mitotic amplification, and meiosis. In addition, post-meiotic germ cells undergo a dramatic morphological reshaping process as well as a global epigenetic reconfiguration of the germ line chromatin—the histone-to-protamine switch. Studying the role of a protein in post-meiotic spermatogenesis using mutagenesis or other genetic tools is often impeded by essential embryonic, pre-meiotic, or meiotic functions of the protein under investigation.
The post-meiotic phenotype of a mutant of such a protein could be obscured through an earlier developmental block, or the interpretation of the phenotype could be complicated. The model organism Drosophila melanogaster offers a bypass to this problem: intact testes and even cysts of germ cells dissected from early pupae are able to develop ex vivo in culture medium. Making use of such cultures allows microscopic imaging of living germ cells in testes and of germ-line cysts. Importantly, the cultivated testes and germ cells also become accessible to pharmacological inhibitors, thereby permitting manipulation of enzymatic functions during spermatogenesis, including post-meiotic stages. The protocol presented describes how to dissect and cultivate pupal testes and germ-line cysts. Information on the development of pupal testes and culture conditions is provided alongside microscope imaging data of live testes and germ-line cysts in culture. We also describe a pharmacological assay to study post-meiotic spermatogenesis, exemplified by an assay targeting the histone-to-protamine switch using the histone acetyltransferase inhibitor anacardic acid. In principle, this cultivation method could be adapted to address many other research questions in pre- and post-meiotic spermatogenesis. Developmental Biology, Issue 91, Ex vivo culture, testis, male germ-line cells, Drosophila, imaging, pharmacological assay

Measuring Cell Cycle Progression Kinetics with Metabolic Labeling and Flow Cytometry Authors: Helen Fleisig, Judy Wong. Precise control of the initiation and subsequent progression through the various phases of the cell cycle is of paramount importance in proliferating cells. Cell cycle division is an integral part of growth and reproduction, and deregulation of key cell cycle components has been implicated in the precipitating events of carcinogenesis 1,2.
Molecular agents in anti-cancer therapies frequently target biological pathways responsible for the regulation and coordination of cell cycle division 3. Although cell cycle kinetics tend to vary according to cell type, the distribution of cells amongst the four stages of the cell cycle is rather consistent within a particular cell line due to the consistent pattern of mitogen and growth factor expression. Genotoxic events and other cellular stressors can result in a temporary block of cell cycle progression, resulting in arrest or a temporary pause in a particular cell cycle phase to allow for instigation of the appropriate response mechanism. The ability to experimentally observe the behavior of a cell population with reference to their cell cycle progression stage is an important advance in cell biology. Common procedures such as mitotic shake off, differential centrifugation or flow cytometry-based sorting are used to isolate cells at specific stages of the cell cycle 4-6. These fractionated, cell cycle phase-enriched populations are then subjected to experimental treatments. Yield, purity and viability of the separated fractions can often be compromised using these physical separation methods. As well, the time lapse between separation of the cell populations and the start of experimental treatment, whereby the fractionated cells can progress from the selected cell cycle stage, can pose significant challenges in the successful implementation and interpretation of these experiments. Other approaches to study cell cycle stages include the use of chemicals to synchronize cells. Treatment of cells with chemical inhibitors of key metabolic processes for each cell cycle stage is useful in blocking the progression of the cell cycle to the next stage. For example, the ribonucleotide reductase inhibitor hydroxyurea halts cells at the G1/S juncture by limiting the supply of deoxynucleotides, the building blocks of DNA.
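Cell cycle distributions like those discussed here are commonly read out by flow cytometry of DNA content: the G1 peak sits at 2N fluorescence, G2/M at 4N, and S phase in between. A minimal gating sketch with hypothetical values (the peak position and tolerance are assumptions, not parameters from this protocol):

```python
# Minimal sketch (hypothetical values): binning flow cytometry events into
# cell cycle phases by DNA content, with the G1 peak at 2N and G2/M at 4N.

def classify_phase(dna_content, g1_peak=100.0, tolerance=0.15):
    """Assign G1, S, or G2/M from a propidium-iodide-style DNA content signal."""
    g2_peak = 2.0 * g1_peak
    if dna_content <= g1_peak * (1 + tolerance):
        return "G1"
    if dna_content >= g2_peak * (1 - tolerance):
        return "G2/M"
    return "S"

def phase_fractions(events, g1_peak=100.0):
    """Fraction of events falling in each phase gate."""
    counts = {"G1": 0, "S": 0, "G2/M": 0}
    for value in events:
        counts[classify_phase(value, g1_peak)] += 1
    total = len(events)
    return {phase: n / total for phase, n in counts.items()}

# Hypothetical asynchronous population: mostly G1, with S and G2/M cells.
events = [95, 102, 110, 98, 150, 160, 190, 205, 99, 101]
fractions = phase_fractions(events)   # 60% G1, 20% S, 20% G2/M
```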
Other notable chemicals include treatment with aphidicolin, a DNA polymerase alpha inhibitor, for G1/S arrest; treatment with colchicine and nocodazole, both of which interfere with mitotic spindle formation to halt cells in M phase; and finally, treatment with the DNA chain terminator 5-fluorodeoxyuridine to initiate S phase arrest 7-9. Treatment with these chemicals is an effective means of synchronizing an entire population of cells at a particular phase. With removal of the chemical, cells rejoin the cell cycle in unison. Treatment with the test agent following release from the cell cycle blocking chemical ensures that the drug response elicited is from a uniform, cell cycle stage-specific population. However, since many of the chemical synchronizers are known genotoxic compounds, teasing apart the participation of various response pathways (to the synchronizers vs. the test agents) is challenging. Here we describe a metabolic labeling method for following a subpopulation of actively cycling cells through their progression from the DNA replication phase, through to the division and separation of their daughter cells. Coupled with flow cytometry quantification, this protocol enables measurement of the kinetic progression of the cell cycle in the absence of either mechanically or chemically induced cellular stresses commonly associated with other cell cycle synchronization methodologies 10. In the following sections we will discuss the methodology, as well as some of its applications in biomedical research. Cellular Biology, Issue 63, cell cycle, kinetics, metabolic labeling, flow cytometry, biomedical, genetics, DNA replication

Quantitation of γH2AX Foci in Tissue Samples Authors: Michelle M. Tang, Li-Jeen Mah, Raja S. Vasireddy, George T. Georgiadis, Assam El-Osta, Simon G. Royce, Tom C. Karagiannis.
Institutions: The Alfred Medical Research and Education Precinct, The University of Melbourne, Royal Children's Hospital. DNA double-strand breaks (DSBs) are particularly lethal and genotoxic lesions that can arise either by endogenous (physiological or pathological) processes or by exogenous factors, particularly ionizing radiation and radiomimetic compounds. Phosphorylation of the H2A histone variant, H2AX, at the serine-139 residue, in the highly conserved C-terminal SQEY motif, forming γH2AX, is an early response to DNA double-strand breaks1. This phosphorylation event is mediated by the phosphatidyl-inositol 3-kinase-like kinase (PIKK) family of proteins: ataxia telangiectasia mutated (ATM), DNA-protein kinase catalytic subunit, and ATM and RAD3-related (ATR)2. Overall, DSB induction results in the formation of discrete nuclear γH2AX foci which can be easily detected and quantitated by immunofluorescence microscopy2. Given the unique specificity and sensitivity of this marker, analysis of γH2AX foci has led to a wide range of applications in biomedical research, particularly in radiation biology and nuclear medicine. The quantitation of γH2AX foci has been most widely investigated in cell culture systems in the context of ionizing radiation-induced DSBs. Apart from cellular radiosensitivity, immunofluorescence based assays have also been used to evaluate the efficacy of radiation-modifying compounds. In addition, γH2AX has been used as a molecular marker to examine the efficacy of various DSB-inducing compounds and has recently been heralded as an important marker of ageing and disease, particularly cancer3. Further, immunofluorescence-based methods have been adapted to suit detection and quantitation of γH2AX foci ex vivo and in vivo4,5. Here, we demonstrate a typical immunofluorescence method for detection and quantitation of γH2AX foci in mouse tissues.
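Foci quantitation from immunofluorescence images is, at its core, a thresholding-and-counting problem. A minimal illustrative sketch (not the authors' analysis software; intensities and threshold are hypothetical), counting candidate foci as contiguous above-threshold runs along one row of pixel intensities:

```python
# Minimal sketch (illustrative): counting foci along one row of a thresholded
# immunofluorescence image as contiguous runs of pixels above a threshold.

def count_foci(intensities, threshold):
    """Number of contiguous above-threshold runs (candidate foci) in a profile."""
    foci = 0
    in_focus = False
    for value in intensities:
        if value > threshold and not in_focus:
            foci += 1          # entering a new bright region
            in_focus = True
        elif value <= threshold:
            in_focus = False   # back to background
    return foci

# Hypothetical intensity profile: two bright foci over nuclear background.
profile = [10, 12, 90, 95, 88, 11, 9, 70, 75, 12]
n_foci = count_foci(profile, threshold=50)   # 2
```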
Cellular Biology, Issue 40, immunofluorescence, DNA double-strand breaks, histone variant, H2AX, DNA damage, ionising radiation, reactive oxygen species

Identifying the Effects of BRCA1 Mutations on Homologous Recombination using Cells that Express Endogenous Wild-type BRCA1 Authors: Jeffrey Parvin, Natsuko Chiba, Derek Ransburgh. Institutions: The Ohio State University, Tohoku University. The functional analysis of missense mutations can be complicated by the presence in the cell of the endogenous protein. Structure-function analyses of BRCA1 have been complicated by the lack of a robust assay for the full length BRCA1 protein and the difficulties inherent in working with cell lines that express hypomorphic BRCA1 protein1,2,3,4,5. We developed a system whereby the endogenous BRCA1 protein in a cell was acutely depleted by RNAi targeting the 3'-UTR of the BRCA1 mRNA and replaced by co-transfecting a plasmid expressing a BRCA1 variant. One advantage of this procedure is that the acute silencing of BRCA1 and simultaneous replacement allow the cells to grow without secondary mutations or adaptations that might arise over time to compensate for the loss of BRCA1 function. This depletion and add-back procedure was done in a HeLa-derived cell line that was readily assayed for homologous recombination activity. The homologous recombination assay is based on a previously published method whereby a recombination substrate is integrated into the genome (Figure 1)6,7,8,9. This recombination substrate has the rare-cutting I-SceI restriction enzyme site inside an inactive GFP allele, and downstream is a second inactive GFP allele. Transfection of the plasmid that expresses I-SceI results in a double-stranded break, which may be repaired by homologous recombination, and if homologous recombination does repair the break it creates an active GFP allele that is readily scored by flow cytometry for GFP protein expression.
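The GFP readout lends itself to simple quantitation: the fraction of GFP-positive cells measures HR activity, and each add-back condition is normalized to the wild-type rescue. A minimal sketch with hypothetical cell counts (not data from the paper):

```python
# Minimal sketch (hypothetical numbers): scoring the GFP-based homologous
# recombination assay by flow cytometry counts.

def hr_activity(gfp_positive, total_cells):
    """Fraction of cells that repaired the I-SceI break by HR (GFP+)."""
    return gfp_positive / total_cells

def relative_hr(sample_fraction, wildtype_fraction):
    """HR activity relative to the wild-type BRCA1 add-back (1.0 = full rescue)."""
    return sample_fraction / wildtype_fraction

wildtype = hr_activity(500, 100000)   # wild-type BRCA1 add-back
depleted = hr_activity(55, 100000)    # BRCA1 siRNA, empty vector
m18t = hr_activity(60, 100000)        # variant failing to rescue
i21v = hr_activity(480, 100000)       # variant giving near-complete rescue

fold_reduction = wildtype / depleted  # ~9-fold, within the 8-10x range reported
```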
Depletion of endogenous BRCA1 resulted in an 8-10-fold reduction in homologous recombination activity, and add-back of wild-type plasmid fully restored homologous recombination function. When specific point mutants of full length BRCA1 were expressed from co-transfected plasmids, the effect of the specific missense mutant could be scored. As an example, when the BRCA1(M18T) protein, a variant of unknown clinical significance10, was expressed in these cells, it failed to restore BRCA1-dependent homologous recombination. By contrast, expression of another variant, also of unknown significance, BRCA1(I21V), fully restored BRCA1-dependent homologous recombination function. This strategy of testing the function of BRCA1 missense mutations has been applied to another biological system assaying for centrosome function (Kais et al., unpublished observations). Overall, this approach is suitable for the analysis of missense mutants in any gene that must be analyzed recessively. Cell Biology, Issue 48, BRCA1, homologous recombination, breast cancer, RNA interference, DNA repair

Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae Authors: Melissa N. Patterson, Patrick H. Maxwell. Institutions: Rensselaer Polytechnic Institute. Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age due to technical constraints.
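The fluctuation-test logic behind this kind of aging analysis can be sketched numerically: a mutation rate is estimated from parallel young-cell cultures, then used to predict the mutant frequency expected after a given number of further divisions. A minimal sketch using the classic Luria-Delbruck p0 estimator (the estimator choice and all counts below are assumptions for illustration, not the authors' data or necessarily their exact method):

```python
import math

# Minimal sketch (illustrative): the Luria-Delbruck p0 method turns the
# fraction of cultures with zero mutants into a mutation rate, which then
# predicts mutant frequencies at later replicative ages.

def mutation_rate_p0(cultures_without_mutants, total_cultures, final_cells_per_culture):
    """p0 method: m = -ln(p0); rate = m / N (mutations per cell division)."""
    p0 = cultures_without_mutants / total_cultures
    m = -math.log(p0)                     # expected mutations per culture
    return m / final_cells_per_culture

def predicted_mutant_frequency(rate_per_division, generations, starting_frequency=0.0):
    """Mutant frequency expected to rise roughly linearly with divisions at a constant rate."""
    return starting_frequency + rate_per_division * generations

# Hypothetical young-cell fluctuation test: 37 of 100 cultures had no mutants,
# each grown to 2 x 10^8 cells.
rate = mutation_rate_p0(37, 100, 2e8)       # ~5 x 10^-9 per cell division
expected_at_age_20 = predicted_mutant_frequency(rate, 20)
```

An observed frequency in sorted age-20 mother cells well above `expected_at_age_20` would indicate an age-related increase in the mutation rate.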
For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging. Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging

Investigating Protein-protein Interactions in Live Cells Using Bioluminescence Resonance Energy Transfer Authors: Pelagia Deriziotis, Sarah A. Graham, Sara B. Estruch, Simon E. Fisher.
Institutions: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition and Behaviour. Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a 'donor' luciferase enzyme to an 'acceptor' fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA. Cellular Biology, Issue 87, Protein-protein interactions, Bioluminescence Resonance Energy Transfer, Live cell, Transfection, Luciferase, Yellow Fluorescent Protein, Mutations

Identifying DNA Mutations in Purified Hematopoietic Stem/Progenitor Cells Authors: Ziming Cheng, Ting Zhou, Azhar Merchant, Thomas J. Prihoda, Brian L. Wickes, Guogang Xu, Christi A. Walter, Vivienne I. Rebel. Institutions: UT Health Science Center at San Antonio. In recent years, it has become apparent that genomic instability is tightly related to many developmental disorders, cancers, and aging. Given that stem cells are responsible for ensuring tissue homeostasis and repair throughout life, it is reasonable to hypothesize that the stem cell population is critical for preserving genomic integrity of tissues. Therefore, significant interest has arisen in assessing the impact of endogenous and environmental factors on genomic integrity in stem cells and their progeny, aiming to understand the etiology of stem-cell based diseases. LacI transgenic mice carry a recoverable λ phage vector encoding the LacI reporter system, in which the LacI gene serves as the mutation reporter. The result of a mutated LacI gene is the production of β-galactosidase that cleaves a chromogenic substrate, turning it blue. The LacI reporter system is carried in all cells, including stem/progenitor cells, and can easily be recovered and used to subsequently infect E. coli. After incubating infected E. coli on agarose that contains the correct substrate, plaques can be scored; blue plaques indicate a mutant LacI gene, while clear plaques harbor wild-type. The frequency of blue (among clear) plaques indicates the mutant frequency in the original cell population the DNA was extracted from. Sequencing the mutant LacI gene will show the location of the mutations in the gene and the type of mutation. The LacI transgenic mouse model is well-established as an in vivo mutagenesis assay. Moreover, the mice and the reagents for the assay are commercially available.
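The plaque-scoring arithmetic is straightforward: the mutant frequency is the number of blue plaques over all plaques scored. A minimal sketch with hypothetical counts (not data from the assay):

```python
# Minimal sketch (hypothetical counts): scoring a LacI plaque assay.
# Blue plaques carry a mutant LacI gene; clear plaques carry wild-type.

def mutant_frequency(blue_plaques, clear_plaques):
    """Mutant frequency = mutant plaques / total plaques scored."""
    total = blue_plaques + clear_plaques
    return blue_plaques / total

# Hypothetical stem-cell-enriched sample: 12 blue plaques among 300,000 total.
freq = mutant_frequency(12, 299988)   # 4 x 10^-5
```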
Here we describe in detail how this model can be adapted to measure the frequency of spontaneously occurring DNA mutants in stem cell-enriched Lin-IL7R-Sca-1+cKit++(LSK) cells and other subpopulations of the hematopoietic system. Infection, Issue 84, In vivo mutagenesis, hematopoietic stem/progenitor cells, LacI mouse model, DNA mutations, E. coli

Reconstitution Of β-catenin Degradation In Xenopus Egg Extract Authors: Tony W. Chen, Matthew R. Broadus, Stacey S. Huppert, Ethan Lee. Institutions: Vanderbilt University Medical Center, Cincinnati Children's Hospital Medical Center, Vanderbilt University School of Medicine. Xenopus laevis egg extract is a well-characterized, robust system for studying the biochemistry of diverse cellular processes. Xenopus egg extract has been used to study protein turnover in many cellular contexts, including the cell cycle and signal transduction pathways1-3. Herein, a method is described for isolating Xenopus egg extract that has been optimized to promote the degradation of the critical Wnt pathway component, β-catenin. Two different methods are described to assess β-catenin protein degradation in Xenopus egg extract. One method is visually informative ([35S]-radiolabeled proteins), while the other is more readily scaled for high-throughput assays (firefly luciferase-tagged fusion proteins). The techniques described can be used to, but are not limited to, assess β-catenin protein turnover and identify molecular components contributing to its turnover. Additionally, the ability to purify large volumes of homogenous Xenopus egg extract combined with the quantitative and facile readout of luciferase-tagged proteins allows this system to be easily adapted for high-throughput screening for modulators of β-catenin degradation.
Molecular Biology, Issue 88, Xenopus laevis, Xenopus egg extracts, protein degradation, radiolabel, luciferase, autoradiography, high-throughput screening

Mapping Bacterial Functional Networks and Pathways in Escherichia Coli using Synthetic Genetic Arrays Authors: Alla Gagarinova, Mohan Babu, Jack Greenblatt, Andrew Emili. Institutions: University of Toronto, University of Regina. Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GI)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9, 10. Here, we present the key steps required to perform the quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome-scale9, using natural bacterial conjugation and homologous recombination to systemically generate and measure the fitness of large numbers of double mutants in a colony array format. Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm) - marked mutant alleles from engineered Hfr (High frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan) - marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g.
the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9, 12, 13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics. After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2. Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2 as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex such that additional perturbations do not reduce activity, and hence growth, further. Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9. Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression Quantification of γH2AX Foci in Response to Ionising Radiation Authors: Li-Jeen Mah, Raja S. Vasireddy, Michelle M. 
Tang, George T. Georgiadis, Assam El-Osta, Tom C. Karagiannis. Institutions: The Alfred Medical Research and Education Precinct, The University of Melbourne, The Alfred Medical Research and Education Precinct. DNA double-strand breaks (DSBs), which are induced by either endogenous metabolic processes or by exogenous sources, are one of the most critical DNA lesions with respect to survival and preservation of genomic integrity. An early response to the induction of DSBs is phosphorylation of the H2A histone variant, H2AX, at the serine-139 residue, in the highly conserved C-terminal SQEY motif, forming γH2AX1. Following induction of DSBs, H2AX is rapidly phosphorylated by the phosphatidylinositol 3-kinase-related kinase (PIKK) family of proteins: ataxia telangiectasia mutated (ATM), DNA-dependent protein kinase catalytic subunit (DNA-PKcs), and ATM and RAD3-related (ATR)2. Typically, only a few base pairs (bp) are implicated in a DSB; however, there is significant signal amplification, given the importance of chromatin modifications in DNA damage signalling and repair. Phosphorylation of H2AX, mediated predominantly by ATM, spreads to adjacent areas of chromatin, affecting approximately 0.03% of total cellular H2AX per DSB2,3. This corresponds to phosphorylation of approximately 2000 H2AX molecules spanning ~2 Mbp regions of chromatin surrounding the site of the DSB and results in the formation of discrete γH2AX foci which can be easily visualized and quantitated by immunofluorescence microscopy2. The loss of γH2AX at DSBs reflects repair; however, there is some controversy as to what defines complete repair of DSBs: it has been proposed that rejoining of both strands of DNA is adequate, but it has also been suggested that re-instatement of the original chromatin state of compaction is necessary4-8. The disappearance of γH2AX involves, at least in part, dephosphorylation by phosphatases, phosphatase 2A and phosphatase 4C5,6.
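Conceptually, the foci quantitation described above reduces to counting discrete bright regions above an intensity threshold in the immunofluorescence image. Here is a minimal, self-contained sketch of that counting step; the threshold and minimum-size values are illustrative placeholders, not values taken from the protocol, and real pipelines would first segment individual nuclei and score foci per nucleus.

```python
import numpy as np
from collections import deque

def count_foci(img, threshold=0.5, min_pixels=4):
    """Count discrete foci: 4-connected components of above-threshold
    pixels, discarding components smaller than min_pixels as noise."""
    mask = img > threshold
    visited = np.zeros(mask.shape, dtype=bool)
    rows, cols = mask.shape
    n_foci = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # Breadth-first flood fill over one connected component
                size, queue = 0, deque([(r, c)])
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if size >= min_pixels:
                    n_foci += 1
    return n_foci

# Synthetic "nucleus": two bright foci plus one stray hot pixel.
nucleus = np.zeros((32, 32))
nucleus[5:8, 5:8] = 1.0      # focus 1 (9 pixels)
nucleus[20:23, 20:24] = 1.0  # focus 2 (12 pixels)
nucleus[15, 15] = 1.0        # single hot pixel, rejected by size filter
print(count_foci(nucleus))   # 2
```

The size filter mimics the common practice of rejecting single-pixel noise so that only genuine foci contribute to the dose-response count.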
Further, removal of γH2AX by redistribution involving histone exchange with H2A.Z has been implicated7,8. Importantly, the quantitative analysis of γH2AX foci has led to a wide range of applications in medical and nuclear research. Here, we demonstrate the most commonly used immunofluorescence method for evaluation of initial DNA damage by detection and quantitation of γH2AX foci in γ-irradiated adherent human keratinocytes9. Medicine, Issue 38, H2AX, DNA double-strand break, DNA damage, chromatin modification, repair, ionising radiation Identification of Protein Interaction Partners in Mammalian Cells Using SILAC-immunoprecipitation Quantitative Proteomics Authors: Edward Emmott, Ian Goodfellow. Institutions: University of Cambridge. Quantitative proteomics combined with immuno-affinity purification, SILAC immunoprecipitation, represent a powerful means for the discovery of novel protein:protein interactions. By allowing the accurate relative quantification of protein abundance in both control and test samples, true interactions may be easily distinguished from experimental contaminants. Low affinity interactions can be preserved through the use of less-stringent buffer conditions and remain readily identifiable. This protocol discusses the labeling of tissue culture cells with stable isotope labeled amino acids, transfection and immunoprecipitation of an affinity tagged protein of interest, followed by the preparation for submission to a mass spectrometry facility. This protocol then discusses how to analyze and interpret the data returned from the mass spectrometer in order to identify cellular partners interacting with a protein of interest. As an example this technique is applied to identify proteins binding to the eukaryotic translation initiation factors: eIF4AI and eIF4AII. 
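The core idea of the SILAC analysis above — distinguishing true interactors from bead-binding contaminants by their heavy/light enrichment — can be sketched in a few lines. The protein names, intensities, and cutoff below are invented for illustration only, not taken from the study.

```python
import math

# Hypothetical (protein, heavy, light) intensity triples from a forward
# SILAC pulldown: bait sample grown in heavy medium, control in light.
quant = [
    ("eIF4AI-bait",         9.0e6, 1.0e5),
    ("candidate-partner",   4.0e6, 2.0e5),
    ("keratin-contaminant", 1.1e6, 1.0e6),
    ("tubulin-background",  8.0e5, 9.0e5),
]

def likely_interactors(rows, log2_cutoff=2.0):
    """Keep proteins enriched in the bait pulldown over the control.

    Bead-binding contaminants appear equally in both samples
    (H/L ~ 1, log2 ~ 0); genuine partners co-purify with the bait
    and show strong heavy enrichment.
    """
    hits = []
    for name, heavy, light in rows:
        log2_ratio = math.log2(heavy / light)
        if log2_ratio >= log2_cutoff:
            hits.append((name, round(log2_ratio, 2)))
    return sorted(hits, key=lambda h: -h[1])

for name, ratio in likely_interactors(quant):
    print(f"{name}: log2(H/L) = {ratio}")
```

In practice a label-swapped replicate is often used as well, so that true partners flip sign with the labels while contaminants stay near zero in both experiments.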
Biochemistry, Issue 89, mass spectrometry, tissue culture techniques, isotope labeling, SILAC, Stable Isotope Labeling of Amino Acids in Cell Culture, proteomics, Interactomics, immunoprecipitation, pulldown, eIF4A, GFP, nanotrap, orbitrap Genetic Studies of Human DNA Repair Proteins Using Yeast as a Model System Authors: Monika Aggarwal, Robert M. Brosh Jr.. Institutions: National Institute on Aging, NIH. Understanding the roles of human DNA repair proteins in genetic pathways is a formidable challenge to many researchers. Genetic studies in mammalian systems have been limited due to the lack of readily available tools including defined mutant genetic cell lines, regulatory expression systems, and appropriate selectable markers. To circumvent these difficulties, model genetic systems in lower eukaryotes have become an attractive choice for the study of functionally conserved DNA repair proteins and pathways. We have developed a model yeast system to study the poorly defined genetic functions of the Werner syndrome helicase-nuclease (WRN) in nucleic acid metabolism. Cellular phenotypes associated with defined genetic mutant backgrounds can be investigated to clarify the cellular and molecular functions of WRN through its catalytic activities and protein interactions. The human WRN gene and associated variants, cloned into DNA plasmids for expression in yeast, can be placed under the control of a regulatory plasmid element. The expression construct can then be transformed into the appropriate yeast mutant background, and genetic function assayed by a variety of methodologies. Using this approach, we determined that WRN, like its related RecQ family members BLM and Sgs1, operates in a Top3-dependent pathway that is likely to be important for genomic stability. This is described in our recent publication [1] at www.impactaging.com. Detailed methods of specific assays for genetic complementation studies in yeast are provided in this paper. 
Microbiology, Issue 37, Werner syndrome, helicase, topoisomerase, RecQ, Bloom's syndrome, Sgs1, genomic instability, genetics, DNA repair, yeast
Ars Reminiscendi: Mind and Memory in Renaissance Culture The Art of Memory in Renaissance scholarship was, for many years, confined to a footnote in classical rhetoric, until Frances Yates’s groundbreaking study of 1966 argued for its considerable influence on hermetic philosophy and literature. Over the last few decades, another shift in scholarship has occurred that goes well beyond Yates’s conceptualization of memory as an occult and occulted phenomenon in the history of ideas. Recent studies suggest memory to be less a theme or idea than the prevailing episteme, whose discourses, practices, and mentations produce and reproduce Renaissance culture. Humanism’s project of recovering the past by retrieving and reconstructing textuality privileges recollection as a mode of epistemological engagement with the world, as a means of subjective and collective identity formation, and as an organ for achieving ethical goals. For that reason, memory finds itself involved in the passage to modernity, when its ascendancy is challenged by the rise of seventeenth-century science and the fall of rhetoric, the emergence of the European nation state, and the explosion of the printing press and book technologies. Acknowledging this new direction in scholarship, this volume seeks to trace the plurality and complexity of memory’s cultural work throughout the English and Continental Renaissance. Among the thinkers and writers to receive attention are Thomas Hoby, Conrad Gesner, Erasmus, Conrad Celtis, Johann Sturm, Machiavelli, Jehan du Pré, Spenser, Robert Hooke, Milton, Sebastian Münster, and Shakespeare. A long critical and historical afterword extends the historical contexts around the contributions and provides an overview of the materials central to the field, as well as a sense of the field’s future development. Donald Beecher and Grant Williams teach Renaissance literature and culture in the Department of English at Carleton University in Ottawa.
Measuring the magnetic fields on the hottest planets in the galaxy June 19, 2017 It is now possible to measure the magnetic field strengths of the hottest planets in the galaxy, new research has shown. Studying a class of planets known as 'hot Jupiters', experts from Newcastle University, UK, have shown that the planets' magnetic fields are responsible for the unusual behaviour of the atmospheric winds that move around them. Instead of moving in an eastward direction, as had always been assumed, the winds were observed to vary from eastward to westward on the hot planet HAT-P-7b. Using this observation, Dr Tamara Rogers, from Newcastle University, was able to estimate the magnetic field strength of this far-off planet. Publishing her findings this month in the leading academic journal Nature Astronomy, Dr Rogers says this new understanding of the magnetic fields of these far-distant planets will help astronomers understand their formation, size and migration paths and ultimately help us understand the formation and evolution of our own solar system. "The extreme temperature of these unusual planets causes metals such as lithium, sodium and potassium to become ionized and this allows the magnetic field to be coupled to the atmospheric winds," explains Dr Rogers, who is based in the School of Mathematics and Statistics at Newcastle University. "These magnetic forces are able to then disrupt the strong eastward winds, leading to variable and even oppositely directed winds. This then allowed us to estimate the magnetic field strength of the planet." The "roasted" planets Modern astronomical research investigates not just stars and galaxies but also the planets around distant stars, termed "exoplanets", often thousands of light years from Earth. The best studied of these exoplanets are called hot Jupiters - Jupiter-sized planets that are very close to their home stars.
Because of their size and temperature, hot Jupiters are an extreme class of planets which test modern theories about gas dynamics. In December 2016, observations were made by researchers at Warwick University that implied variable winds on HAT-P-7b. HAT-P-7b is nearly 40 percent larger than our own Jupiter and orbits its star every couple of days. It is so close that its dayside temperature may be up to 2500 C with a night side temperature of 1400 C. "Astronomers were able to trace the brightest point – the 'hot spot' – in the planet's atmosphere," explains Dr Rogers. "The extreme day-night temperature difference drives strong eastward winds in the atmosphere and shifts the hot spot away from the point directly beneath the star on the dayside. "However, we saw this hot spot shift significantly over time – even ending up on the west side of the sub-stellar point. This shows that the winds are also varying significantly and even completely changing direction." Journal reference: T. M. Rogers. Constraints on the magnetic field strength of HAT-P-7 b and other hot giant exoplanets, Nature Astronomy (2017). DOI: 10.1038/s41550-017-0131
Patients Seeking Health News On Internet More Likely To Receive Latest Treatments A new analysis finds that when colorectal cancer patients seek out health information from the Internet and news media, they are more likely to be aware of and receive the latest treatments for their disease. Patients who sought information about treatments for colorectal cancer were 2.83 times more likely to have heard about targeted therapies and 3.22 times more likely to have received targeted therapies than people who did not seek information. The study indicates that patients can influence their own treatment, in some cases in inappropriate ways. In their review, authors led by Stacy Gray, M.D. of the Dana-Farber Cancer Institute in Boston note that in the last several decades, patients have become more involved in their health care as patient autonomy has become increasingly important. That change has been accompanied by unprecedented growth in the amount of health information available to patients. Studies show nearly four out of ten cancer patients seek cancer information on the internet. But the authors say it is unclear how these phenomena influence a cancer patient's treatment. Dr. Gray and colleagues from the NCI Center of Excellence in Cancer Communication Research at the University of Pennsylvania Annenberg School designed a study to examine the relationship between information-seeking among 633 colorectal cancer patients chosen at random from the Pennsylvania Cancer Registry and the use of novel agents for the disease. The investigators focused on the use of the targeted therapies bevacizumab (Avastin) and cetuximab (Erbitux) because of these drugs' clinical importance, significant media coverage, and recent approval by the U.S. Food and Drug Administration.
Dr. Gray and her team hypothesized that there would be a relationship between information seeking and awareness of these targeted therapies among colorectal cancer patients. They also hypothesized that patients who seek information may ask their physicians about these targeted therapies and may be more likely to receive them than patients who do not seek information. The researchers found that high levels of information seeking were strongly associated with both awareness of and receiving treatment using targeted therapies. Patients who sought information about treatments for colorectal cancer were 2.83 times more likely to have heard about targeted therapies and 3.22 times more likely to have received targeted therapies than people who did not seek information. These associations were present for patients with advanced disease, where use of targeted therapies is FDA approved, as well as for patients with early stages of the disease, where their use is not FDA approved. "These findings emphasize the importance of exploring patient influence on physician prescribing patterns and understanding the impact of information seeking on cancer outcomes," the authors write. The above post is reprinted from materials provided by the American Cancer Society. Note: Materials may be edited for content and length. Stacy W. Gray, Katrina Armstrong, Angela DeMichele, J. Sanford Schwartz, and Robert C. Hornik. Colon cancer patient information seeking and the adoption of targeted therapy for on-label and off-label indications. Cancer, Published Online: February 23, 2009; Print Issue Date: April 1, 2009. DOI: 10.1002/cncr.24186. American Cancer Society. "Patients Seeking Health News On Internet More Likely To Receive Latest Treatments." ScienceDaily, 4 March 2009. <www.sciencedaily.com/releases/2009/02/090223083146.htm>.
Automated Gene Classification using Nonnegative Matrix Factorization on Biomedical Literature Kevin Erich Heinrich (University of Tennessee, Knoxville); Michael W. Berry; Ramin Homayouni; Jens Gregor; Michael Thomason Understanding functional gene relationships is a challenging problem for biological applications. High-throughput technologies such as DNA microarrays have inundated biologists with a wealth of information; however, processing that information remains problematic. To help with this problem, researchers have begun applying text mining techniques to the biological literature. This work extends previous work based on Latent Semantic Indexing (LSI) by examining Nonnegative Matrix Factorization (NMF). Whereas LSI incorporates the singular value decomposition (SVD) to approximate data in a dense, mixed-sign space, NMF produces a parts-based factorization that is directly interpretable. This space can, in theory, be used to augment existing ontologies and annotations by identifying themes within the literature. Of course, performing NMF does not come without a price—namely, the large number of parameters. This work attempts to analyze the effects of some of the NMF parameters on both convergence and labeling accuracy. Since there is a dearth of automated label evaluation techniques as well as “gold standard” hierarchies, a method to produce “correct” trees is proposed as well as a technique to label trees and to evaluate those labels. Heinrich, Kevin Erich, "Automated Gene Classification using Nonnegative Matrix Factorization on Biomedical Literature." PhD diss., University of Tennessee, 2007. Computer Sciences Commons
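The parts-based factorization described above can be sketched with the standard Lee-Seung multiplicative updates for the Frobenius objective ||A - WH||. This is a generic illustration of the technique, not the dissertation's implementation (which also studies initialization and labeling parameters); the toy matrix and iteration count are arbitrary.

```python
import numpy as np

def nmf(A, k, n_iter=500, eps=1e-9, seed=0):
    """Factor a nonnegative matrix A (m x n) as W (m x k) @ H (k x n)
    using multiplicative updates that minimize ||A - WH||_F.
    eps guards against division by zero; entries stay nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

# Toy term-document matrix with two obvious "topics"
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
W, H = nmf(A, k=2)
residual = np.linalg.norm(A - W @ H)
```

Because W and H stay nonnegative, each column of W can be read directly as an additive "topic" over terms, which is what makes the factorization interpretable for labeling, unlike the mixed-sign factors SVD produces.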
Increasing women’s voice through agriculture By Chelsea Graham, P4P, World Food Program Throughout the pilot phase, P4P has focused on assisting women farmers to benefit economically from their work, gaining confidence and voice in their communities and homes. Mazouma Sanou, a farmer from Burkina Faso, has first-hand experience of these benefits as well as the challenges still facing women farmers. “P4P started as a gender conscious project,” says P4P gender consultant Batamaka Somé’, during the 2014 P4P Annual Consultation. From its inception, he says, P4P faced many challenges to women’s empowerment, such as women’s limited access to inputs and credit, their unpaid contributions to farming, and the male-held control of household production and marketing. To address these challenges, P4P’s first step was to create realistic goals, and a framework within which these could be achieved. This was documented in a gender strategy. The development of the strategy was led by the Agricultural Learning Impacts Network (ALINe), and included extensive field research and literature review, which provided a nuanced and culturally specific view of women in agriculture. Today, a number of P4P’s targets related to gender have been met. Women’s participation in P4P has tripled since the beginning of the pilot, and some 200,000 women have been trained in various capacities. Skills and income gained through P4P have boosted women’s confidence, enabling them to participate and engage more in markets. However, many challenges remain to further assist women to access markets and benefit economically from their work. One woman’s experience Mazouma Sanou represented the Burkina Faso cooperative union UPPA-Houet at P4P’s Annual Consultation in Rome. Copyright: WFP/Ahnna Gudmunds Mazouma Sanou is a 43-year old woman farmer from Burkina Faso. She is married and the mother of three children. Mazouma is a member of a P4P-supported cooperative union called UPPA-Houet. 
Today, the union has 20,500 members, 11,000 of whom are women. Mazouma contributes maize, sorghum, and niébé (cowpeas) to her union’s sales to WFP. Mazouma also works as a field monitor paid by WFP and OXFAM to coach 25 rural women’s groups affiliated to her union, assisting them to produce and earn more. She works as an intermediary between groups and partners, and assists women to better organize their groups. She also supports them throughout the production process, making sure their products meet standards and working with them to improve their marketing and gain access to credit. Changing family and community dynamics P4P has contributed to an improvement in family dynamics by increasing women’s economic power through P4P-supported sales, finding that with money in their hands, women have more voice in their communities and homes. P4P and its partners also carry out gender sensitization training for both men and women, illustrating the tangible benefits which can be realized by households when women participate fully in farming activities. Mazouma says that since their involvement in P4P, many women are able to make family decisions in collaboration with their husbands. She states that this has made income management easier, allowing families to plan for the possibility of unexpected illness, and to set aside money for enrolling their children in school. Additionally, Mazouma has seen great changes at the community level. She says that thanks to their increased economic power, women are now more involved in decision-making and planning both in the cooperative union and their communities. Challenges ahead While Mazouma says that gender dynamics are certainly changing for the better in her community, she acknowledges that there are still challenges ahead. She says that certain men do resist women’s increasing voice, and that she often works with women to discuss family life and helps them negotiate with their husbands. “Women have to help educate their husbands. 
Dialogue can certainly change attitudes, but you can’t command people to do things,” she says. “I ask the woman ‘if you get that money, what will you do,’ and she says ‘help the children,’ so I say ‘your husband can take another wife but your children can’t have another mother. Your children can really benefit from this.’” Many women in Mazouma’s farmers’ group have benefited economically from their work with P4P. Despite this, while over 50% of the UPPA-Houet’s members are women, only 32% of the farmers’ organization’s sales to WFP were supplied by women, putting just 22% of the union’s sales directly into women’s hands. The five-year pilot illustrated that progress has been made; however, continued efforts are required to ensure that more women benefit economically from their work with P4P. When asked about the future of her cooperative, Mazouma says, “from the very start P4P has been a school where we have learned how to improve our work, how to improve quality. I think we need more training, so women can help women train each other and develop their work.” Though women such as Mazouma have received benefits from their participation in P4P, there is still a long way to go. Change at a community and household level is slow, and many of the deep-seated cultural and social challenges identified at the beginning of the project have still not been completely overcome. However, the progress made so far is an indicator of the potential impact of culturally specific, flexible and nuanced gender programming. “A great deal of work still needs to be done for gender equity to be fully realized,” says WFP gender advisor Veronique Sainte-Luce. “But P4P has been identified as something valuable, something positive, which has made a difference in women’s lives.” For more information go to www.wfp.org/purchase-progress Follow WFP on Twitter agriculture, Burkina Faso, food security, Oxfam,
What You Need to Know About Smoking and Oral Health by Donna Pleis Lung-based diseases – lung cancer, emphysema, chronic bronchitis – usually come to mind when thinking about the health consequences of smoking. But because cigarette smoking can affect nearly every organ in your body, it's not surprising that your oral health can take a hit too. Here's what you need to know about smoking and oral health in order to stay healthy. The Centers for Disease Control and Prevention (CDC) reports that smoking is the leading preventable cause of death and disease in the United States. So whether you smoke cigarettes, cigars or a smokeless tobacco product, the fact remains: There really is no healthy level of exposure in a tobacco product, even second-hand. Your risk for tobacco-related diseases, including those affecting your oral health, depends on how long you've smoked and the number of cigarettes smoked each day. Oral cancer involves the gradual mutation of healthy cells in your mouth and can occur a number of ways. Smoking plays a significant role in the many cases of oral cancer diagnosed each year, according to the Oral Cancer Foundation. A University of California study showed that 8 out of 10 patients with oral cancer were smokers. Whenever you inhale, the harmful chemicals in tobacco smoke first pass through your mouth and throat before reaching your lungs. Through time and repeat exposure, these chemicals cause changes to your oral cavity, which can lead to oral cancer. Nevertheless, this is a preventable disease. By avoiding smoking and other high-risk behaviors and seeing a dentist regularly for routine checkups, you can keep oral cancer out of your future. Periodontal disease, an infection of the gums and bone surrounding the teeth, comes from buildups of harmful oral bacteria and can lead to tooth loss. But bacteria are not the only culprits when it comes to gum disease.
The CDC reports that smokers have more than twice the risk for gum disease compared to non-smokers. Smoking interferes with your immune system, making it difficult for your body to fight off conditions like gum infections. Periodontal treatment may not even have the same successful outcome for a smoker as for a nonsmoker, because smoking makes it harder for your gums to heal. Bad Breath and Stained Teeth Besides the more serious risks of oral cancer and gum disease, according to the American Dental Association (ADA) Mouth Healthy site, smoking can also affect your sense of taste and smell and delay your recovery after a tooth extraction or other dental procedure. In addition, the tar from cigarette smoke stains your teeth, causes bad breath and can discolor your tongue. The only way to remove these stains is with a professional cleaning in the dentist's office. Keeping Up with Home Care The nicotine in cigarettes is an extremely addictive substance; this is why breaking a smoking habit isn't easy. However, if you are a smoker, quitting will be an important step in improving your overall health. Because quitting can be so challenging, most people need support. Don't hesitate to talk to your dental professional about your desire to stop. As you develop a course of action to help you quit smoking, keeping your mouth and teeth as clean as possible can be daily encouragement to push for perfect health. Brushing often with fluoride toothpaste and flossing daily prevents tooth decay and periodontal disease, and if you battle tartar buildup and other forms of stains, try using a toothpaste like Colgate® Tartar Protection Whitening. Now that you know the dangers smoking poses to oral health, remember that it's never too late to start the process of a better and healthier lifestyle. Did you know that smoking tobacco products can make gum disease get worse faster? Studies have shown that smokers were three to six times more likely to suffer from advanced gum disease than nonsmokers. Sometimes things we do to look "cool" can also be a health hazard, like oral piercings and smoking. Oral infections are common, but they can also contribute to cracked or chipped teeth. Oral piercings can also lead to gum recession, which can cause teeth to come loose and fall out. How Is Tobacco a Threat to Oral Health? Tobacco's greatest threat to your health may be its association with oral cancer. The American Cancer Society reports that: About 90 percent of people with mouth cancer and some types of throat cancer have used tobacco. The risk of developing these cancers increases as people smoke or chew more often or for a longer time. Smokers are six times more likely than nonsmokers to develop these cancers. About 37 percent of patients who continue to smoke after cancer treatment will develop second cancers of the mouth, throat or larynx, compared with only 6 percent of those who quit smoking. Smokeless tobacco has been linked to cancers of the cheek, gums and inner surface of the lips, and increases the risk of these cancers by nearly 50 times.
William & Mary Law Review > Vol. 58 (2016-2017) David Fagundes Buying Happiness: Property, Acquisition, and Subjective Well-Being Acquiring property is a central part of the modern American vision of the good life. The assumption that accruing more land or chattels will make us better off is so central to the contemporary preoccupation with acquisition that it typically goes without saying. Yet an increasing body of evidence from psychologists and economists who study hedonics—the science of happiness—yields the surprising conclusion that getting and having property does not actually increase our subjective well-being. In fact, it might even decrease it. While scholars have integrated the insights of hedonics into other areas of law, no scholarship has yet done so with respect to property. This Article maps this novel territory in three steps. In Part I, it summarizes recent findings on the highly conflicted effect of the acquisition of both land and chattels on subjective well-being. In Part II, it explores the implications of these findings for four leading normative theories of property law, showing that in different ways the evidence produced by happiness studies undermines the core empirical propositions on which these theories rest. Part II also explores the potential of subjective well-being as a framework for assessing the optimal regulation of ownership. Finally, Part III investigates how looking at property through the lens of happiness can help us see this ancient body of law in a new light. Evidence from happiness studies casts doubt on some policies (state promotion of homeownership), while suggesting the appeal of others (tax incentives and disincentives designed to nudge acquisition in the direction of greater subjective well-being). 
Happiness analysis also suggests promising new insights about related aspects of property, including law’s attempts to prevent dispossession, the proper allocation of public versus private land, and the nascent sharing economy. This Article concludes by showing why these findings actually tell an optimistic, if nonobvious, story about the nature and future of property. David Fagundes, Buying Happiness: Property, Acquisition, and Subjective Well-Being, 58 Wm. & Mary L. Rev. 1851 (2017), http://scholarship.law.wm.edu/wmlr/vol58/iss6/3. ISSN: 0043-5589 (print), 2374-8524 (online)
Substituting Brown Rice for White Rice: Effect on Diabetes Risk Factors in India. Hu, Frank B.; Viswanathan, Mohan. Harvard University, Boston, MA, United States. Currently, India has the largest absolute number of people with diabetes in the world, and the escalation in diabetes incidence is occurring as global free trade continues to fuel rapid economic and nutrition transitions, especially in urban settings. These transitions are accompanied by a shift in dietary consumption towards more highly refined carbohydrates, fats, and animal products. Evidence indicates that consumption of whole grains can decrease diabetes risk by improving glycemic control. The proof-of-concept trial proposed herein will evaluate the efficacy of substituting brown rice, a whole grain, for white rice in Chennai, India, on biomarkers of diabetes risk, and will also obtain glycemic index values of local rice and other staples. The long-term goal is to develop a multi-center nutritional intervention in India, China, Africa and Latin America to create sustainable diabetes prevention strategies through improving carbohydrate quality. We propose to conduct a 4-month randomized parallel-group intervention trial to evaluate the effects of substituting brown rice for white rice in two meals per day, six days per week, on biomarkers for diabetes risk among adults in Chennai, India, who are at high risk for the development of diabetes. The first aim of this research is to determine the glycemic index of different local rice varieties (brown, red, and fully polished white) and preparations (regular or parboiled).
The second aim is to determine the effects of brown rice substitution on fasting biomarker measurements of glucose metabolism (i.e., glucose, insulin, hemoglobin A1c, homeostasis model assessment for insulin resistance), dyslipidemia (i.e., triglycerides, total cholesterol, LDL-cholesterol, and HDL-cholesterol), and inflammation (i.e., C-reactive protein). This trial will also demonstrate the feasibility and cultural appropriateness of this type of intervention in the local environment to which results may be translated in the future. This study will have important implications for policy and help local governments develop national nutrition strategies for diabetes prevention, such as widespread education campaigns about the health benefits of whole grains and school lunch programs that serve whole grains. Such policies could also encourage ministries of agriculture to support production of whole grains, thereby improving accessibility and regulating costs. This work is intended to be part of a larger global initiative to identify local, feasible and sustainable dietary interventions to reduce diabetes risk in countries experiencing epidemiologic transition by improving the carbohydrate quality of staple foods. Initiatives have already been launched by our group in China, and are planned for Tanzania, Nigeria, Puerto Rico and Mexico. Currently, India has the largest absolute number of people with diabetes in the world, and evidence indicates that consumption of whole grains can decrease diabetes risk by improving glycemic control. Our study will evaluate the efficacy of substituting brown rice, a whole grain, for white rice in Chennai, India, on biomarkers of diabetes risk. The ultimate goal of this research is to provide data for use in designing a global dietary intervention study aimed at reducing diabetes risk through simple, culturally appropriate, feasible and sustainable dietary changes.
Fogarty International Center (FIC) Small Research Grants (R03) 5R03TW008726-03 Special Emphasis Panel (ZRG1-ICP3-B (50)) Liu, Xingzhu R03 TW Substituting Brown Rice for White Rice: Effect on Diabetes Risk Factors in India Hu, Frank B.; Viswanathan, Mohan / Harvard University $59,895 Wedick, Nicole M; Sudha, Vasudevan; Spiegelman, Donna et al. (2015) Study design and methods for a randomized crossover trial substituting brown rice for white rice on diabetes risk factors in India. Int J Food Sci Nutr 66:797-804 Mattei, Josiemer; Malik, Vasanti; Wedick, Nicole M et al. (2015) Reducing the global burden of type 2 diabetes by improving the quality of staple foods: The Global Nutrition and Epidemiologic Transition Initiative. Global Health 11:23 Farvid, Maryam S; Qi, Lu; Hu, Frank B et al. (2014) Phobic anxiety symptom scores and incidence of type 2 diabetes in US men and women. Brain Behav Immun 36:176-82 Mohan, Viswanathan; Spiegelman, Donna; Sudha, Vasudevan et al. (2014) Effect of brown rice, white rice, and brown rice with legumes on blood glucose and insulin responses in overweight Asian Indians: a randomized controlled trial. Diabetes Technol Ther 16:317-25 Sudha, Vasudevan; Spiegelman, Donna; Hong, Biling et al. (2013) Consumer Acceptance and Preference Study (CAPS) on brown and undermilled Indian rice varieties in Chennai, India. J Am Coll Nutr 32:50-7 Shobana, S; Malleshi, N G; Sudha, V et al. (2011) Nutritional and sensory profile of two Indian rice varieties with different degrees of polishing. Int J Food Sci Nutr 62:800-10
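The glycemic index values mentioned in the trial's first aim are conventionally computed as the ratio of incremental areas under the postprandial glucose response curves for the test food and a reference (usually glucose or white bread). As a rough illustration of that standard calculation — not part of the study's own protocol, and with invented sample numbers — the arithmetic can be sketched as:

```python
def incremental_auc(times_min, glucose_mmol_l, baseline=None):
    """Incremental area under the glucose curve (iAUC) by the trapezoidal
    rule, counting only area above the fasting baseline."""
    base = glucose_mmol_l[0] if baseline is None else baseline
    inc = [max(g - base, 0.0) for g in glucose_mmol_l]
    return sum((inc[i] + inc[i + 1]) / 2.0 * (times_min[i + 1] - times_min[i])
               for i in range(len(times_min) - 1))

def glycemic_index(test_iauc, reference_iauc):
    """GI = 100 x iAUC(test food) / iAUC(reference)."""
    return 100.0 * test_iauc / reference_iauc

# Hypothetical 2-hour responses sampled at 0, 30, 60 and 120 minutes:
t = [0, 30, 60, 120]
rice_iauc = incremental_auc(t, [5.0, 7.0, 8.0, 6.0])      # test food
glucose_iauc = incremental_auc(t, [5.0, 9.0, 10.0, 6.0])  # reference
print(glycemic_index(rice_iauc, glucose_iauc))  # 60.0
```

In practice GI protocols average the ratio over multiple subjects and use standardized carbohydrate portions; the sketch shows only the per-curve arithmetic.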
Thank you to the Helen Ley Charitable Trust People with severe MS will benefit from new bursaries worth £150,000 which will fund the training of specialist nurses and allied health professionals, enabling them to offer the best care possible. The finance behind this initiative comes from the Helen Ley Charitable Trust, an organisation which was formed 42 years ago by the Coventry MS branch with the assistance of local business people. The Helen Ley Trust was originally set up to create a care home at which people affected by Multiple Sclerosis could enjoy a holiday, giving their carers at home a much-needed break. It was the first respite home in the UK that specialised in the care of the most complex and severe cases of MS. This could not have happened without the support, volunteering commitment and fundraising within the local community. “The trustees wish to thank everyone who put in such fantastic hard work and assistance over the past 40 years,” says Ann Crossley, chair and founder member of the Helen Ley Trust. In recent years the ownership of the Helen Ley Centre passed from the MS Society to Castell Froma, a local charity that offers long-term residential care and respite for people with neurological conditions as well as MS. This change in direction enabled the release of funds to power the new bursary, which is being organised by the MS Trust and will ensure that more people with severe MS can access the high-quality care they deserve. “For 42 years we have been active in respite care for people with severe and complex MS,” says Mrs Crossley. “This care continues to a degree at Helen Ley Care Centre. But we feel that the time has now arrived to make more positive use of funds, specifically targeting severe and complex MS requirements.
We welcome the opportunity to create these new bursaries and fund other training events organised by the MS Trust.” Accepting a cheque on behalf of the MS Trust, Pam Macfarlane, chief executive, said “The MS Trust has long recognised the commitment and dedication of the volunteers, fundraisers and Trustees of the Helen Ley Charitable Trust. Their generosity has made a tremendous difference for many families living with MS. We are delighted that they have chosen to work with us to establish bursaries for health professionals to ensure they can access training and education to improve the care of people with severe MS. For over 20 years the MS Trust has been the primary provider of MS education for those working with people affected by MS and we believe this is a wonderful way of preserving the legacy of the Helen Ley Charitable Trust.”
Plant water use efficiency over geological time - evolution of leaf stomata configurations affecting plant gas exchange. Shmuel Assouline, Dani Or. PUBLISHED: 01-01-2013. Plant gas exchange is a key process shaping global hydrological and carbon cycles and is often characterized by plant water use efficiency (WUE - the ratio of CO2 gain to water vapor loss). The plant fossil record suggests that plant adaptation to changing atmospheric CO2 involved correlated evolution of stomata density (d) and size (s), and related maximal aperture, amax. We interpreted the fossil record of the correlated evolution of s and d during the Phanerozoic to quantify impacts on gas conductance affecting plant transpiration, E, and CO2 uptake, A, independently, and consequently on plant WUE. A shift in stomata configuration from large s-low d to small s-high d in response to decreasing atmospheric CO2 resulted in large changes in plant gas exchange characteristics. The relationships between gas conductance, gws, A and E and maximal relative transpiring leaf area (amax · d) exhibited hysteretic-like behavior. The new WUE trend derived from independent estimates of A and E differs from established WUE-CO2 trends for atmospheric CO2 concentrations exceeding 1,200 ppm. In contrast with a nearly linear decrease in WUE with decreasing CO2 obtained by standard methods, the newly estimated WUE trend exhibits remarkably stable values for an extended geologic period during which atmospheric CO2 dropped from 3,500 to 1,200 ppm. Pending additional tests, the findings may affect projected impacts of increased atmospheric CO2 on components of the global hydrological cycle.

Design and Operation of a Continuous 13C and 15N Labeling Chamber for Uniform or Differential, Metabolic and Structural, Plant Isotope Labeling. Authors: Jennifer L Soong, Dan Reuss, Colin Pinney, Ty Boyack, Michelle L Haddix, Catherine E Stewart, M. Francesca Cotrufo.
Published: 01-16-2014 JoVE Environment Tracing rare stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes; from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes such as 13C with 15N, 18O or 2H has the potential to reveal even more information about complex stoichiometric relationships during biogeochemical transformations. Isotope labeled plant material has been used in various studies of litter decomposition and soil organic matter formation1-4. From these and other studies, however, it has become apparent that structural components of plant material behave differently than metabolic components (i.e. leachable low molecular weight compounds) in terms of microbial utilization and long-term carbon storage5-7. The ability to study structural and metabolic components separately provides a powerful new tool for advancing the forefront of ecosystem biogeochemical studies. Here we describe a method for producing 13C and 15N labeled plant material that is either uniformly labeled throughout the plant or differentially labeled in structural and metabolic plant components. Here, we present the construction and operation of a continuous 13C and 15N labeling chamber that can be modified to meet various research needs. Uniformly labeled plant material is produced by continuous labeling from seedling to harvest, while differential labeling is achieved by removing the growing plants from the chamber weeks prior to harvest. Representative results from growing Andropogon gerardii Kaw demonstrate the system's ability to efficiently label plant material at the targeted levels. 
Through this method we have produced plant material with a 4.4 atom% 13C and 6.7 atom% 15N uniform plant label, or material that is differentially labeled by up to 1.29 atom% 13C and 0.56 atom% 15N in its metabolic and structural components (hot water extractable and hot water residual components, respectively). Challenges lie in maintaining proper temperature, humidity, CO2 concentration, and light levels in an airtight 13C-CO2 atmosphere for successful plant production. This chamber description represents a useful research tool to effectively produce uniformly or differentially multi-isotope labeled plant material for use in experiments on ecosystem biogeochemical cycling.

Lignin Down-regulation of Zea mays via dsRNAi and Klason Lignin Analysis. Authors: Sang-Hyuck Park, Rebecca Garlock Ong, Chuansheng Mei, Mariam Sticklen. Institutions: University of Arizona, Michigan State University, The Institute for Advanced Learning and Research, Michigan State University. To facilitate the use of lignocellulosic biomass as an alternative bioenergy resource, during biological conversion processes a pretreatment step is needed to open up the structure of the plant cell wall, increasing the accessibility of the cell wall carbohydrates. Lignin, a polyphenolic material present in many cell wall types, is known to be a significant hindrance to enzyme access. Reduction in lignin content to a level that does not interfere with the structural integrity and defense system of the plant might be a valuable step to reduce the costs of bioethanol production. In this study, we have genetically down-regulated one of the lignin biosynthesis-related genes, cinnamoyl-CoA reductase (ZmCCR1), via a double-stranded RNA interference technique. The ZmCCR1_RNAi construct was integrated into the maize genome using the particle bombardment method.
Transgenic maize plants grew normally as compared to the wild-type control plants, without interference with biomass growth or defense mechanisms, with the exception of brown coloration in the transgenic plants' leaf mid-ribs, husks, and stems. The microscopic analyses, in conjunction with the histological assay, revealed that the leaf sclerenchyma fibers were thinned but the structure and size of other major vascular system components were not altered. The lignin content in the transgenic maize was reduced by 7-8.7%, the crystalline cellulose content was increased in response to lignin reduction, and hemicelluloses remained unchanged. These analyses may indicate that carbon flow was shifted from lignin biosynthesis to cellulose biosynthesis. This article delineates the procedures used to down-regulate the lignin content in maize via RNAi technology, and the cell wall compositional analyses used to verify the effect of the modifications on the cell wall structure. Bioengineering, Issue 89, Zea mays, cinnamoyl-CoA reductase (CCR), dsRNAi, Klason lignin measurement, cell wall carbohydrate analysis, gas chromatography (GC).

Assessing Stomatal Response to Live Bacterial Cells using Whole Leaf Imaging. Authors: Reejana Chitrakar, Maeli Melotto. Institutions: University of Texas at Arlington. Stomata are natural openings in the plant epidermis responsible for gas exchange between the plant interior and the environment. They are formed by a pair of guard cells, which are able to close the stomatal pore in response to a number of external factors including light intensity, carbon dioxide concentration, and relative humidity (RH). The stomatal pore is also the main route for pathogen entry into leaves, a crucial step for disease development. Recent studies have unveiled that closure of the pore is effective in minimizing bacterial disease development in Arabidopsis plants; an integral part of plant innate immunity.
Previously, we have used epidermal peels to assess stomatal response to live bacteria (Melotto et al. 2006); however, maintaining favorable environmental conditions for both plant epidermal peels and bacterial cells has been challenging. Leaf epidermis can be kept alive and healthy in MES buffer (10 mM KCl, 25 mM MES-KOH, pH 6.15) for electrophysiological experiments on guard cells. However, this buffer is not appropriate for preparing bacterial suspensions. On the other hand, bacterial cells can be kept alive in water, which is not suitable for maintaining epidermal peels for long periods of time. When an epidermal peel floats on water, the cells in the peel that are exposed to air dry within 4 hours, limiting the time available to conduct the experiment. An ideal method for assessing the effect of a particular stimulus on guard cells should present minimal interference with stomatal physiology and preserve the natural environment of the plant as much as possible. We therefore developed a new method to assess stomatal response to live bacteria in which leaf wounding and manipulation are greatly minimized, aiming to provide an easily reproducible and reliable stomatal assay. The protocol is based on staining an intact leaf with propidium iodide (PI), incubating the stained leaf with a bacterial suspension, and observing the leaf under a laser scanning confocal microscope. Finally, this method allows for the observation of the same live leaf sample over extended periods of time using conditions that closely mimic the natural conditions under which plants are attacked by pathogens. Plant Biology, Issue 44, Plant innate immunity, propidium iodide staining, biotic and abiotic stress, leaf microscopy, guard cell, stomatal defense, plant defense, Arabidopsis, Pseudomonas syringae.

Tandem High-pressure Freezing and Quick Freeze Substitution of Plant Tissues for Transmission Electron Microscopy. Authors: Krzysztof Bobik, John R. Dunlap, Tessa M. Burch-Smith.
Institutions: University of Tennessee, Knoxville. Since the 1940s transmission electron microscopy (TEM) has been providing biologists with ultra-high resolution images of biological materials. Yet, because of laborious and time-consuming protocols that also demand experience in the preparation of artifact-free samples, TEM is not considered a user-friendly technique. Traditional sample preparation for TEM used chemical fixatives to preserve cellular structures. High-pressure freezing is the cryofixation of biological samples under high pressure to produce very fast cooling rates, thereby restricting ice formation, which is detrimental to the integrity of cellular ultrastructure. High-pressure freezing and freeze substitution are currently the methods of choice for producing the highest quality morphology in resin sections for TEM. These methods minimize the artifacts normally associated with conventional processing for TEM of thin sections. After cryofixation, the frozen water in the sample is replaced with liquid organic solvent at low temperatures, a process called freeze substitution. Freeze substitution is typically carried out over several days in dedicated, costly equipment. A recent innovation allows the process to be completed in three hours, instead of the usual two days. This is typically followed by several more days of sample preparation that includes infiltration and embedding in epoxy resins before sectioning. Here we present a protocol combining high-pressure freezing and quick freeze substitution that enables plant sample fixation to be accomplished within hours. The protocol can readily be adapted for working with other tissues or organisms. Plant tissues are of special concern because of the presence of aerated spaces and water-filled vacuoles that impede ice-free freezing of water.
In addition, the process of chemical fixation is especially long in plants because cell walls impede the penetration of the chemicals deep into the tissues. Plant tissues are therefore particularly challenging, but this protocol is reliable and produces samples of the highest quality. Plant Biology, Issue 92, High-pressure freezing, freeze substitution, transmission electron microscopy, ultrastructure, Nicotiana benthamiana, Arabidopsis thaliana, imaging, cryofixation, dehydration.

In Situ SIMS and IR Spectroscopy of Well-defined Surfaces Prepared by Soft Landing of Mass-selected Ions. Authors: Grant E. Johnson, K. Don Dasitha Gunaratne, Julia Laskin. Institutions: Pacific Northwest National Laboratory. Soft landing of mass-selected ions onto surfaces is a powerful approach for the highly controlled preparation of materials that are inaccessible using conventional synthesis techniques. Coupling soft landing with in situ characterization using secondary ion mass spectrometry (SIMS) and infrared reflection absorption spectroscopy (IRRAS) enables analysis of well-defined surfaces under clean vacuum conditions. The capabilities of three soft-landing instruments constructed in our laboratory are illustrated for the representative system of surface-bound organometallics prepared by soft landing of mass-selected ruthenium tris(bipyridine) dications, [Ru(bpy)3]2+ (bpy = bipyridine), onto carboxylic acid terminated self-assembled monolayer surfaces on gold (COOH-SAMs). In situ time-of-flight (TOF)-SIMS provides insight into the reactivity of the soft-landed ions. In addition, the kinetics of charge reduction, neutralization and desorption occurring on the COOH-SAM both during and after ion soft landing are studied using in situ Fourier transform ion cyclotron resonance (FT-ICR)-SIMS measurements.
In situ IRRAS experiments provide insight into how the structure of organic ligands surrounding metal centers is perturbed through immobilization of organometallic ions on COOH-SAM surfaces by soft landing. Collectively, the three instruments provide complementary information about the chemical composition, reactivity and structure of well-defined species supported on surfaces. Chemistry, Issue 88, soft landing, mass selected ions, electrospray, secondary ion mass spectrometry, infrared spectroscopy, organometallic, catalysis.

High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities. Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein. Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado. Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen for its particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate.
Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics. Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil.

Measuring the Osmotic Water Permeability Coefficient (Pf) of Spherical Cells: Isolated Plant Protoplasts as an Example. Authors: Arava Shatil-Cohen, Hadas Sibony, Xavier Draye, François Chaumont, Nava Moran, Menachem Moshelion. Institutions: The Hebrew University of Jerusalem, Université catholique de Louvain. Studying AQP regulation mechanisms is crucial for the understanding of water relations at both the cellular and the whole plant levels.
Presented here is a simple and very efficient method for the determination of the osmotic water permeability coefficient (Pf) in plant protoplasts, applicable in principle also to other spherical cells such as frog oocytes. The first step of the assay is the isolation of protoplasts from the plant tissue of interest by enzymatic digestion into a chamber with an appropriate isotonic solution. The second step consists of an osmotic challenge assay: protoplasts immobilized on the bottom of the chamber are subjected to a constant perfusion starting with an isotonic solution and followed by a hypotonic solution. The cell swelling is video-recorded. In the third step, the images are processed offline to yield volume changes, and the time course of the volume changes is correlated with the time course of the change in osmolarity of the chamber perfusion medium, using a curve-fitting procedure written in Matlab (the ‘PfFit’) to yield Pf. Plant Biology, Issue 92, Osmotic water permeability coefficient, aquaporins, protoplasts, curve fitting, non-instantaneous osmolarity change, volume change time course.

Experimental Protocol for Manipulating Plant-induced Soil Heterogeneity. Authors: Angela J. Brandt, Gaston A. del Pino, Jean H. Burns. Institutions: Case Western Reserve University. Coexistence theory has often treated environmental heterogeneity as being independent of the community composition; however, biotic feedbacks such as plant-soil feedbacks (PSF) have large effects on plant performance and create environmental heterogeneity that depends on the community composition. Understanding the importance of PSF for plant community assembly necessitates understanding of the role of heterogeneity in PSF, in addition to mean PSF effects. Here, we describe a protocol for manipulating plant-induced soil heterogeneity.
Two example experiments are presented: (1) a field experiment with a 6-patch grid of soils to measure plant population responses and (2) a greenhouse experiment with 2-patch soils to measure individual plant responses. Soils can be collected from the zone of root influence (soils from the rhizosphere and directly adjacent to the rhizosphere) of conspecific and heterospecific plant species in the field. Replicate collections are used to avoid pseudoreplicating soil samples. These soils are then placed into separate patches for heterogeneous treatments or mixed for a homogenized treatment. Care should be taken to ensure that heterogeneous and homogenized treatments experience the same degree of soil disturbance. Plants can then be placed in these soil treatments to determine the effect of plant-induced soil heterogeneity on plant performance. We demonstrate that plant-induced heterogeneity results in different outcomes than predicted by traditional coexistence models, perhaps because of the dynamic nature of these feedbacks. Theory that incorporates environmental heterogeneity influenced by the assembling community, along with additional empirical work, is needed to determine when heterogeneity intrinsic to the assembling community will result in different assembly outcomes compared with heterogeneity extrinsic to the community composition. Environmental Sciences, Issue 85, Coexistence, community assembly, environmental drivers, plant-soil feedback, soil heterogeneity, soil microbial communities, soil patch.

Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites. Authors: Mackenzie J. Denyes, Michèle A. Parisien, Allison Rutter, Barbara A. Zeeb. Institutions: Royal Military College of Canada, Queen's University. The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g.
carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available its Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for physical and chemical characteristics of biochar. Six biochars made from three different feedstocks and at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants, including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals and mercury, as well as nutrients (phosphorous, nitrite, nitrate and ammonium as nitrogen). The protocol also includes the biological testing procedures: earthworm avoidance and germination assays. Based on the quality assurance / quality control (QA/QC) results of blanks, duplicates, standards and reference materials, all methods were determined adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there were few differences among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays.
Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements, or remediation.

Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment

Efficient Agroinfiltration of Plants for High-level Transient Expression of Recombinant Proteins
Authors: Kahlin Leuzinger, Matthew Dent, Jonathan Hurtado, Jake Stahnke, Huafang Lai, Xiaohong Zhou, Qiang Chen. Institutions: Arizona State University.
Mammalian cell culture is the major platform for commercial production of human vaccines and therapeutic proteins. However, it cannot meet the increasing worldwide demand for pharmaceuticals due to its limited scalability and high cost. Plants have been shown to be one of the most promising alternative pharmaceutical production platforms: robust, scalable, low-cost, and safe. The recent development of virus-based vectors has allowed rapid and high-level transient expression of recombinant proteins in plants. To further optimize the utility of the transient expression system, we demonstrate in this study a simple, efficient, and scalable methodology for introducing target-gene-containing Agrobacterium into plant tissue. Our results indicate that agroinfiltration with both syringe and vacuum methods results in the efficient introduction of Agrobacterium into leaves and robust production of two fluorescent proteins: GFP and DsRed. Furthermore, we demonstrate the unique advantages offered by each method. Syringe infiltration is simple and does not need expensive equipment. It also allows the flexibility to either infiltrate the entire leaf with one target gene or to introduce genes of multiple targets on one leaf. Thus, it can be used for laboratory-scale expression of recombinant proteins as well as for comparing different proteins or vectors for yield or expression kinetics.
The simplicity of syringe infiltration also suggests its utility in high school and college education for the subject of biotechnology. In contrast, vacuum infiltration is more robust and can be scaled up for commercial manufacture of pharmaceutical proteins. It also offers the advantage of being able to agroinfiltrate plant species that are not amenable to syringe infiltration, such as lettuce and Arabidopsis. Overall, the combination of syringe and vacuum agroinfiltration provides researchers and educators a simple, efficient, and robust methodology for transient protein expression. It will greatly facilitate the development of pharmaceutical proteins and promote science education.

Plant Biology, Issue 77, Genetics, Molecular Biology, Cellular Biology, Virology, Microbiology, Bioengineering, Plant Viruses, Antibodies, Monoclonal, Green Fluorescent Proteins, Plant Proteins, Recombinant Proteins, Vaccines, Synthetic, Virus-Like Particle, Gene Transfer Techniques, Gene Expression, Agroinfiltration, plant infiltration, plant-made pharmaceuticals, syringe agroinfiltration, vacuum agroinfiltration, monoclonal antibody, Agrobacterium tumefaciens, Nicotiana benthamiana, GFP, DsRed, geminiviral vectors, imaging, plant model

A Rapid and Efficient Method for Assessing Pathogenicity of Ustilago maydis on Maize and Teosinte Lines
Authors: Suchitra Chavan, Shavannor M. Smith. Institutions: University of Georgia.
Maize is a major cereal crop worldwide. However, susceptibility to biotrophic pathogens is the primary constraint to increasing productivity. U. maydis is a biotrophic fungal pathogen and the causal agent of corn smut on maize. This disease is responsible for significant yield losses of approximately $1.0 billion annually in the U.S.1 Several methods, including crop rotation, fungicide application, and seed treatments, are currently used to control corn smut2. However, host resistance is the only practical method for managing corn smut.
Identification of crop plants, including maize, wheat, and rice, that are resistant to various biotrophic pathogens has significantly decreased annual yield losses3-5. Therefore, a pathogen inoculation method that efficiently and reproducibly delivers the pathogen between the plant leaves would facilitate the rapid identification of maize lines that are resistant to U. maydis. As a first step toward identifying maize lines resistant to U. maydis, a needle injection inoculation method and a resistance reaction screening method were used to inoculate maize, teosinte, and maize x teosinte introgression lines with a U. maydis strain and to select resistant plants. Maize, teosinte, and maize x teosinte introgression lines, consisting of about 700 plants, were planted, inoculated with a strain of U. maydis, and screened for resistance. The inoculation and screening methods successfully identified three teosinte lines resistant to U. maydis. Here, a detailed needle injection inoculation and resistance reaction screening protocol for maize, teosinte, and maize x teosinte introgression lines is presented. This study demonstrates that needle injection inoculation is an invaluable tool in agriculture that can efficiently deliver U. maydis between the plant leaves and has provided plant lines resistant to U. maydis that can now be combined and tested in breeding programs for improved disease resistance.

Environmental Sciences, Issue 83, Bacterial Infections, Signs and Symptoms, Eukaryota, Plant Physiological Phenomena, Ustilago maydis, needle injection inoculation, disease rating scale, plant-pathogen interactions

Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description.
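The software-guided setup of experiment combinations described above can be illustrated with a minimal sketch. The factor names and levels below are hypothetical, not those used in the study:

```python
from itertools import product

# Hypothetical factors for a transient-expression DoE screen
# (names and levels are illustrative, not taken from the study).
factors = {
    "promoter": ["35S", "double-35S"],
    "plant_age_days": [35, 42, 49],
    "incubation_temp_C": [22, 25],
}

# Full-factorial design: every combination of factor levels.
names = list(factors)
runs = [dict(zip(names, levels)) for levels in product(*factors.values())]

print(len(runs))  # 2 * 3 * 2 = 12 runs
```

A real DoE study would normally use a fractional or D-optimal design generated by statistics software to reduce the run count; the full factorial is shown only because it is the simplest to reproduce.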
The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.

Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody

Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Authors: Moneim Shamloul, Jason Trusa, Vadim Mett, Vidadi Yusibov. Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production, as many technological obstacles to scale-up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and a monosaccharide had no effect on the protein production level. Infiltrating plants at 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues.
Infiltration with Agrobacterium laboratory strain GV3101 showed the highest protein production compared to Agrobacterium laboratory strains LBA4404 and C58C1 and wild-type Agrobacterium strains at6, at10, at77, and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of the target protein (influenza virus hemagglutinin).

Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria

Monitoring Plant Hormones During Stress Responses
Authors: Marie J. Engelberth, Jurgen Engelberth. Institutions: University of Texas.
Plant hormones and related signaling compounds play an important role in the regulation of plant responses to various environmental stimuli and stresses. Among the most severe stresses are insect herbivory, pathogen infection, and drought stress. For each of these stresses, a specific set of hormones and/or combinations thereof is known to fine-tune the responses, thereby ensuring the plant's survival. The major hormones involved in the regulation of these responses are jasmonic acid (JA), salicylic acid (SA), and abscisic acid (ABA). To better understand the role of individual hormones, as well as their potential interaction during these responses, it is necessary to monitor changes in their abundance in a temporal as well as a spatial fashion. For the easy, sensitive, and reproducible quantification of these and other signaling compounds, we developed a method based on vapor phase extraction and gas chromatography/mass spectrometry (GC/MS) analysis (1, 2, 3, 4). After extracting these compounds from the plant tissue with acidic aqueous 1-propanol mixed with dichloromethane, the carboxylic acid-containing compounds are methylated, volatilized under heat, and collected on a polymeric adsorbent.
After elution into a sample vial, the analytes are separated by gas chromatography and detected by chemical ionization mass spectrometry. The use of appropriate internal standards then allows for simple quantification by relating the peak areas of the analyte and the internal standard.

Plant Biology, Issue 28, Jasmonic acid, salicylic acid, abscisic acid, plant hormones, GC/MS, vapor phase extraction

Investigating the Microbial Community in the Termite Hindgut - Interview
Authors: Jared Leadbetter. Institutions: California Institute of Technology - Caltech.
Jared Leadbetter explains why the termite-gut microbial community is an excellent system for studying the complex interactions between microbes. The symbiotic relationship existing between the host insect and lignocellulose-degrading gut microbes is explained, as well as the industrial uses of these microbes for degrading plant biomass and generating biofuels.

Microbiology, Issue 4, microbial community, diversity

Expired CO2 Measurement in Intubated or Spontaneously Breathing Patients from the Emergency Department
Authors: Franck Verschuren, Maidei Gugu Kabayadondo, Frédéric Thys. Institutions: Université Catholique de Louvain, Cliniques Universitaires Saint-Luc.
Carbon dioxide (CO2), along with oxygen (O2), is one of the most important gases in the human body. The measurement of expired CO2 at the mouth has attracted growing clinical interest among physicians in the emergency department for various indications: (1) surveillance and monitoring of the intubated patient; (2) verification of the correct positioning of an endotracheal tube; (3) monitoring of a patient in cardiac arrest; (4) achieving normocapnia in intubated head trauma patients; (5) monitoring ventilation during procedural sedation. The video allows physicians to familiarize themselves with the use of capnography, and the text offers a review of the theory and principles involved.
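The internal-standard quantification used in the hormone-monitoring protocol above reduces to a simple ratio. A minimal sketch, with made-up peak areas and a response factor assumed to be 1.0:

```python
# Internal-standard quantification: the analyte amount is estimated from the
# ratio of analyte peak area to internal-standard peak area.
# All numbers below are made up for illustration; a response factor of 1.0
# is assumed (in practice it is determined from calibration standards).

def quantify(area_analyte, area_is, amount_is_ng, response_factor=1.0):
    """Return the estimated analyte amount in ng."""
    return (area_analyte / area_is) * amount_is_ng / response_factor

# Example: analyte peak area 45000, internal standard (100 ng spiked) area 30000.
amount_ng = quantify(45000, 30000, 100.0)
print(amount_ng)  # 150.0
```

The design choice here is the usual one for GC/MS work: because analyte and internal standard experience the same losses during extraction and derivatization, their area ratio is far more reproducible than either absolute peak area.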
In particular, the importance of CO2 for the organism, the relevance of measuring expired CO2, the differences between arterial and expired CO2, and the materials used in capnography, with their artifacts and traps, will be reviewed. Since the main reluctance to use expired CO2 measurement is due to a lack of correct knowledge of the physiopathology of CO2 on the physician's part, we hope that this explanation and the accompanying video sequences will help resolve this limitation.

Medicine, Issue 47, capnography, CO2, emergency medicine, end-tidal CO2

Testing Nicotine Tolerance in Aphids Using an Artificial Diet Experiment
Authors: John Sawyer Ramsey, Georg Jander. Institutions: Cornell University.
Plants may upregulate the production of many different secondary metabolites in response to insect feeding. One of these metabolites, nicotine, is well known to have insecticidal properties. One response of tobacco plants to herbivory, or being gnawed upon by insects, is to increase the production of this neurotoxic alkaloid. Here, we will demonstrate how to set up an experiment to address the question of whether a tobacco-adapted strain of the green peach aphid, Myzus persicae, can tolerate higher levels of nicotine than a strain of this insect that does not infest tobacco in the field.

Plant Biology, Issue 15, Annual Review, Nicotine, Aphids, Plant Feeding Resistance, Tobacco

Use of Arabidopsis eceriferum Mutants to Explore Plant Cuticle Biosynthesis
Authors: Lacey Samuels, Allan DeBono, Patricia Lam, Miao Wen, Reinhard Jetter, Ljerka Kunst. Institutions: University of British Columbia - UBC.
The plant cuticle is a waxy outer covering on plants that has a primary role in water conservation, but it is also an important barrier against the entry of pathogenic microorganisms. The cuticle is made up of a tough crosslinked polymer called "cutin" and a protective wax layer that seals the plant surface.
The waxy layer of the cuticle is obvious on many plants, appearing as a shiny film on the ivy leaf or as a dusty outer covering on the surface of a grape or a cabbage leaf, thanks to light-scattering crystals present in the wax. Because the cuticle is an essential adaptation of plants to a terrestrial environment, understanding the genes involved in plant cuticle formation has applications in both agriculture and forestry. Today, we'll show the analysis of plant cuticle mutants identified by forward and reverse genetics approaches.

Plant Biology, Issue 16, Annual Review, Cuticle, Arabidopsis, Eceriferum Mutants, Cryo-SEM, Gas Chromatography

Choice and No-Choice Assays for Testing the Resistance of A. thaliana to Chewing Insects
Authors: Martin De Vos, Georg Jander. Institutions: Cornell University.
Larvae of the small white cabbage butterfly are a pest in agricultural settings. This caterpillar species feeds on plants in the cabbage family, which includes many crops such as cabbage, broccoli, and Brussels sprouts. Rearing of the insects takes place on cabbage plants in the greenhouse. At least two cages are needed for rearing Pieris rapae: one for the larvae and the other to contain the adults, the butterflies. In order to investigate the role of plant hormones and toxic plant chemicals in resistance to this insect pest, we demonstrate two experiments. First, determination of the role of jasmonic acid (JA - a plant hormone often implicated in resistance to insects) in resistance to the chewing insect Pieris rapae. Caterpillar growth can be compared on wild-type and mutant plants impaired in production of JA. This experiment is considered "no choice" because larvae are forced to subsist on a single plant which synthesizes or is deficient in JA. Second, we demonstrate an experiment that investigates the role of glucosinolates, which are used as oviposition (egg-laying) signals.
Here, we use WT and mutant Arabidopsis impaired in glucosinolate production in a "choice" experiment in which female butterflies are allowed to choose to lay their eggs on plants of either genotype. This video demonstrates the experimental setup for both assays as well as representative results.

Plant Biology, Issue 15, Annual Review, Plant Resistance, Herbivory, Arabidopsis thaliana, Pieris rapae, Caterpillars, Butterflies, Jasmonic Acid, Glucosinolates

Measurement of Leaf Hydraulic Conductance and Stomatal Conductance and Their Responses to Irradiance and Dehydration Using the Evaporative Flux Method (EFM)
Authors: Lawren Sack, Christine Scoffoni. Institutions: University of California, Los Angeles.
Water is a key resource, and the plant water transport system sets limits on maximum growth and drought tolerance. When plants open their stomata to achieve a high stomatal conductance (gs) to capture CO2 for photosynthesis, water is lost by transpiration1,2. Water evaporating from the airspaces is replaced from cell walls, in turn drawing water from the xylem of leaf veins, in turn drawing from xylem in the stems and roots. As water is pulled through the system, it experiences hydraulic resistance, creating tension throughout the system and a low leaf water potential (Ψleaf). The leaf itself is a critical bottleneck in the whole-plant system, accounting for on average 30% of the plant hydraulic resistance3. Leaf hydraulic conductance (Kleaf = 1/leaf hydraulic resistance) is the ratio of the water flow rate to the water potential gradient across the leaf, and it summarizes the behavior of a complex system: water moves through the petiole and through several orders of veins, exits into the bundle sheath, and passes through or around mesophyll cells before evaporating into the airspace and being transpired from the stomata.
Kleaf is of strong interest as an important physiological trait for comparing species, quantifying the effectiveness of the leaf structure and physiology for water transport, and as a key variable to investigate for its relationship to variation in structure (e.g., in leaf venation architecture) and its impacts on photosynthetic gas exchange. Further, Kleaf responds strongly to the internal and external leaf environment3. Kleaf can increase dramatically with irradiance, apparently due to changes in the expression and activation of aquaporins, the proteins involved in water transport through membranes4, and Kleaf declines strongly during drought, due to cavitation and/or collapse of xylem conduits, and/or loss of permeability in the extra-xylem tissues due to mesophyll and bundle sheath cell shrinkage or aquaporin deactivation5-10. Because Kleaf can constrain gs and photosynthetic rate across species in well-watered conditions and during drought, and thus limit whole-plant performance, it may help determine species distributions, especially as droughts increase in frequency and severity11-14. We present a simple method for simultaneous determination of Kleaf and gs on excised leaves. A transpiring leaf is connected by its petiole to tubing running to a water source on a balance. The loss of water from the balance is recorded to calculate the flow rate through the leaf. When steady-state transpiration (E, mmol • m-2 • s-1) is reached, gs is determined by dividing E by the vapor pressure deficit, and Kleaf by dividing E by the water potential driving force determined using a pressure chamber (Kleaf = E / -ΔΨleaf, MPa)15.
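The two calculations described above can be sketched numerically; the transpiration rate, vapor pressure deficit, and leaf water potential below are made-up illustrative values, not measurements from the protocol:

```python
# Evaporative flux method (EFM): compute stomatal conductance (gs) and
# leaf hydraulic conductance (Kleaf) from steady-state transpiration.
# All input values are made up for illustration.

E = 2.5          # steady-state transpiration, mmol m^-2 s^-1 (from balance readings)
vpd = 1.25       # vapor pressure deficit (normalized mole-fraction form in practice)
psi_leaf = -0.5  # final leaf water potential, MPa (pressure chamber)

gs = E / vpd            # stomatal conductance
k_leaf = E / -psi_leaf  # Kleaf = E / delta-psi; driving force is 0 - psi_leaf

print(gs, k_leaf)  # 2.0 5.0
```

Note the sign convention: the water source sits at a potential of zero, so the driving force across the leaf is simply the negative of the measured (negative) leaf water potential.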
This method can be used to assess Kleaf responses to different irradiances and the vulnerability of Kleaf to dehydration14,16,17.

Plant Biology, Issue 70, Molecular Biology, Physiology, Ecology, Biology, Botany, Leaf traits, hydraulics, stomata, transpiration, xylem, conductance, leaf hydraulic conductance, resistance, evaporative flux method, whole plant

Testing the Physiological Barriers to Viral Transmission in Aphids Using Microinjection
Authors: Cecilia Tamborindeguy, Stewart Gray, Georg Jander. Institutions: Cornell University.
Potato leafroll virus (PLRV), from the family Luteoviridae, infects solanaceous plants. It is transmitted by aphids, primarily the green peach aphid. When an uninfected aphid feeds on an infected plant, it contracts the virus through the plant phloem. Once ingested, the virus must pass from the insect gut to the hemolymph (the insect blood) and then through the salivary gland in order to be transmitted back to a new plant. An aphid may take up different viruses when feeding on a plant; however, only a small fraction will pass through the gut and salivary gland, the two main barriers to transmission, to infect more plants. In the lab, we use physalis plants to study PLRV transmission. In this host, symptoms are characterized by stunting and interveinal chlorosis (yellowing of the leaves between the veins, with the veins remaining green). The video that we present demonstrates a method for performing aphid microinjection on insects that do not vector PLRV and tests whether the gut or salivary gland is preventing viral transmission.

Plant Biology, Issue 15, Annual Review, Aphids, Plant Virus, Potato Leaf Roll Virus, Microinjection Technique
Scientists suggest that type-1 and type-2 diabetes are the result of the same mechanism

Work by scientists at the Universities of Manchester and Auckland suggests that both major forms of diabetes are the result of the same mechanism. The findings, published today in the FASEB Journal (20 August), provide compelling evidence that juvenile-onset or type-1 diabetes and type-2 diabetes are both caused by the formation of toxic clumps of a hormone called amylin. The results, based on 20 years' work in New Zealand, suggest that type-1 and type-2 diabetes could both be slowed down and potentially reversed by medicines that stop amylin forming these toxic clumps. Professor Garth Cooper, from The University of Manchester, working with his University of Auckland-based research team, led the study. As well as producing insulin, cells in the pancreas also produce another hormone called amylin. Insulin and amylin normally work together to regulate the body's response to food intake. If they are no longer produced, then levels of sugar in the blood rise, resulting in diabetes and causing damage to organs such as the heart, kidneys, eyes, and nerves if blood sugar levels aren't properly controlled. However, some of the amylin that is produced can be deposited around cells in the pancreas as toxic clumps, which then, in turn, destroy the cells that produce insulin and amylin. The consequence of this cell death is diabetes. Research published previously by Professor Cooper suggested that this is the causative mechanism in type-2 diabetes. This new research provides strong evidence that type-1 diabetes results from the same mechanism.
The difference is that the disease starts at an earlier age and progresses more rapidly in type-1 than in type-2 diabetes because there is more rapid deposition of toxic amylin clumps in the pancreas. Professor Cooper's group expects to have potential medicines ready to go into clinical trials in the next two years, and it is anticipated that these will be tested in both type-1 and type-2 diabetic patients. These clinical trials are being planned with research groups in England and Scotland. Source: http://www.manchester.ac.uk/discover/news/article/?id=12649
Center for Data Analysis Report #09-02 on Department of Homeland Security
Effective Counterterrorism: State and Local Capabilities Trump Federal Policy
By Matt A. Mayer

Following the September 11, 2001, terrorist attacks, and then Hurricane Katrina, Americans generally assumed that authorities in Washington, D.C., would shoulder the primary responsibility for securing the safety of the American homeland. This assumption is understandable given that over the past half-century the federal government has amassed far more authority than was ever envisioned in the U.S. Constitution. Despite a rich history of civilian defense in which states and localities have taken responsibility for their own affairs, the U.S. government is federalizing more and more of homeland security. Not only is this approach constitutionally incorrect, but the states themselves could do the job better. Washington's one-size-fits-all solutions rarely succeed. The country's needs are too diverse, federal resources are physically too far from any one location to secure rapid responses, and federal decision-making is notoriously inept. The Heritage Foundation's Homeland Security and the States Project seeks to place responsibility where it should be according to the Constitution and where the most efficient, effective leadership resides. This project focuses on four areas where state and local leadership is preferable to federal oversight: preparedness for and resiliency against terrorist attacks and natural disasters, disaster response, interior enforcement of laws against illegal immigration, and counterterrorism. The project involves four key phases: research and outreach to state and local associations in Washington, D.C.; state and local outreach using 10 regional roundtables; drafting, circulating for review and comment, and finalizing a suite of solutions across the four areas of focus for states and localities to enact or adopt; and launching an adoption campaign.
As part of the research process, we have gathered the homeland security budget data for specific states, cities, and counties; analyzed disaster response activities at the federal level historically; compiled initiatives and legislative actions to combat illegal immigration; and conducted a survey of state and local counterterrorism capabilities. (See Appendix A.)

State and Local Law Enforcement Must

As The Heritage Foundation's previous report on state and local homeland security budgets vividly demonstrated, state and local resources far exceed federal resources.[1] Specifically, in addition to appropriating more money every year to domestic law enforcement efforts, states and localities employ over 1.1 million officers, compared to the roughly 25,000 agents working for the Federal Bureau of Investigation and Immigration and Customs Enforcement. This imbalance makes sense given the chronic public safety issues in American cities and states. Constitutionally, states and localities are the proper leads on domestic security issues. As Alexander Hamilton noted in The Federalist No. 17, "There is one transcendent advantage belonging to the province of the State governments, which alone suffices to place the matter in a clear and satisfactory light--I mean the ordinary administration of criminal and civil justice."[2] But the importance of a state and local lead on domestic counterterrorism goes beyond money, personnel, and even constitutional appropriateness. As the counterterrorism survey reveals, the vast majority of state and local law enforcement agencies use one or more of the three primary policing techniques--community policing, intelligence-led policing, and problem-oriented policing--to secure their jurisdictions. These techniques, first widely deployed by then-New York City Transit Authority Chief William Bratton in 1990, have resulted in significant reductions in crime all across the United States.
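The staffing disparity cited above can be checked with quick arithmetic:

```python
# Back-of-the-envelope check of the staffing figures cited in the report:
# over 1.1 million state and local officers vs. roughly 25,000 federal
# agents (FBI plus ICE).
state_local_officers = 1_100_000
federal_agents = 25_000

ratio = state_local_officers / federal_agents
print(ratio)  # 44.0 -- state and local officers outnumber federal agents about 44 to 1
```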
Unlike federal agents, who enter communities only as part of active investigations, state and local law enforcement personnel see becoming active parts of their community as a mark of success. Whether it is by walking an assigned beat or patrolling sections of a city by car, local law enforcement officers come to know their communities inside and out. This familiarity results in two critical developments: community members trust them and share key information about what is going on in the area, and law enforcement personnel develop a gut instinct that allows them to sense when someone or something just is not right. As the International Association of Chiefs of Police has noted, "Over the past decade, simultaneous to federally led initiatives to improve intelligence gathering, thousands of community policing officers have been building close and personal relationships with the citizens they serve." These activities provide them "immediate and unfettered access to local, neighborhood information as it develops...[where the people] provide them with new information."[3] In addition to their community knowledge, state and local governments house roughly 90 percent of America's prison population. Given the increasing concern that some prison inmates are susceptible to radicalization, the work being done in U.S. jails and prisons to monitor, detect, and thwart terrorist activities must remain closely connected to the same activities occurring in our communities, especially as potentially radicalized prisoners are paroled. This linkage becomes even more important as gangs and drug cartels consider connecting with terrorist groups. This investment in money, people, policing techniques, and communities gives America its best chance to detect and prevent a terrorist attack once the terrorists have entered the country or when homegrown radicals emerge. To be successful, state and local law enforcement must have the ability to do its job.
Developing State and Local

As detailed in the Target Capabilities List (TCL) developed by the U.S. Department of Homeland Security (DHS) in close partnership with state and local partners, there are five critical prevention capabilities that states and localities should possess to deal with the threat from terrorists: information-gathering and recognition of indicators and warnings; intelligence analysis and production; intelligence and information-sharing and dissemination; counterterrorism investigation and law enforcement; and chemical, biological, radiological, nuclear, and explosive (CBRNE) threat detection. Each capability has specific outcomes, objectives, preparedness measures, performance measures, resource elements, planning assumptions, and target-capability preparedness levels. The TCL capabilities assume a requisite level of staffing to perform the tasks within each capability.[4] (For details on each of the five TCL capabilities, see Table 1.) The 9/11 Commission's conclusions pertaining to the staffing capabilities needed by the FBI are consistent with the TCL personnel requirements and apply with equal force to state and local counterterrorism units. Specifically, units should possess "agents, analysts, linguists, and surveillance specialists who are recruited, and retained to ensure the development of an institutional culture imbued with a deep expertise in intelligence and national security."[5] Ideally, agencies will possess distinct counterterrorism units with dedicated full-time officers and a leadership structure that reports directly to the head of the agency. Agencies should ensure that being part of the counterterrorism units provides career advancement for their personnel so that they can attract and retain officers.
To do this, they "should fully implement a recruiting, hiring, and selection process for agents and analysts that enhances [their] ability to target and attract individuals with educational and professional backgrounds in intelligence, international relations, language, technology, and other relevant skills."[6] Although many small to medium-size cities may not need the full gamut of counterterrorism capabilities, many higher-risk jurisdictions, given al-Qaeda's global history of launching attacks in large urban centers, should have them. This requires city and county leaders to restructure their budgets to ensure that the requisite level of funding goes to acquiring, creating, and maintaining vibrant counterterrorism capabilities. DHS grant funding can then be used to supplement the state and local budgets to acquire the necessary TCL capabilities.

Regional Counterterrorism Today

Due to the sensitivity of publicizing existing capabilities of specific states, cities, and counties, the Heritage survey asked respondents to identify themselves by Federal Emergency Management Agency (FEMA) region and population. Heritage sent the counterterrorism survey to the principal state and local law enforcement officials (state superintendent or secretary, chief of police, and sheriff) in 129 jurisdictions across America. The list represented 28 states and the District of Columbia, as well as 54 cities and 46 counties. The cities and counties are jurisdictions that DHS has made eligible for the Urban Areas Security Initiative (UASI) grant program. (For the list of jurisdictions, see Appendix B.) Heritage received responses from 64 of the 129 jurisdictions. The 64 responses cover nine of the 10 FEMA regions. Heritage did not receive any responses from Region VIII (in Denver, Colorado) and received only one response from Region VII (in Kansas City, Missouri).
Those two regions, however, have only eight survey recipients because of their lack of higher-risk urban areas (only four UASI jurisdictions across the 10-state area). Critically, Heritage did receive responses from more than half of the recipients in four regions: II, IV, IX, and X. These four regions contain almost half of the higher-risk urban areas that received UASI funds in fiscal year 2008, including Atlanta, the San Francisco Bay Area, Los Angeles-Long Beach, Miami, New York City-Northern New Jersey, and Seattle. (For the distribution of recipients and responses by region, see Table 2.) Based on the survey responses, it is clear that much work remains to be done to ensure that the higher-risk states and localities possess the counterterrorism capabilities highlighted in the TCL that are necessary to keep their citizens safe from another terrorist attack. Specifically, of the 64 jurisdictions, only 42 possess counterterrorism units. Of those units, only 20 were deemed critical enough to have leadership that reported directly to the head of the agency. Staffing levels also were weak. Even though six jurisdictions had 31 or more "full-time officers [who] work on terrorism issues," 12 had no full-time officers, and another 30 had only one to five full-time officers. In terms of more specialized staffing, only three jurisdictions had 21 or more full-time intelligence analysts. Twenty jurisdictions did not have any full-time intelligence analysts, and 27 had between one and five intelligence analysts; together, these two groups represented 73 percent of the jurisdictions. The situation with full-time linguists was even worse: only two jurisdictions had 21 or more full-time linguists, and one had between 11 and 20. A total of 52 jurisdictions lacked even a single full-time linguist. Despite the lack of full-time linguists, many jurisdictions had some ability to translate and communicate in one of 16 different languages.
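The 73 percent figure follows directly from the raw counts quoted in the survey results; as a quick sanity check (a minimal tally in Python, using only numbers reported above):

```python
# Tallies of full-time intelligence analysts among the 64 responding
# jurisdictions, as quoted in the Heritage survey results.
total_respondents = 64
zero_analysts = 20         # jurisdictions with no full-time analysts
one_to_five_analysts = 27  # jurisdictions with one to five analysts

share = (zero_analysts + one_to_five_analysts) / total_respondents
print(f"{share:.0%}")      # prints "73%"
```

That is, 47 of the 64 responding jurisdictions had at most five full-time intelligence analysts.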
Not surprisingly, the language that most jurisdictions could handle was Spanish (36). The second most common was Arabic (24), followed by Russian (23), Korean (17), and Farsi (14). Other languages were Portuguese (12), Mandarin (11), Cantonese (10), Hindi (8), Urdu (7), Pashto (6), Punjabi (5), Bahasa Indonesian (4), Somali (4), Turkish (4), and Bangla (3). To close the gaps in intelligence and linguistics, states and localities need to partner with higher-education institutions to develop analytic and language programs. The jurisdiction with the most capabilities had a counterterrorism unit with 31 full-time officers, 21 intelligence analysts, and 21 linguists; could translate and communicate in all 16 languages; and belonged to a Joint Terrorism Task Force (JTTF). The jurisdiction with the least capabilities had no counterterrorism unit, no intelligence analysts, and no linguists; could not translate or communicate in any of the 16 languages; and did not belong to either a JTTF or a fusion center. Finally, when it comes to the continued interagency fight between DHS and the U.S. Department of Justice over which agency is the primary federal partner for state and local law enforcement on information- and intelligence-sharing, the Justice Department has far more connections to the nation's major law enforcement entities. Specifically, almost every one of the major law enforcement jurisdictions that responded to the survey (61) belonged to a JTTF, while only 43 jurisdictions participated in or had a fusion or data center. Because state and local law enforcement agencies already face budget constraints and very limited resources, the demands, in many cases redundant, by DHS and the Justice Department can overwhelm them.

What Should Be Done

Washington needs to end the dual-headed federal agency fight over which entity should be the primary federal partner of state and local law enforcement.
Rather, the federal government needs to present a federal enterprise solution to state and local governments. The bottom line is that too many of the United States' higher-risk jurisdictions lack the requisite level of counterterrorism capabilities to engage in effective prevention activities. This deficiency must end. First, state and local political leaders must stop underfunding their law enforcement agencies and thereby preventing those agencies from building robust counterterrorism programs. These elected officials must also stop cutting law enforcement budgets during budget crises. With the explosion of state and local budgets unrelated to public safety over the past decade, surely there are other agencies that could be downsized and still maintain minimum functionality. The nation's security must come first. Second, states and localities should reorganize their law enforcement agencies in accordance with the 9/11 Commission's recommendations. To attract top candidates, law enforcement agencies must make clear that a career in counterterrorism has the same upward mobility as a career in more traditional units. Candidates also need to know that their jobs will be secure when money gets tight. Third, there must be a realistic assessment of risk. Are there really 60 urban areas that can be classified as "high risk," or did DHS simply make a political decision when it enlarged the number of fully eligible urban areas from 35 to 60 last year? Although the DHS risk formula is classified, those who have seen it know that the curve on the chart begins to flatline once it hits the 30th urban area. By extending eligibility to 60 urban areas, DHS is merely diluting the finite federal funds that truly at-risk urban areas need to supplement their local budgets, thereby delaying the implementation of critical counterterrorism capabilities.
Since DHS has failed to make the tough choices, Congress must expressly limit the number of urban areas that are eligible for the UASI grant program to 35 or fewer. In the eight years since the 9/11 attacks, too much of the debate about how to fix domestic intelligence deficiencies has focused on the federal aspect. Whether the debate centered on the creation of the Information Sharing Environment (ISE) or the role of the Director of National Intelligence, there was too little serious discussion of the role of states and localities. Too often, Washington viewed states and localities as mere sources of information. Rather than spending yet more years talking about the need for state and local "information-sharing," which really just means sending information to the federal government, the United States should first properly apportion the roles and responsibilities between the federal government and states and localities based on the respective resources that each possesses (money, people, and experience). Then the federal government should help states and localities, especially the higher-risk jurisdictions, to fill gaps in their counterterrorism capabilities. Finally, the federal government should get out of the way of state and local law enforcement agencies so that they can do the job they have done since the founding of our country: protect us. Thankfully, it is not too late to do these things so that we increase the odds of preventing a terrorist attack on American soil. Matt A. Mayer is a Visiting Fellow at The Heritage Foundation, President and Chief Executive Officer of Provisum Strategies LLC, and an Adjunct Professor at Ohio State University. He has served as Counselor to the Deputy Secretary and Acting Executive Director for the Office of Grants and Training in the U.S. Department of Homeland Security. He is author of Homeland Security and Federalism: Protecting America from Outside the Beltway, which will be published in June 2009.
The author thanks all the state and local law enforcement agencies that responded to the Homeland Security and the States Counterterrorism Survey.

References

[1] Matt A. Mayer, "An Analysis of Federal, State, and Local Homeland Security Budgets," Heritage Foundation Center for Data Analysis Report No. CDA09-01, March 9, 2009, at http://www.heritage.org/Research/HomelandSecurity/cda0901.cfm.
[2] "The Federalist Papers: No. 17," Yale Law School, Lillian Goldman Law Library, Avalon Project, at http://avalon.law.yale.edu/18th_century/fed17.asp.
[3] International Association of Chiefs of Police, "Criminal Intelligence Sharing: A National Plan for Intelligence-Led Policing at the Local, State and Federal Levels," August 2002, p. 2, at http://www.cops.usdoj.gov/files/RIC/Publications/criminalintelligencesharing_web.pdf (May 12, 2009).
[4] U.S. Department of Homeland Security, Target Capabilities List: A Companion to the National Preparedness Guidelines, September 2007.
[5] National Commission on Terrorist Attacks Upon the United States, The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks Upon the United States (New York: W. W. Norton and Company, 2004), pp. 425-426.
[6] Ibid., p. 426.
M.A. in Justice Administration and Public Service

College of Saint Elizabeth offers a Master of Arts degree in Justice Administration and Public Service for students and professionals who are interested in leadership or advanced positions in law, law enforcement, or the criminal justice system. Students will learn about the research methods, statistics, program planning and management, policy analysis, and program evaluation systems used in criminal justice and public services. The curriculum consists of 30 credit hours of classes that include Legal and Critical Issues in Justice Studies, Ethical Issues in Human Services Leadership, and Cybercrime, Security, and the Law. Through the program, students will deepen their understanding of the criminal justice system and develop the leadership abilities needed to take on administrative roles within it. Students who successfully complete this degree should be able to use theory and research to analyze historical trends and current perspectives in criminal justice; use research to evaluate factors related to crime and the impact of crime on criminal justice policies, practices, and procedures; and evaluate the impact of U.S. constitutional policy and procedures on the criminal justice system. Additionally, students will be required to analyze the impact of the criminal justice process on victims and perpetrators, and to synthesize principles from criminal justice theories, processes, and practices to promote social justice and positive social change. They should also be able to compare the structural functions and interactions of law enforcement, courts, and corrections within the criminal justice system at the local, state, and federal levels.
Founded in 1899 by the Sisters of Charity of Saint Elizabeth, the College has a strong tradition of concern for the poor, of developing leadership in a spirit of service and social responsibility, and of commitment to the promotion of women as full partners in society. The College is proud to offer a large variety of programs to meet the challenges of the 21st century. Students have the following options in selecting a program of study that best meets their needs:

Undergraduate – the undergraduate program enables students to gain the knowledge and experiences they need to achieve their full potential academically, personally, spiritually, and professionally.

Continuing Studies – flexible evening, weekend, and online programs leading to a bachelor's degree meet the needs of adult learners seeking to advance their careers.

Graduate Programs – evening, weekend, and online master's programs and a fully subscribed doctoral program are designed to help students advance in high-demand fields and professions. These programs prepare our students to stand out as ethical, knowledgeable leaders who, acting in the spirit of service and active engagement, excel in their professions, communities, and daily lives.

Professional Certificate Programs – CSE offers evening, weekend, and online programs leading to certification in counseling, education, health care, management, ministry, nutrition, and other fields.
Core work: Iron vapor gives clues to formation of Earth and moon

(Nanowerk News) Recreating the violent conditions of Earth's formation, scientists are learning more about how iron vaporizes and how this iron rain affected the formation of the Earth and Moon. The study was published March 2 in Nature Geoscience. "We care about when iron vaporizes because it is critical to learning how Earth's core grew," said co-author Sarah Stewart, UC Davis professor of Earth and Planetary Sciences.

[Image caption: The Z machine is in Albuquerque, N.M., and is part of the Pulsed Power Program, which started at Sandia National Laboratories in the 1960s. Pulsed power concentrates electrical energy and turns it into short pulses of enormous power, which are then used to generate X-rays and gamma rays. (Image: Randy Montoya)]

Shock and release

Scientists from Lawrence Livermore National Laboratory, Sandia National Laboratories, Harvard University, and UC Davis used one of the world's most powerful radiation sources, the Sandia National Laboratories Z machine, to recreate conditions that led to Earth's formation. They subjected iron samples to high shock pressures in the machine, slamming aluminum plates into iron samples at extremely high speeds. They developed a new shock-wave technique to determine the critical impact conditions needed to vaporize the iron. The researchers found that the shock pressure required to vaporize iron is much lower than expected, which means more iron was vaporized during Earth's formation than previously thought.

Iron rain

Lead author Richard Kraus, formerly a graduate student under Stewart at Harvard, is now a research scientist at Lawrence Livermore National Laboratory. He said the results may shift how planetary scientists think about the processes and timing of Earth's core formation.
"Rather than the iron in the colliding objects sinking down directly to the Earth's growing core, the iron is vaporized and spread over the surface within a vapor plume," said Kraus. "This means that the iron can mix much more easily with Earth's mantle." After cooling, the vapor would have condensed into an iron rain that mixed into the Earth's still-molten mantle. This process may also explain why the Moon, which is thought to have formed by this time, lacks iron-rich material despite being exposed to similarly violent collisions. The authors suggest the Moon's reduced gravity could have prevented it from retaining most of the vaporized iron. Source: University of California - Davis Galactic winds push researchers to probe galaxies at unprecedented scale (w/video) The sun's core rotates four times faster than its surface - here's why it matters The mystery of the pulsating blue stars New theory on the origin of dark matter Primordial black holes may have helped to forge heavy elements Standard model of the universe withstands most precise test by Dark Energy Survey New clue to solving the mystery of the sun's hot atmosphere Hubble detects exoplanet with glowing water atmosphere Running out of gas: Gas loss puts breaks on stellar baby boom Moon of Saturn has chemical that could form 'membranes' (w/video) Astrophysicists map out the light energy contained within the Milky Way Cosmologists produce new maps of dark matter dynamics Booze in space: how the universe is absolutely drowning in the hard stuff Flashes of light on the dark matter How do you work out if a signal from space is a message from aliens? 
Physicists Strive to Build A Black Hole
By GEORGE JOHNSON

To see black holes, those gravitational whirlpools that suck in matter and even light, you need not just a powerful telescope but a bit of imagination. You can't observe the holes themselves, just the bad effects they have on their neighborhoods: gobs of stellar matter screaming out radiation as they are pulled toward what appears to be an omnivorous, bottomless pit. It is comforting to think that something so voracious is so far away. But there are times when physicists wish that they could take a closer look. Some of the newest ideas in high-energy physics suggest that this may soon be possible. The next generation of particle accelerators, like the Large Hadron Collider, which is under construction at CERN, the European physics center near Geneva, may be able to produce miniature black holes on demand. Some particle physicists say they may be in a better position than the cosmologists to establish, once and for all, that black holes are real. ''Future colliders could become black hole factories,'' said Dr. Steven B. Giddings, a physicist at the University of California at Santa Barbara. If some recent theories turn out to be right, the effect would be far from subtle, with one tiny black hole popping into existence every second and harmlessly disappearing with an unmistakable burst of energy. ''Black hole production should light up the detectors like Christmas trees,'' Dr. Giddings said. Dr. Greg Landsberg, a Brown University physicist who also works at the Fermi National Accelerator Laboratory in Batavia, Ill., is hoping to give the astronomers a run for their money. ''Despite what cosmologists like to tell the general public,'' he said, ''there is no compelling evidence that they have seen a single black hole. There will essentially be a competition to see who finds a black hole first.'' Cosmologists consider the case for black holes to be more persuasive all the time.
Just last week, they announced strong new evidence that one lurks at the core of the Milky Way. But being able to create and manipulate black holes in an accelerator would make them seem more palpable and open up new theoretical terrain to explore. Miniature black holes mark a point where the theory of gravity, called general relativity, comes together with quantum mechanics, which describes nature's other forces. Artificial black holes could be used to test ideas about merging the two disciplines into the long-sought Theory of Everything. ''We've been trying for a century, and we still don't fully understand black holes,'' said Dr. Andrew Strominger, a physicist at Harvard. ''If there is some possibility we actually could make them in an accelerator lab and watch what they do, that would be just fantastic. This could guide us toward understanding the fundamental mystery of how quantum mechanics and general relativity fit together.'' The experiments will also be used to explore one of the most jarring ideas in contemporary physics, that the universe consists of hidden dimensions, beyond the three we call home. According to the new theories, a reserve of extremely strong gravity lies dormant within the other dimensions. If physicists could tap that wellspring, they might be able to smash subatomic particles together so hard that they form black holes. If so, that in itself would attest to the possibility that the extra dimensions are real. ''It would be not just a smoking gun but a smoking cannon,'' said Dr. Landsberg, of Brown. Out in the cosmos, black holes are created when stars burn out and collapse. As a star becomes denser and more compact, its gravitational field intensifies, causing it to contract even more. Smaller stars reach a point where gravity can compress them no further. But if the star is very large to begin with, so much mass becomes concentrated into so small a volume that a threshold is passed. 
The star becomes overwhelmed by its own gravity and goes right on collapsing, pulling in everything that falls into its ''event horizon,'' the boundary beyond which nothing can escape. In theory you do not need stars to make black holes. Even two objects as small as subatomic particles would form a black hole if they were squeezed into an extremely small space. That, however, would require the energy of a particle accelerator the size of a galaxy, something that would never get through Congress. But there may be a cost-cutting alternative, a recipe for making black holes at energies around a trillion electron-volts -- within the range of the Large Hadron Collider. The plan is based on the possibility that scientists have been underestimating the full strength of gravity. Suppose that within spaces of less than a millimeter, where gravity has yet to be reliably measured, it is far more powerful. Then subatomic particles would not have to be compressed so severely before their gravity took over and sucked them into a black hole. Why would gravity behave that way? The answer requires taking a leap of faith: when one reaches into the submillimeter realm, extra dimensions open up. And when gravity has more dimensions in which to operate, it becomes far more intense.
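The scale problem the article describes can be made concrete with the ordinary four-dimensional Schwarzschild radius, r_s = 2GM/c^2. A rough, illustrative calculation (not from the article; the 1 TeV figure matches the "trillion electron-volts" mentioned above) shows just how far particle collisions fall short under standard gravity:

```python
# Illustrative: Schwarzschild radius for a mass-energy of ~1 TeV,
# assuming ordinary four-dimensional gravity (standard constants).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

E = 1e12 * eV   # 1 TeV collision energy, in joules
M = E / c**2    # equivalent mass, roughly 1.8e-24 kg

r_s = 2 * G * M / c**2
print(f"{r_s:.1e} m")   # ~2.6e-51 m
```

The result sits dozens of orders of magnitude below the Planck length (~1.6e-35 m), which is why forming a black hole this way would otherwise need an accelerator of absurd size; the large-extra-dimension scenarios the article goes on to describe strengthen gravity at short range, lowering the threshold toward the TeV scale.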
Chronic intake of green propolis negatively affecting the rat testis

Grasiela Dias de Campos Severi-Aguiar1, Suellen Josine Pinto2, Cristina Capucho1, Camila Andrea Oliveira3, Maria Aparecida Diamante1, Renata Barbieri2, Fabrícia Souza Predes4, Heidi Dolder1

1 Reproductive Biology Laboratory, Department of Structural and Functional Biology, Biology Institute, State University of Campinas – UNICAMP, Campinas, Brazil
2 Health Sciences Nucleus, Hermínio Ometto University Center, UNIARARAS, Araras, São Paulo, Brazil
3 Graduate Program in Biomedical Sciences, Hermínio Ometto University Center, UNIARARAS, Araras, São Paulo, Brazil
4 Department of Biological Sciences, State University of Paraná (UNESPAR), Paranaguá, Paraná, Brazil

Correspondence address: Grasiela Dias de Campos Severi-Aguiar, Departamento de Biologia Estrutural e Funcional, Universidade Estadual de Campinas – UNICAMP, CP 6109, CEP 13083-863, Campinas, São Paulo, Brazil. Source of support: None. Conflict of interest: None. DOI: 10.4103/0974-8490.199777

Abstract

Background: Human and animal evidence suggests that environmental toxicants may have an adverse impact on male reproductive health, reducing the population's reproductive output. Owing to renewed interest in natural products, some of them constitute effective alternatives to mitigate these effects. Propolis is a candidate for this use because of its intrinsic properties: in many situations, it has improved testicular damage and alleviated the toxic effects induced by exposure to environmental contaminants. Objective: The aim of this study was to investigate possible alterations of testicular parameters and to verify whether propolis intake is really advantageous to the testis, since it could affect rat reproductive function. Materials and Methods: Forty-eight adult male Wistar rats were divided into four groups (Co = control, T1 = 3 mg propolis/kg/day, T2 = 6 mg/kg/day, T3 = 10 mg/kg/day) and were exposed for 56 days.
The testes were assessed with morphometrical, stereological, and ultrastructural analyses. Cell proliferation and cell death were detected by immunocytochemistry. Connexin 43 (Cx43) and N-cadherin transcript levels were determined by reverse transcription-polymerase chain reaction. Results: Increased cell proliferation and Leydig cell volume were observed in T2; in contrast, Cx43 upregulation and cell death were observed in T3. Both T2 and T3 showed ultrastructural abnormalities in the testicular parenchyma. Conclusion: We recommend a cautious intake of propolis to avoid deleterious effects.

Keywords: Cell death and proliferation, connexin 43, morphometry, N-cadherin, stereology, ultrastructure

How to cite this article: Severi-Aguiar GD, Pinto SJ, Capucho C, Oliveira CA, Diamante MA, Barbieri R, Predes FS, Dolder H. Chronic intake of green propolis negatively affecting the rat testis. Phcog Res 2017;9:27-33.

Summary

Chronic intake of Brazilian green propolis induced N-cadherin downregulation and a decrease in seminiferous tubule volume. An increase in connexin 43 expression and cell death, and a decrease in Leydig cell (LC) number per testis, were observed at the concentration of 10 mg/kg/day. An increase in cell proliferation and in the cytoplasmic proportion and volume of LC was detected at the concentration of 6 mg/kg/day. The presence of empty spaces between spermatids and of malformed spermatozoa in the lumen of the seminiferous tubule was shown. This male reproductive disruption can be linked to phenolic compounds present in Brazilian green propolis.
Introduction

There is increasing evidence, from sound epidemiological studies in humans as well as from experiments with animals, to support the claim that environmental pollutants alter the regulation of puberty in males.[1] Pubescent development in schoolboys from an Indian village where endosulfan had been aerially sprayed for more than 20 years was compared with the development of boys from nonsprayed areas. It was shown that development of pubic hair, testes, and penis, as well as serum testosterone level, was negatively correlated with aerial endosulfan exposure.[1] The connection between reproductive disorders and environmental pollution that can lead to infertility is still a highly debated issue.[1],[2],[3],[4] In recent years, the therapeutic effect of natural products has been widely used to prevent or mitigate the damage of heavy metal contamination.[5],[6],[7] Because of its wide range of biological activities, propolis has also been extensively used in food and beverages to improve health and fight diseases.[8],[9] Baccharis dracunculifolia D. C. (Asteraceae) is a native plant of Brazil, commonly known as "Alecrim do campo" or "Vassoura," that is an ingredient of green propolis and is well known for its use by insects, mainly Apis mellifera L. (de Sousa et al., 2011).[10] Yousef and Salama[11] described increased activity of 17-ketosteroid reductase and increased testosterone levels in rats treated with propolis after aluminum chloride exposure. They suggested that it increased steroidogenesis, improved sperm proliferation, and hence increased fertility. Similarly, ElMazoudy et al.[12] and Yousef et al.[13] showed an increase in testosterone levels for rats and rabbits treated with propolis after, respectively, chlorpyrifos and triphenyltin exposure.
Capucho et al.[14] showed that animals chronically treated with Brazilian green propolis presented higher sperm production and greater epithelium height in the initial segment of the epididymis. These data point toward propolis being able to alter reproductive parameters, increasing steroidogenesis and sperm production, and suggest that this natural product could promote better male reproductive performance. This study was designed based on the hypothesis that the properties of Brazilian green propolis might be related to its ability to restore reproductive capacity. However, if propolis is able to do this, it may also be able to alter reproductive parameters. Hence, it can be asked: which cellular mechanisms are involved in its mode of action? Because there is little information about testicular changes after chronic propolis intake, we looked for biomarkers to evaluate testicular morphological and molecular events. Therefore, the aim of this study was to evaluate the effects of propolis on morphological and molecular parameters of the rat testis through morphometrical, stereological, ultrastructural, and immunocytochemical analyses. We also quantified cell proliferation and cell death with immunocytochemical assays, using, respectively, positive labeling for proliferating cell nuclear antigen (PCNA) and the terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) method. Connexin 43 (Cx43) and N-cadherin (N-cad) mRNA levels were determined by reverse transcription-polymerase chain reaction (RT-PCR).[15],[16],[17],[18] Another relevant aspect pointed out by Sforcin[9] is that propolis extracts have been used deliberately for a long time. Propolis-containing products present different recommendations according to the producers or sellers, not always mentioning the chemical composition, botanical source, and methods of extraction.
In addition, recommendations should include defined concentrations, doses, and consumption periods, in view of the expected outcomes. In this context, understanding the mechanisms of action could help to establish an ideal dose and intake period that would ensure the beneficial activity of propolis on reproduction while avoiding deleterious effects on the testis.

Materials and Methods

Subjects of research

Forty-eight adult male Wistar rats (Rattus norvegicus, 90 days old) were maintained throughout the experiment at the Animal Experimentation Center, Centro Universitário Hermínio Ometto (UNIARARAS) (3 animals/cage), on a 12-h light/dark cycle at a temperature of 25°C and 60% air humidity. The animals received water and Purina standard chow ad libitum. They were randomly divided into four groups (n = 12): a control group (Co) receiving only filtered water and three experimental groups (T1, T2, and T3) exposed to 3, 6, and 10 mg/kg/day, respectively, of the aqueous propolis extract. The study was approved by the Ethics Committee of Centro Universitário Hermínio Ometto, UNIARARAS (protocol 860/2009), and was conducted in accordance with the ethical guidelines of the Brazilian College of Animal Experimentation (COBEA).

Substance tested

Green propolis extract was obtained according to Sforcin et al.[19] and Pagliarone et al.[20] Propolis was produced by A. mellifera L. bees in an apiary located in Araras city (region of Campinas, SP, Brazil) and harvested using plastic nets in February 2010.
Green propolis was ground and prepared under sterile conditions according to Capucho et al.[14] Its composition depends on its plant source, but high-performance liquid chromatography analysis detected seven phenolic compounds in propolis, including caffeic acid phenethyl ester (CAPE), ferulic acid (FA), aromadendrin-4′-methyl ether, isosakuranetin, artepillin C, baccharin, and 2,2-dimethyl-6-carboxyethenyl-2H-1-benzopyran acid.[10],[38] To obtain good qualitative and quantitative results for these phenolic compounds, the best time for harvesting this plant is between December and April,[10] although Sforcin[9],[21] demonstrated that no seasonal effect was noted for Brazilian propolis composition and that variations throughout the year were predominantly quantitative.

Doses and administration route

The animals were weighed and received Brazilian green propolis, by gavage, for 56 days, the period necessary to complete a spermatogenic cycle.[22] Water or subacute doses of 3, 6, and 10 mg/kg/day (the LD50 is 2–7.3 g/kg in mice) were applied according to previous data in the literature.[14],[23] According to Burdock,[24] multiple doses between 200 and 5000 mg/kg body weight/day did not cause deaths in laboratory animals. Applying a safety factor of 1000 for humans, the calculated safe dose would be 1.4 mg/kg body weight/day, or approximately 70 mg/day. The animals received ketamine (70 mg/kg body weight) and xylazine (10 mg/kg body weight) before euthanasia.

Tissue preparation

Six animals from each group (24 in total) were perfused with 4% glutaraldehyde and 4% paraformaldehyde in 0.1 M sodium cacodylate buffer (pH 7.2) for 25–30 min. The testes were removed, post-fixed overnight in 1% osmium in the same buffer solution, and then weighed.
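The safe-dose extrapolation described under "Doses and administration route" can be checked with a short calculation. This is a minimal arithmetic sketch: the 1400 mg/kg/day starting value and the 50 kg human body weight are assumptions inferred from the reported results (1.4 mg/kg/day and approximately 70 mg/day), not figures stated explicitly in the text.

```python
# Safe-dose extrapolation sketch; input values are assumptions (see lead-in).
animal_no_effect_dose = 1400.0  # mg/kg bw/day; implied by the reported 1.4 mg/kg result
safety_factor = 1000.0          # safety factor applied for humans, per the text
human_body_weight = 50.0        # kg; assumed, since 1.4 mg/kg * 50 kg = 70 mg/day

safe_dose_per_kg = animal_no_effect_dose / safety_factor  # mg/kg bw/day
daily_dose = safe_dose_per_kg * human_body_weight         # mg/day

print(safe_dose_per_kg, daily_dose)  # 1.4 70.0
```

Note that 1400 mg/kg/day falls within the 200–5000 mg/kg/day range that Burdock[24] reported as causing no deaths in laboratory animals.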
Historesin (Leica)-embedded testis fragments were sectioned at 3 µm thickness and stained with toluidine blue in 1% sodium borate (TB) for structural and morphometrical evaluations.

Morphometry and stereology

The weight of the testicular parenchyma was obtained by subtracting the mass of the albuginea from the total right testis weight, providing the net weight of the organ's functional portion. Representative areas of testicular tissue were photographed with a Leica DM2000 microscope and subjected to morphometrical and stereological analyses with the Image-Pro Plus software, version 4.5 (Media Cybernetics). Seminiferous tubule epithelium height and tubule diameter were measured in 15 tubules per animal at ×200 magnification.[22] The stereological analysis of the testis was performed on 15 random testis cross-sections per animal, using a 494-point grid to determine the proportions of the testis components (epithelium, lumen, and interstitium) in the experimental groups. The volumetric proportions of the Leydig cells (LCs) were assessed with a 494-point grid mask placed over microscope fields at ×1000 magnification, counting points over nucleus and cytoplasm until 1000 points per animal were reached. The volumetric proportions of the intertubular space components (lymphatic space, connective tissue, blood vessels, macrophages, and LCs) were assessed in the same way, until 3000 points per animal were reached. The volume of each component, expressed in milliliters, was determined as the product of the testicular volume and the respective volumetric proportion.
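The point-counting procedure above (component volume = testicular volume × fraction of grid points hitting the component) can be sketched as follows. The point counts and testis volume below are hypothetical illustration values, not data from this study:

```python
# Point-count stereology sketch: component volumes from grid-point fractions.
# Counts and testis volume are made-up illustration values, not the study's data.
points = {"epithelium": 310, "lumen": 90, "interstitium": 94}  # hits on a 494-point grid
total_points = sum(points.values())  # 494
testis_volume_ml = 1.5               # assumed net testicular volume (ml)

# Volumetric proportion of each component = points hit / total points.
proportions = {k: v / total_points for k, v in points.items()}

# Component volume = testis volume * volumetric proportion.
volumes_ml = {k: p * testis_volume_ml for k, p in proportions.items()}

# Proportions sum to 1, so the component volumes sum back to the testis volume.
print(round(sum(volumes_ml.values()), 6))  # 1.5
```

In practice the counts would be accumulated over the 15 cross-sections per animal before the proportions are taken.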
Since the specific gravity of the testis is nearly 1.0, its volume was considered equal to its weight.[25] To obtain a more precise net testis volume, 6.5% of the weight, corresponding to the tunica albuginea, was excluded.[22] The proportion between nucleus and cytoplasm of LCs was assessed using a 494-point grid mask placed over images at ×1000 magnification; 1000 points over LC nuclei and cytoplasm were counted per animal. The nuclear diameter of LCs was obtained by measuring 30 nuclei per animal. The nuclear volume was calculated, and the individual LC volume was obtained from the nuclear volume and the nucleus-to-cytoplasm proportion. The number of LCs per testis was obtained by dividing the total nuclear volume of these cells by the average individual nuclear volume.[25]

Immunocytochemistry for cell death and proliferation

Paraffin-embedded 5 µm thick testis sections were subjected to immunodetection of DNA fragmentation, indicative of cell death, using the ApopTag Plus kit (Chemicon Int., Temecula, CA, USA) and the in situ Cell Death Detection kit (ISCDDK, Roche, Germany) according to the manufacturers' instructions. Both kits use an antifluorescein peroxidase-conjugated antibody and diaminobenzidine to develop the reaction. The kit's positive control was used, and the negative (reaction) control consisted of omission of the TUNEL reaction enzyme. For PCNA detection, testicular sections were incubated with the primary antibody (anti-PCNA polyclonal rabbit antibody; Abcam, Cambridge, MA, USA; ab2426; 1:1000) at 37°C for 1 h and then with the secondary antibody (goat anti-rabbit IgG polyclonal antibody, HRP-conjugated; Abcam, Cambridge, MA, USA; ab97200; 1:1000) for 2 h at room temperature in a dark chamber. Immunostaining was visualized using 0.05% (w/v) 3-amino-9-ethylcarbazole (Sigma-Aldrich, USA; a6926) in PBS with 0.01% (v/v) hydrogen peroxide.
PCNA-labeled cells and TUNEL-positive nuclei were identified by brown nuclear staining and counted.[17],[26] Four slides per animal were mounted, each with six cross-sections at different depths, and 15 random fields per animal were obtained (×40 objective lens), in which positive nuclei were counted in the tubular epithelium.

Transmission electron microscopy

After whole-body perfusion fixation, the specimen fragments were kept in the same fixative for 24 h. The specimens were then rinsed three times with 0.1 M sodium phosphate buffer (pH 7.2), post-fixed in 1% osmium tetroxide, rinsed and dehydrated in an increasing acetone series, and embedded in epoxy resin. Ultrathin sections were cut and contrasted with 2% uranyl acetate and 2% lead citrate before observation with a transmission electron microscope (Zeiss Leo 906).

Detection of connexin 43 and N-cadherin mRNA by reverse transcription-polymerase chain reaction

Total RNA was isolated from 100 mg of testicular parenchyma from the six non-perfused animals of each group (24 in total) with the TRIzol® reagent (Invitrogen, Carlsbad, CA, USA), including digestion of contaminating DNA with DNase I, amplification grade (Invitrogen), following the manufacturer's instructions. RNA purity and concentration were determined spectrophotometrically. cDNA was synthesized from 2 µg RNA in the presence of dithiothreitol, deoxyribonucleotide triphosphates, random primers, RNaseOUT, and SuperScript™ II Reverse Transcriptase (Invitrogen) in a final volume of 20 µl. Semiquantitative RT-PCR was used to amplify Cx43 and N-cad mRNA and to compare their expression between the three groups exposed to the aqueous propolis extract and the control group. The PCR conditions were as reported previously by Gagliano et al.[27] The signal intensities of the bands were measured densitometrically using the Scion Image analysis software (Scion Corp., Frederick, MD, USA); each value was determined as the mean of three densitometric readings.
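The densitometric quantification described above (triplicate readings averaged per band, then the target signal expressed relative to the β-actin housekeeping band from the same sample) can be sketched as follows. All band intensities are hypothetical illustration values, not measurements from this study:

```python
# Semiquantitative RT-PCR densitometry sketch.
# Band intensities are made-up illustration values, not the study's data.
def mean(readings):
    return sum(readings) / len(readings)

cx43_readings = [120.0, 118.0, 122.0]   # three densitometric readings of the Cx43 band
actin_readings = [200.0, 198.0, 202.0]  # three readings of the beta-actin band

cx43_signal = mean(cx43_readings)       # average of triplicate readings
actin_signal = mean(actin_readings)

# Relative expression, normalized to the housekeeping gene.
relative_expression = cx43_signal / actin_signal

print(relative_expression)  # 0.6
```

The same ratio would be computed per animal and per gene before the group means are compared statistically.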
The results were expressed as average ratios of the relative expression of the transcripts, normalized to β-actin as the housekeeping control gene.

Statistical analysis

Values of the control and treated groups were compared by one-way analysis of variance (ANOVA) followed by Tukey's test; P < 0.05 was considered statistically significant. All values are expressed as mean ± standard deviation.

Results

Morphometry and stereology

According to the data presented in [Table 1], the lumen proportion of the seminiferous tubules decreased significantly and the interstitium proportion increased significantly in the T3 group. The epithelium proportion increased significantly in the T1 group. All exposed groups showed a significant decrease in seminiferous tubule volume, and the T3 group showed reduced tubule length. No significant alteration was observed in the proportions of the interstitial components compared with the control group; however, major alterations were observed in the LCs.

Table 1: Testicular morphometry and stereology of adult rats exposed to Brazilian green propolis (mean ± standard deviation)

All treated groups presented a significant decrease in LC nuclear proportion and a significant increase in LC cytoplasmic proportion and volume when compared to the control group. LC nuclear volume was significantly increased in T3. LC volume was significantly increased in the T2 and T3 groups, and LC number was significantly decreased in the T3 group.

Immunocytochemistry for cell death and proliferation

There was no significant difference in the number of positive labels for either cell death or proliferation [Figure 1] in any of the groups studied, probably because some of the values deviated widely from the mean (range 1.41–9.7). However, an evident tendency toward induction of cell proliferation can be observed at the dose of 6 mg/kg/day.
Cell proliferation decreased at the highest concentration (10 mg/kg/day), whereas cell death induction occurred.

Figure 1: Testicular cross-sections submitted to the terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling method and proliferating cell nuclear antigen labeling, showing positive nuclei in brown (arrowhead), and the number of positive nuclei in all groups studied

Ultrastructural analysis

The ultrastructure of the control animals showed healthy Sertoli cells with large oval nuclei. The Sertoli cell cytoplasm extends from the basal lamina to the lumen of the seminiferous tubules, enveloping the adjacent germ cells, and contained numerous lipid droplets [Figure 2]a. After propolis exposure, we observed a dose-dependent increase in lipid content and endoplasmic reticulum swelling with the presence of electron-dense content [Figure 3]a and b. Damage to the blood–testis barrier can also be observed [Figure 3]b.

Figure 2: Ultrathin testicular sections of the control animals analyzed by transmission electron microscopy. Sertoli cell supported by a basal lamina; numerous lipid droplets, endoplasmic reticulum swelling, and electron-dense materials are observed in the cytoplasm of the Sertoli cells (a). Spermatocytes and round spermatids at the seminiferous tubules are closely linked (b). Leydig cells in the testes of the control group showed lipid droplets, mitochondria, and Golgi apparatus (c). Bar = 10 μm (a), 5 μm (b and c). SC: Sertoli cell; bl: Basal lamina; ld: Lipid droplets; ER: Endoplasmic reticulum swelling; dm: Dense materials; sp: Spermatocytes; spm: Spermatids; mi: Mitochondria; GA: Golgi apparatus

Figure 3: Ultrathin testicular sections of the exposed groups analyzed by transmission electron microscopy. Sertoli cell supported by a basal lamina.
Numerous lipid droplets, endoplasmic reticulum swelling, and electron-dense materials are observed in the cytoplasm of the Sertoli cells (a and b). Damage to the structure of the blood–testis barrier can be observed in T3 (b); empty spaces between Sertoli cells and spermatids were clearly visible after propolis exposure (c and d). Residual bodies in the cytoplasm of Sertoli cells (d). Spermatozoa with accumulated cytoplasm (d) and head and tail abnormalities (d). Bar = 10 μm (a and d), 5 μm (b, c, and e), 2 μm (f). SC: Sertoli cell; bl: Basal lamina; ld: Lipid droplets; ER: Endoplasmic reticulum swelling; dm: Dense materials; BTB: Blood–testis barrier; rb: Residual bodies; ac: Accumulated cytoplasm

A large number of spermatocytes and round spermatids occurred near the lumen of the seminiferous tubules in the control group [Figure 2]b. The spermatocytes were round with prominent nuclei; the nuclei had a distinct chromatin network and well-defined nuclear membranes. Characteristic of the round spermatids are well-defined nuclei with distinct nuclear membranes and filamentous chromatin, and cytoplasm occupied by a large number of mitochondria. The formation of the acrosome, with a marked presence of Golgi vesicles, appeared normal [Figure 2]b. Empty spaces between Sertoli cells and spermatids were clearly visible after propolis exposure [Figure 3]c. Electron micrographs showed deformed elongating spermatids undergoing phagocytosis by Sertoli cells, resulting in engulfed spermatids and many residual bodies in the cytoplasm of the Sertoli cells [Figure 3]d. The seminiferous tubule lumen contained spermatozoa with accumulated cytoplasm and head and tail abnormalities [Figure 3]d. LCs in the testes of the control group showed normal ultrastructure, with intact nuclei, varied numbers of lipid droplets and organelles, many mitochondria, and a Golgi complex [Figure 2]c.
Their nuclear and cytoplasmic volumes increased in the exposed groups, and both cytoplasm and nucleus were denser [Figure 3]e and [Figure 3]f.

Connexin 43 and N-cadherin mRNA levels in testicular tissue

To investigate whether there was a difference in the modulation of Cx43 and N-cad in rat testis treated with different concentrations of Brazilian green propolis, the mRNA expression of these genes was detected by RT-PCR. As shown in [Figure 4], the RT-PCR assay showed a significant increase in the expression of Cx43 transcripts in T3 as compared to the control group. In contrast, N-cad transcript levels decreased significantly in all exposed groups.

Figure 4: Analysis of mRNA expression by semiquantitative reverse transcription-polymerase chain reaction. (a) Bars represent densitometric analysis of connexin 43 and N-cadherin mRNA expression in control and exposed animals. (b) Changes in mRNA are expressed as normalized densitometric units relative to β-actin mRNA. Values are the mean ± standard error of the mean; P < 0.05 indicates statistical significance. Transcriptional levels obtained by semiquantitative reverse transcription-polymerase chain reaction are shown in electrophoretic images. (*) Significant difference from the control group (*1 = 0.0001, *2 = 0.0011, *3 = 0.0107, *4 = 0.021, *5 = 0.0038, *6 = 0.0047, *7 = 0.0142 and *8 = 0.0058); (#) significant difference from the T1 group (#1 = 0.002, #2 = 0.0077, #3 = 0.0075, #4 = 0.048). Co = control; T1 = 3 mg/kg/day; T2 = 6 mg/kg/day; T3 = 10 mg/kg/day

Discussion

Researchers have demonstrated an increase in testosterone levels in rats and rabbits treated with propolis after xenobiotic exposure.[11],[12],[13] This effect raises the following question: if propolis is able to improve androgen secretion, could it cause any deleterious effect on the testis? To answer this question, it is necessary to understand the mechanism of action of propolis in the testis.
In this respect, our study constitutes an important tool to clarify the cellular responses that occur in the testicular parenchyma, highlighting the importance of basic cell biology in developing an integrated approach to clinical application. In the present study, we report that chronic intake of propolis indeed affects testicular homeostasis. If so, what is the underlying mechanism?

Our study demonstrated, for the first time, that chronic propolis intake at a concentration of 10 mg/kg/day increased Cx43 expression in the T3 group, which could lead to testicular cell death.[28] Kameritsch et al. showed that Cx expression enhances apoptosis induction and suggested that this effect is due to the transfer of proapoptotic signals, which corroborates our results.

Apoptosis is a natural event during spermatogenesis. It selectively removes dysfunctional or excess germ cells and limits germ cell number to ensure that the fixed number of Sertoli cells can provide enough nutrients to nurture and sustain the continuous germ cell generations.[29] Spermatogenesis depends on the presence of gap junction (GJ) channels between testicular cells, mainly between LCs, between Sertoli cells, and between Sertoli and germ cells.[16] According to Kidder and Cyr,[30] GJ intercellular communication (GJIC) within the testis and epididymis represents a critical aspect of male reproductive function and fertility.
The implications of this mode of intercellular communication for male fertility remain a poorly understood but important facet of male reproduction. Connexins are transmembrane proteins that form GJs; they reside in lipid raft domains of the plasma membrane and function in this lipid environment.[31] Cx43 is the most highly expressed connexin during spermatogenesis.[16] In contrast, reduced Cx43 expression and disruption of GJs have been observed in various cancers and after toxicant exposures.[17],[31],[32] Kidder and Cyr[30] identified Cx43 as a target of environmental toxicants.

Wang et al.[31] claimed that exploring drugs that increase GJs would be of considerable value for the treatment of tumors. We found that the higher concentration of propolis (10 mg/kg/day) increased Cx43 expression and cell death in the testis; this mechanism could help to avoid tumorigenesis.

Another cell junction protein that we analyzed, N-cad, is implicated in various aspects of tumorigenesis, including tumor cell survival, migration, and invasion, and its upregulation has been observed in many cancers.[32] After propolis intake, we demonstrated N-cad downregulation. This protein belongs to the cadherin superfamily, one of the pivotal components of the adherens junction; it mediates calcium-dependent homotypic intercellular adhesion and has been widely associated with spermatogenesis impairment in mice and rats.[17] Upregulation of N-cad expression has been reported in response to cellular stress after phthalate exposure and neonatal exposure to bisphenol A.[17] As hypothesized by Domke et al.,[33] in the seminiferous tubules N-cad has an additional developmental biological role in topogenesis and specific cell–cell interactions, including the local stabilization of other cell–cell connection structures, as has been proposed for certain synapses.
Thus, it is probable that the propolis-induced decrease in N-cad expression would initially maintain the ideal microenvironment to guarantee the spermatogenic process. However, a further decrease in N-cad could affect the hematotesticular barrier, or the junctions between the Sertoli cell prolongations that penetrate between the germ cells, leading to their regression and leaving the empty spaces observed ultrastructurally between spermatids. Our supposition is corroborated by Bremmer et al.,[34] who showed an increase in N-cad protein expression in LC tumors, suggesting that N-cad could play a role in the pathogenesis of this tumor.

However, the mechanisms activated by the cells were not sufficient to avoid cellular damage. Round spermatids released from Sertoli cell prolongations, and malformed spermatozoa, were observed in the lumen, although we had already demonstrated an increase in daily sperm production (DSP).[14] LCs were also affected. These results contrast with other studies [11],[12],[13] in which propolis was advantageous.

Capucho et al.[14] showed increased epithelium height in the initial caput segment at a dose of 10 mg/kg/day of propolis and suggested that this would be a compensatory response to help prevent tissue damage and to maintain the morphological and functional integrity of the organ and the spermatozoa. They hypothesized that the epithelial cells increase their secretion of molecules or their endocytic activity. Our results corroborate this hypothesis, since malformed spermatozoa were observed arriving at the epididymis with a still incomplete maturation process.

To understand our results, it is necessary to take into account that we used Brazilian green propolis and to review some aspects of propolis composition, mentioned in the Materials and Methods section. This information is vital to further understand the role of propolis in testicular function.
FA (4-hydroxy-3-methoxycinnamic acid) is a ubiquitous phenolic compound that is a bioactive ingredient of many foods as well as of propolis. Recently, Roy et al.[35] investigated the effect of FA (50 mg/kg, alternate-day administration) on postdiabetic rat testicular damage and provided evidence that FA reduced oxidative stress, proinflammatory cytokines, and apoptosis in the testes, among other parameters, and upregulated serum testosterone, contributing to its protective effects against postdiabetic complications such as infertility. They suggested that FA inhibits testicular damage in rats by diminishing oxidative stress, since its antioxidant properties are well known. However, it is known that antioxidant compounds can develop pro-oxidant activity when used for a long period and/or at high doses.[36] This event could be responsible for our results.

A study by Park and Han[37] showed that leaves and tubers of Smallanthus sonchifolius (yacon, Asteraceae), which are rich in antioxidant compounds such as FA and CAPE, increased sperm number and serum testosterone level in rats, similar to what has been described for propolis.

Another plant polyphenolic compound present in propolis, CAPE, is known as an antioxidant, anti-inflammatory, antitumor, and antimetastatic agent.[38] A study by Abdallah et al.[38] showed that CAPE alleviated the detrimental effects of a pyrethroid insecticide on testicular histopathological alterations, semen quality, and antioxidant enzyme activities in the rat testis.

There is some evidence linking CAPE action and the results of the present study. First, Na et al.[39] showed that CAPE restored GJs, phosphorylation of Cx43, and its normal location on the plasma membrane in WB-ras 2 cells (a ras-transformed rat liver epithelial cell line).
Second, Chen et al.[40] showed that CAPE quickly entered leukemic cells, caused glutathione depletion, mitochondrial dysfunction, and caspase-3 activation, and could inhibit the growth of human cancer cells. Thus, we suggest that CAPE could be responsible for the effects induced by propolis intake at 10 mg/kg/day, such as Cx43 upregulation, cell death induction, and reduced cell proliferation.

Another possibility to be considered is that flavonoids can increase GJIC through alteration of connexin protein expression. This hypothesis is in agreement with the data of Yu et al.,[41] who demonstrated that flavonoids extracted from Litsea coreana Levl., a traditional Chinese medicine, increased oxaliplatin cytotoxicity, inducing cell death and apoptosis, by enhancing GJIC through elevated Cx43 protein expression. Subsequent studies are necessary to confirm this possibility.

Conclusion

Results reported herein indicate that chronic Brazilian green propolis intake for 56 days triggered different cellular events according to propolis concentration, demonstrating a dose-dependent response mechanism. Six milligrams of propolis/kg/day increased cell proliferation and LC volume, probably leading to increased DSP. On the other hand, 10 mg of propolis/kg/day induced Cx43 upregulation and cell death. Both concentrations caused morphological damage to germ cells, Sertoli cells, and LCs. Our data raise the concern that male reproductive homeostasis was altered in spite of other beneficial effects of propolis. Further studies are needed to fully understand the effects of Brazilian green propolis on the reproductive system and to establish an ideal dose and intake period that would ensure its beneficial activity while avoiding deleterious effects.

Acknowledgment

We thank Fundação Hermínio Ometto for financial support, Prof. Dra. Fernanda O. G. de Gaspi for providing the Brazilian green propolis extract, and Prof. Franco D. C.
Pereira for compiling the images.

Financial support and sponsorship

This work was supported by the Fundação Hermínio Ometto and PIBIC/CNPq.

Conflicts of interest

There are no conflicts of interest.

References

1. Magnusson U, Ljungvall K. Environmental pollutants and dysregulation of male puberty – A comparison among species. Reprod Toxicol 2014;44:23-32.
2. Meeker JD. Exposure to environmental endocrine disruptors and child development. Arch Pediatr Adolesc Med 2012;166:952-8.
3. Emeville E, Giton F, Giusti A, Oliva A, Fiet J, Thomé JP, et al. Persistent organochlorine pollutants with endocrine activity and blood steroid hormone levels in middle-aged men. PLoS One 2013;8:e66460.
4. Zawatski W, Lee MM. Male pubertal development: Are endocrine-disrupting compounds shifting the norms? J Endocrinol 2013;218:R1-12.
5. Monteiro JC, Predes FS, Matta SL, Dolder H. Heteropterys aphrodisiaca infusion reduces the collateral effects of cyclosporine A on the testis. Anat Rec (Hoboken) 2008;291:809-17.
6. Leite RP, Wada RS, Monteiro JC, Predes FS, Dolder H. Protective effect of Guaraná (Paullinia cupana var. sorbilis) pre-treatment on cadmium-induced damages in adult Wistar testis. Biol Trace Elem Res 2011;141:262-74.
7. Leite RP, Predes FS, Monteiro JC, Freitas KM, Wada RS, Dolder H. Advantage of Guaraná (Paullinia cupana Mart.) supplementation on cadmium-induced damages in testis of adult Wistar rats. Toxicol Pathol 2013;41:73-9.
8. Franchi GC Jr., Moraes CS, Toreti VC, Daugsch A, Nowill AE, Park YK. Comparison of effects of the ethanolic extracts of Brazilian propolis on human leukemic cells as assessed with the MTT assay. Evid Based Complement Alternat Med 2012;2012:918956.
9. Sforcin JM. Biological properties and therapeutic applications of propolis. Phytother Res 2016;30:894-905.
10. de Sousa JP, Leite MF, Jorge RF, Resende DO, da Silva Filho AA, Furtado NA, et al. Seasonality role on the phenolics from cultivated Baccharis dracunculifolia.
Evid Based Complement Alternat Med 2011;2011:464289.
11. Yousef MI, Salama AF. Propolis protection from reproductive toxicity caused by aluminium chloride in male rats. Food Chem Toxicol 2009;47:1168-75.
12. ElMazoudy RH, Attia AA, El-Shenawy NS. Protective role of propolis against reproductive toxicity of chlorpyrifos in male rats. Pestic Biochem Physiol 2011;101:175-81.
13. Yousef MI, Kamel KI, Hassan MS, El-Morsy AM. Protective role of propolis against reproductive toxicity of triphenyltin in male rabbits. Food Chem Toxicol 2010;48:1846-52.
14. Capucho C, Sette R, de Souza Predes F, de Castro Monteiro J, Pigoso AA, Barbieri R, et al. Green Brazilian propolis effects on sperm count and epididymis morphology and oxidative stress. Food Chem Toxicol 2012;50:3956-62.
15. Elkin ND, Piner JA, Sharpe RM. Toxicant-induced leakage of germ cell-specific proteins from seminiferous tubules in the rat: Relationship to blood-testis barrier integrity and prospects for biomonitoring. Toxicol Sci 2010;117:439-48.
16. Gilleron J, Carette D, Durand P, Pointis G, Segretain D. Connexin 43 a potential regulator of cell proliferation and apoptosis within the seminiferous epithelium. Int J Biochem Cell Biol 2009;41:1381-90.
17. Salian S, Doshi T, Vanage G. Neonatal exposure of male rats to Bisphenol A impairs fertility and expression of Sertoli cell junctional proteins in the testis. Toxicology 2009;265:56-67.
18. Kahiri CN, Khalil MW, Tekpetey F, Kidder GM. Leydig cell function in mice lacking connexin43. Reproduction 2006;132:607-16.
19. Sforcin JM, Fernandes A Jr, Lopes CA, Bankova V, Funari SR. Seasonal effect on Brazilian propolis antibacterial activity. J Ethnopharmacol 2000;73:243-9.
20. Pagliarone AC, Orsatti CL, Búfalo MC, Missima F, Bachiega TF, Júnior JP, et al. Propolis effects on pro-inflammatory cytokine production and Toll-like receptor 2 and 4 expression in stressed mice. Int Immunopharmacol 2009;9:1352-6.
21. Sforcin JM. Propolis and the immune system: A review.
J Ethnopharmacol 2007;113:1-14.
22. Russell LD, Ettlin RA, Sinha Hikim AP, Clegg ED. Mammalian spermatogenesis. In: Russell LD, Griswold MD, editors. Histological and Histopathological Evaluation of the Testis. Clearwater: Cache River Press; 1990. p. 1-38.
23. Mani F, Damasceno HC, Novelli EL, Martins EA, Sforcin JM. Propolis: Effect of different concentrations, extracts and intake period on seric biochemical variables. J Ethnopharmacol 2006;105:95-8.
24. Burdock GA. Review of the biological properties and toxicity of bee propolis (propolis). Food Chem Toxicol 1998;36:347-63.
25. Mori H, Christensen AK. Morphometric analysis of Leydig cells in the normal rat testis. J Cell Biol 1980;84:340-54.
26. Huo S, Xu Z, Zhang X, Zhang J, Cui S. Testicular denervation in prepuberty rat inhibits seminiferous tubule development and spermatogenesis. J Reprod Dev 2010;56:370-8.
27. Gagliano N, Donne ID, Torri C, Migliori M, Grizzi F, Milzani A, et al. Early cytotoxic effects of ochratoxin A in rat liver: A morphological, biochemical and molecular study. Toxicology 2006;225:214-24.
28. Kameritsch P, Khandoga N, Pohl U, Pogoda K. Gap junctional communication promotes apoptosis in a connexin-type-dependent manner. Cell Death Dis 2013;4:e584.
29. Lee NP, Leung KW, Wo JY, Tam PC, Yeung WS, Luk JM. Blockage of testicular connexins induced apoptosis in rat seminiferous epithelium. Apoptosis 2006;11:1215-29.
30. Kidder GM, Cyr DG. Roles of connexins in testis development and spermatogenesis. Semin Cell Dev Biol 2016;50:22-30.
31. Wang L, Fu Y, Peng J, Wu D, Yu M, Xu C, et al. Simvastatin-induced up-regulation of gap junctions composed of connexin 43 sensitize Leydig tumor cells to etoposide: An involvement of PKC pathway. Toxicology 2013;312:149-57.
32. Su Y, Li J, Witkiewicz AK, Brennan D, Neill T, Talarico J, et al. N-cadherin haploinsufficiency increases survival in a mouse model of pancreatic cancer. Oncogene 2012;31:4484-9.
33. Domke LM, Rickelt S, Dörflinger Y, Kuhn C, Winter-Simanowski S, Zimbelmann R, et al. The cell-cell junctions of mammalian testes: I. The adhering junctions of the seminiferous epithelium represent special differentiation structures. Cell Tissue Res 2014;357:645-65.
34. Bremmer F, Schweyer S, Martin-Ortega M, Hemmerlein B, Strauss A, Radzun HJ, Behnes CL. Switch of cadherin expression as a diagnostic tool for Leydig cell tumours. APMIS 2013;121:976-81.
35. Roy S, Metya SK, Rahaman N, Sannigrahi S, Ahmed F. Ferulic acid in the treatment of post-diabetes testicular damage: Relevance to the down regulation of apoptosis correlates with antioxidant status via modulation of TGF-β1, IL-1β and Akt signalling. Cell Biochem Funct 2014;32:115-24.
36. Halliwell B, Gutteridge JM, editors. Free Radicals in Biology and Medicine. 4th ed. New York: Clarendon Press; 2007.
37. Park JS, Han K. The spermatogenic effect of yacon extract and its constituents and their inhibition effect of testosterone metabolism. Biomol Ther (Seoul) 2013;21:153-60.
38. Abdallah FB, Fetoui H, Zribi N, Fakhfakh F, Keskes L. Protective role of caffeic acid on lambda cyhalothrin-induced changes in sperm characteristics and testicular oxidative damage in rats. Toxicol Ind Health 2012;28:639-47.
39. Na HK, Wilson MR, Kang KS, Chang CC, Grunberger D, Trosko JE. Restoration of gap junctional intercellular communication by caffeic acid phenethyl ester (CAPE) in a ras-transformed rat liver epithelial cell line. Cancer Lett 2000;157:31-8.
40. Chen YJ, Shiao MS, Wang SY. The antioxidant caffeic acid phenethyl ester induces apoptosis associated with selective scavenging of hydrogen peroxide in human leukemic HL-60 cells. Anticancer Drugs 2001;12:143-9.
41. Yu BB, Dong SY, Yu ML, Jiang GJ, Ji J, Tong XH. Total flavonoids of Litsea coreana enhance the cytotoxicity of oxaliplatin by increasing gap junction intercellular communication. Biol Pharm Bull 2014;37:1315-22.

Authors

Grasiela DC Severi-Aguiar obtained her Ph.D.
degree in 2001 from Universidade Estadual Paulista (UNESP) in Brazil. Currently, she is a Collaborating Researcher in the Department of Structural and Functional Biology, Biology Institute, at the State University of Campinas, State of São Paulo, Brazil. Dr. Severi-Aguiar works on understanding the effects and mechanisms of action of natural products and environmental pollutants. She has experience in hepatic morphology and the male reproductive system, as well as in the evaluation of genotoxicity.
Who is the Most Empathetic of All? People tend to think of massage therapists, in general, as empathetic. According to a new study of more than 75,000 adults, middle-aged women—the demographic into which most massage therapists fall—are more empathic than men of the same age, and than both younger and older people. However, middle-aged men are also more empathetic than younger people. “Overall, late middle-aged adults were higher in both of the aspects of empathy that we measured,” says the University of Michigan’s Sara Konrath, co-author of an article on age and empathy forthcoming in the Journals of Gerontology: Psychological and Social Sciences. “They reported that they were more likely to react emotionally to the experiences of others, and they were also more likely to try to understand how things looked from the perspective of others.” “Given the fundamental role of empathy in everyday social life and its relationship to many important social activities such as volunteering and donating to charities, it’s important to learn as much as we can about what factors increase and decrease empathic responding,” says Konrath. Earlier research by O’Brien, Konrath and colleagues found declines in empathy and higher levels of narcissism among young people today as compared to earlier generations of young adults.
Hot Laptops: Engineers Aim To Solve 'Burning' Computer Problem "Laptops are very hot now, so hot that they are not 'lap' tops anymore," says Avik Ghosh, an assistant professor of computer and electrical engineering at the University of Virginia. "If we continue at our current pace of miniaturization, these devices will be as hot as the sun in 10 to 20 years." Ghosh is seeking ways to reduce the heat of smaller and faster computers. Avik Ghosh and Mircea Stan. Credit: Photo by Jane Haley If you've balanced a laptop computer on your lap lately, you probably noticed a burning sensation. That's because ever-increasing processing speeds are creating more and more heat, which has to go somewhere — in this case, into your lap. Two researchers at the University of Virginia's School of Engineering and Applied Science aim to lay the scientific groundwork that will solve the problem using nanoelectronics, considered the essential science for powering the next generation of computers. "Laptops are very hot now, so hot that they are not 'lap' tops anymore," said Avik Ghosh, an assistant professor in the Charles L. Brown Department of Electrical and Computer Engineering. "The prediction is that if we continue at our current pace of miniaturization, these devices will be as hot as the sun in 10 to 20 years." To head off this problem, Ghosh and Mircea Stan, also a professor in the department, are re-examining nothing less than the Second Law of Thermodynamics. The law states that, left to itself, heat will transfer from a hotter unit to a cooler one — in this case between electrical computer components — until both have roughly the same temperature, a state called "thermal equilibrium." The possibility of breaking the law will require Ghosh and Stan to solve a scientifically controversial — and theoretical — conundrum known as "Maxwell's Demon."
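The textbook behavior the Second Law describes, heat flowing from the hotter component to the cooler one until the temperatures converge, can be sketched numerically. This is a toy illustration only; the rate constant and starting temperatures are invented for the example, and it is not the researchers' model:

```python
# Toy sketch of the Second Law's equilibration: two chip components exchange
# heat at a rate proportional to their temperature difference, so the
# difference shrinks each step and both settle at the same temperature.
# The rate constant k and the temperatures are invented for illustration.
def equilibrate(t_hot, t_cold, k=0.1, steps=200):
    for _ in range(steps):
        flow = k * (t_hot - t_cold)  # heat flows from hotter to cooler
        t_hot -= flow
        t_cold += flow
    return t_hot, t_cold

hot, cold = equilibrate(90.0, 30.0)
print(round(hot, 1), round(cold, 1))  # both settle near 60.0
```

Maxwell's Demon, by contrast, imagines sorting that energy transfer so one unit stays cool while the other absorbs the heat, which is exactly what this simple relaxation rules out.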
Introduced by Scottish physicist James Clerk Maxwell in 1871, the concept theorizes that the energy flow from hot to cold could be disrupted if there were a way to control the transfer of energy between two units. Maxwell's Demon would allow one component to take the heat while the other worked at a lower temperature. This could be accomplished only if the degree of natural disorder, or entropy, were reduced. And that's the "demon" in Maxwell's Demon. "Device engineering is typically based on operating near thermal equilibrium," Ghosh said. But, he added, nature has examples of biological cells that operate outside thermal equilibrium. "Chlorophyll, for example, can convert photons into energy in highly efficient ways that seem to violate traditional thermodynamic expectations," he said. A closely related concept, Brownian "ratchets," will also be explored. This concept proposes that devices could be engineered to convert non-equilibrium electrical activity into directed motion, allowing energy to be harvested from a heat source. If computers could be made with components that operate outside thermal equilibrium, it could mean better computer performance. Basically, your laptop wouldn't burst into flames as it processes larger amounts of information at faster speeds. Also, because it would operate at extremely low power levels and would have the ability to harness, or scavenge, power dissipated by other functions, battery life would increase. Combining Ghosh's command of physics with Stan's expertise in electrical engineering, the two hope to carry Maxwell's Demon and Brownian ratchets from theoretical physics to engineered technologies. "These theories have been looked at from a physics perspective for years, but not from the perspective of electrical engineering," Stan said. "So that's where we are trying to break some ground." The above story is based on materials provided by University of Virginia.
The original article was written by Zak Richards. Note: Materials may be edited for content and length. University of Virginia. "Hot Laptops: Engineers Aim To Solve 'Burning' Computer Problem." ScienceDaily, 1 October 2008. <www.sciencedaily.com/releases/2008/09/080929144120.htm>.
Prof. Yogesh Gianchandani Elected Fellow of the IEEE Prof. Yogesh B. Gianchandani has been named an IEEE Fellow, Class of 2010, “for contributions to silicon-based microactuators and on-chip microplasmas.” Prof. Gianchandani leads a world-class research group in microelectromechanical systems (MEMS). His research interests include all aspects of design, fabrication, and packaging of micromachined sensors and actuators and their interface circuits. He has blended work in materials, processes, and device structures, exploiting thermal effects and microplasmas created at the chip level to form a wide variety of new devices, including biomedical devices (active cardiovascular stents, biliary stents, biopsy needles), microGeiger counters, very high-temperature pressure sensors, Knudsen pumps, and microfluidic devices. He was a Chief Co-Editor of the book Comprehensive Microsystems: Fundamentals, Technology, and Applications, published in 2008. He serves several journals as an editor or a member of the editorial board, and served as a General Co-Chair for the IEEE/ASME International Conference on Micro Electro Mechanical Systems (MEMS) in 2002. From 2007 to 2009, he served at the National Science Foundation as the program director for Micro and Nano Systems within the Electrical, Communication, and Cyber Systems Division (ECCS). He has published approximately 200 papers in journals and conferences, and has 22 U.S. patents issued and at least 10 pending. Prof. Gianchandani received his Ph.D. degree in electrical engineering from The University of Michigan. He is deputy director for the Center for Wireless Integrated Microsystems (WIMS), a member of the Michigan Institute for Plasma Science and Engineering (MIPSE), and he holds a courtesy appointment in the Department of Mechanical Engineering. 
**************************************************************************** More about IEEE Fellows: According to IEEE, "the grade of Fellow recognizes unusual distinction in the profession and shall be conferred by the Board of Directors upon a person with an extraordinary record of accomplishments in any of the IEEE fields of interest. The accomplishments that are being honored shall have contributed importantly to the advancement or application of engineering, science and technology, bringing the realization of significant value to society." Posted: February 1, 2010 by Catharine June EECS/ECE Communications Coordinator cmsj@umich.edu or 734-936-2965
Nicola Critchley takes key role on Civil Justice Council A high honour has been bestowed on an insurance lawyer by the Lord Chancellor. Nicola Critchley, head of costs at specialist insurance industry law firm Horwich Farrelly, has been appointed to the Civil Justice Council (CJC). The remit of the role, which runs for a three-year term, is to represent the specific interests of the insurance industry. The CJC is an advisory public body whose function is to oversee the civil justice system and to recommend reforms to government. The 19 members, who include judges, legal practitioners, civil servants and consumer affairs experts, meet regularly to discuss matters referred to the CJC in order to advise the Lord Chancellor, the judiciary and the Civil Procedure Rule committee. Critchley’s career to date, spanning more than two decades, has been spent entirely within the insurance legal services sector. A senior partner at Horwich Farrelly for the last 14 years, she has worked with and advised a wide variety of organisations, insurers and clients on a broad spectrum of subjects from unreasonable medical fees to ‘After the Event’ insurance premiums. She founded Horwich Farrelly’s dedicated costs department in 2001 which has grown to become the largest dedicated team of its type in the insurance legal services sector, having dealt with over 525,000 costs matters in the past 10 years. Commenting on the announcement she said: “I am delighted and honoured to be appointed to the CJC and look forward to using my expertise to benefit the civil justice system as a whole. I believe my extensive experience in costs will be a great asset to the Council, providing a unique insight into the issues facing the industry. 
For example, how the civil justice reforms of the past 15 years have seen costs disputes emerge as the predominant arena for both defendants and claimants to seek rule clarification and change.”
Title: Structural biology of Wnt signalling through LDL receptor-related proteins Author: Chen, Shou Awarding Body: Oxford University Cell-cell communication involving the Wnt family of secreted signalling molecules is fundamental to animal development and homeostasis, whilst dysregulation of Wnt signalling causes many human diseases including cancer and osteoporosis. Low-density lipoprotein (LDL) receptor-related protein 6 (LRP6) co-operates with members of the Frizzled family of seven-pass transmembrane proteins to transduce Wnt signalling across the cell membrane. This thesis first reviews the Wnt signalling pathway from a molecular perspective, and then describes the methods employed in this doctoral work. Next, structural and functional studies of the LRP6 extracellular domain that reveal a cell surface platform for Wnt signalling are reported. Finally, this thesis presents results preliminary to the structural characterisation of the Dickkopf (Dkk) family of secreted Wnt modulators and their complexes with LRP receptors. The LRP6 ectodomain comprises four tandem six-bladed Tyr-Trp-Thr-Asp β-propeller-epidermal growth factor-like domain (PE) pairs, which harbour binding sites for Wnt morphogens and their antagonists including Dkk1. To understand how these multiple interactions are integrated, crystallographic analysis of the third and fourth PE pairs was combined with electron microscopy to determine the complete ectodomain structure. An extensive inter-pair interface, conserved for the first-to-second and third-to-fourth PE interactions, contributes to a compact platform-like architecture, which is disrupted by mutations implicated in developmental diseases. Electron microscopy reconstruction of the LRP6 platform bound to chaperone Mesd (mesoderm development) exemplifies a binding mode spanning PE pairs.
Cellular and binding assays identify overlapping Wnt3a- and Dkk1-binding surfaces on the third PE pair, consistent with steric competition, but also suggest a model in which the platform structure supports interplay of ligands through multiple interaction sites. The major discoveries of this work have been published as a research article in the journal Developmental Cell.
Background: Equestrian sport carries with it an implicit risk of injury. Despite the frequency of injuries in equestrian sports, there is no published study on injuries of equestrian athletes in Malaysia. Objective: The objective of this study was to determine the prevalence of injuries and its correlates among horseback riders. Subjects and Methods: A web-based standardized questionnaire was used to collect data for this cross-sectional survey. Horseback riders aged 18 years and above were included in the study. Out of 169 participants, 93 were females and 76 were males. The correlation of injuries to gender, age, level of experience, exercise habits, use of safety measures, and type of equestrian sport was determined. A chi-square test was performed to test for statistical significance. Results: The prevalence was high, with 85.8% of the participants reporting symptoms and characteristics of injuries in the past 12 months. The most frequently perceived symptoms reported were in the upper extremities (43.4%), followed by lower extremities (40.7%), head injury (8.3%) and injuries of the upper and lower back (3.4%). There was a higher prevalence of injury among female participants (55.03%) than males (42.60%). A significant correlation was found between gender and prevalence of injuries. About 70% of the riders sustained soft tissue injuries. Fifty-five percent of the injured were involved in recreational riding. The most common mechanism of injury was a fall from a horse. Sixty percent of the injured riders did not seek medical attention after being injured, and physiotherapy consultation was even lower, at 10.3%. Conclusions: The high prevalence of injuries and low rate of medical consultation emphasize the need for education programs on safety in Malaysia.
Sessions should be held to improve coaching for riders and instructors, and their knowledge of the nature of the horse, mechanisms of injuries, horse handling, and riding skills to help them host safe equestrian activities.
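The gender-injury association reported above was assessed with a chi-square test of independence. As a minimal sketch of how such a test works on a 2x2 table (the counts below are hypothetical round numbers chosen for illustration; the abstract does not give the raw cell counts):

```python
# Chi-square test of independence on a HYPOTHETICAL 2x2 contingency table
# (gender x injured), illustrating the kind of test the study reports.
table = [
    [80, 13],  # female: injured, not injured (illustrative counts)
    [65, 11],  # male:   injured, not injured (illustrative counts)
]

row_totals = [sum(row) for row in table]            # per-gender totals
col_totals = [sum(col) for col in zip(*table)]      # injured / not totals
n = sum(row_totals)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected assumes gender and injury are independent.
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed - expected) ** 2 / expected

print(f"chi-square statistic: {chi2:.4f}")
```

With these made-up counts the statistic is tiny (far from significance at 1 degree of freedom); the significant result the study reports would come from its actual data.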
Cultural centers to report to Academic Diversity, Equity and Inclusion Vernese Edghill-Walden As part of efforts to create a campus climate that better supports and promotes diversity, equity and inclusion, the university’s cultural resource centers, as well as the Office of Diversity and Equity will begin reporting to the Office of Academic Diversity, Equity and Inclusion within the Division of Academic Affairs. The changes were made in response to recommendations that emerged from both the Program Prioritization Administrative Task Force and the Diversity and Inclusion Task Force. Both groups recognized that such an alignment would allow for better use of resources as the university strives to realize its vision for a more diverse and inclusive learning and working environment. “The new structure will bring even greater strategy and cohesion to the operation of these important entities,” said NIU Chief Diversity Officer Vernese Edghill-Walden who will oversee the centers. “This will allow these units to work in a more integrated fashion toward a single vision for diversity, equity and inclusion at NIU. It will help eliminate silos and encourage intentional strategies that can advance that work across all levels and experiences.” The changes, which will officially be implemented Jan. 1, 2017, will affect the Asian American Center, Latino Resource Center, and the Gender and Sexuality Resource Center. Currently, those centers, as well as the Office of Diversity and Equity, are housed within the Division of Student Affairs and Enrollment Management. The new alignment fits well with the efforts being made by the Office of Academic Diversity, Equity and Inclusion to implement its comprehensive plan. To date, that work has included the creation of the on-going Diversity Dialogues series, the development of the Human Diversity requirement and a number of other academic equity initiatives. 
As the realignment takes place, one of the key players in making it happen will be leaving. Katrina Caldwell, who currently serves as assistant vice president for Diversity and Equity, will be assuming the position of vice chancellor for diversity and community engagement, and chief diversity officer, at the University of Mississippi, starting in January. “Katrina’s passion, expertise and student-centered insights on issues of equity and inclusion have helped shape this new organizational structure to move NIU forward,” said Edghill-Walden. “While we are deeply saddened to see her go, she has earned this new position and our full support, and we wish her well.”
How pterosaurs became airborne Scleromochlus reconstruction by Jaime A. Headden While it’s easy to generalise birds, bats and pterosaurs as the result of very lucky gliders, the truth is that we know very little about how these animals became airborne. Aerial dinosaurs are notoriously subjected to intense debate, especially considering that forms like Microraptor are surprisingly unsuited for both climbing and WAIR, but birds are hardly the only mysteries. Bats, often credited as derived from gliding mammals, might actually be the result of a highly unusual cave dwelling lifestyle, though predictably this is highly criticised as well. In both birds and bats, therefore, we can rule out origins from tree-climbing, gliding ancestors, the usual models for flying animal ancestors. So, in that light, how did pterosaurs, the other group of flying vertebrates, become airborne? What we currently know Jeholopterus by Mark Witton. Animals like these are among the most basal known pterosaurs. More controversial than anything else about them is where pterosaurs fit in the tree of life. The most valid suggestion, which is currently the “official” one, is that pterosaurs evolved from archosaurs closely related to dinosaurs, and that both Pterosauromorphs and Dinosauromorphs form the clade Ornithodira, or “bird necks”. And within Ornithodira, pterosaurs are often considered to be closely related to the archosaur Scleromochlus, a little sauropsid that is pretty much the archosaurian answer to Sharovipteryx. Even then, this is not certain; some consider Scleromochlus to be a more “primitive” ornithodiran, or even outside of the clade altogether. The fact that it has long hindlimbs and short forelimbs, as opposed to the long forelimbs and short hindlimbs of pterosaurs, doesn’t help matters, though it’s worth noting that gliding squirrels do have more developed forelimbs than other squirrels, so such a reversal is not unheard of.
Within Pterosauria itself, the most basal taxa don’t help matters. Currently, the most basal known pterosaurs are anurognathids. Anurognathids were highly different from the stereotypical pterosaur: with very short and wide toad-like jaws, massive eyes and short, broad wings covered in specialised pycnofibers, these were a far cry from the elegant, diurnal, colourful pterodactyloids of the Cretaceous, being basically the closest a sauropsid ever came to a bat. More importantly, however, they provide clues as to how pterosaurs evolved. Anurognathid morphology is consistent with that of arboreal, nocturnal/crepuscular insectivores. Their short and wide snouts were highly reminiscent of not just amphibian mouths, but also of the beaks of nightjars, swifts and swallows, all acrobatic aerial predators of moths and other insects. Their wings were proportionally short by pterosaur standards, being quite adequate for flying among dense forests, and the wing membranes were bordered by long pycnofibers similar to the barbs of silent-flight birds like owls, indicating that these animals had quite silent wing beats. It’s very likely that they were ambush predators, relying on the cover of darkness and their silent wings to stalk their prey undetected, and likely to avoid potential predators like arboreal mammals and, later on, larger pterosaurs. We also know that they had unique crests on their arm bones, implying that these were also among the few known pterosaurs capable of hovering. Like all non-pterodactyloid pterosaurs, anurognathids were also highly adapted to climbing. Their claws were robust and curved, impairing movement on flat surfaces, but allowing efficient climbing. From what we know of their wing membranes, they were more extensive on the legs than in more derived pterosaurs, also making running cumbersome. These were, therefore, highly arboreal animals, presumably avoiding the ground as much as possible.
Therefore, based on Anurognathidae, we can speculate that the first pterosaurs were nocturnal, arboreal gliders/flutterers. The tree tops of the Triassic had a menagerie of diurnal arboreal reptiles like drepanosaurs, and back then there was virtually no competition for the numbers of flying insects. With competition on the ground and available prey in the air, flight was an inevitability. There are modern analogues!? Ptychozoon gecko gliding. Most surprising is that there is an echo of the old ornithodires that produced pterosaurs, in the dense tropical rainforests of the Holocene. Gliding or flying geckos are curious squamates distributed across four genera. The most famous, and ostensibly the most aerial, are the Ptychozoon geckos from Southeast Asia, but there is also the closely related Luperosaurus and the distantly related Thecadactylus. All these gekkotans are nocturnal, arboreal animals just like most geckos, but thanks to the extensive flaps of skin that stretch along the limbs, in the flanks and between the toes, they can glide efficiently. As said before, Ptychozoon, with the most extensive membranes and wider tail, is the most aerial of these geckos; they are frequently captured in bat nets, suggesting that they engage in aerial locomotion frequently, and perhaps even capture aerial prey. Most interestingly, unlike other gliding squamates, which rely on ribs to support the wing, the flank membranes of these geckos are unsupported. Indeed, these animals rely a lot on the webbed toes to form the wing surface; such a reliance on the limbs, combined with already present flank membranes, makes the evolution of powered flight the logical conclusion, and the ancestors of pterosaurs likely followed the exact same path, especially when marks of membranes between the pterosaur clawed fingers have been found. The evolution of “new pterosaurs” from Ptychozoon has sadly been postponed, as bats and nocturnal birds rule the skies of their environment.
However, long ago, in the Triassic, there were no flying dinosaurs or mammals, and as such animals very similar to Ptychozoon were free to further develop their wings. (EDIT: Before proceeding with the comments’ section, read Darren Naish’s post on the phylogenetic controversies raised by David Peters) Elijah Shandseight permalink November 24, 2012 11:25 am Very interesting post. Exactly, why did you choose anurognathids for looking to the ancestry of Pterosauria? I mean it’s true that Kellner classified them as one of the most basal groups of pterosaurs, but they could also be the sister taxa of Pterodactyloidea (as in Andres et al., 2010). Would species such as Dimorphodon and Preondactylus be a better analogue? Reply gwawinapterus permalink* November 24, 2012 2:44 pm Not really. Andres’ study is not taken very seriously, as it was more of an attempt to justify not making them a ghost lineage. Dimorphodontids might be more basal, but the most recent cladograms I’ve seen make anurognathids the most basal pterosaurs, with dimorphodontids sister taxa to Campylognathoidea. At any rate, while not as … aberrant, as anurognathids, dimorphodontids might have led a similar lifestyle, also having extreme arboreal adaptations and wide mouths. Of course, Dimorphodon itself was more of a pine-marten analogue. Reply Elijah Shandseight permalink November 25, 2012 9:42 am Thanks for the answer, very clear 😉 davidpeters1954 permalink March 4, 2013 12:30 pm Why not look to the ancestry of pterosaurs in Sharovipteryx and kin, a taxon you briefly mentioned? If you’re looking for good evidence for the ancestry of pterosaurs you can find it at reptileevolution.com and pterosaurheresies.com.
Not sure why paleontologists, including yourself, are avoiding the tritosaurs, like Huehuecuetzpalli and Macrocnemus and the tritosaur fenestrasaurs, Langobardisaurus, Cosesaurus, Sharovipteryx and Longisquama as pterosaur ancestors when this is the only sequence of taxa with a gradual and increasing list of pterosaurian traits. Reply gwawinapterus permalink* March 4, 2013 1:28 pm Considering that Tritosauria is polyphyletic, considering that your bias towards Sharovipteryx ignores several recent morphological examinations, and considering that your character examination is dubious at best, I am not going to waste time arguing about that. Note, however, that Sharovipteryx could ostensibly have led a similar lifestyle to modern gliding geckos. It shares several adaptations for arboreality, and it was a quadruped as well. Reply davidpeters1954 permalink March 4, 2013 1:32 pm Polyphyletic? First I’ve heard. Please send refs. Sharovipteryx examinations? Please send refs. davidpeters@att.net gwawinapterus permalink* March 4, 2013 5:29 pm It’s been considered as such since the mid-2000s; it’s strange you’ve never heard of it before. If you must, see Modesto, 2004. As for Sharovipteryx, Darren Naish covers it pretty extensively. davidpeters1954 permalink March 4, 2013 5:39 pm URLs or titles please, if possible. Googling is not bringing these up. Tritosauria (literally the third clade of squamates) was a term invented by me within the last two years, so it is doubtful that Modesto 2004 could have described it as polyphyletic. gwawinapterus permalink* March 4, 2013 7:01 pm What you claim to be the bulk of Tritosauria, however, was pretty much rendered highly polyphyletic in that analysis. davidpeters1954 permalink March 4, 2013 7:17 pm What is the name of the taxon in Modesto 2004? That may help me to find it. davidpeters1954 permalink March 4, 2013 7:31 pm By any chance are you referring to Modesto and Sues 2004?
And the tree they found published online here: http://en.wikipedia.org/wiki/Protorosauria gwawinapterus permalink* March 4, 2013 7:45 pm Yes, pretty much. davidpeters1954 permalink March 4, 2013 7:56 pm Good! Then our problems are solved. The large reptile tree at http://www.reptileevolution.com/reptile-tree.htm includes a magnitude more taxa providing that many more opportunities for taxa to nest. Given those opportunities the tree topology is different and better resolved without suprageneric taxa, like “squamates.” Also missing from the Modesto and Sues list are the more derived sphenodontids that would nest with Trilophosaurus and rhynchosaurs. More taxa document the convergence between tanystropheids and protorosaurs, which is quite remarkable, but still a case of convergence. This is a test that anyone can duplicate, like a good science experiment. I encourage you to test the large reptile tree any way you want to. It’s quite robust. I will post on this topic in the next few days at pterosaurheresies.com. Thanks for the impetus.
Acid rain has a disproportionate impact on coastal waters Sep 08, 2007 Maps depict the model-estimated atmospheric deposition rates of carbon, nitrogen and sulfur; alkalinity; and potential alkalinity to the ocean caused by human activity relative to conditions before the Industrial Age began. Credit: Scott Doney, WHOI, et al, from Proceedings of the National Academy of Sciences The release of sulfur and nitrogen into the atmosphere by power plants and agricultural activities plays a minor role in making the ocean more acidic on a global scale, but the impact is greatly amplified in the shallower waters of the coastal ocean, according to new research by atmospheric and marine chemists. Ocean “acidification” occurs when chemical compounds such as carbon dioxide, sulfur, or nitrogen mix with seawater, a process which lowers the pH and reduces the storage of carbon. Ocean acidification hampers the ability of marine organisms—such as sea urchins, corals, and certain types of plankton—to harness calcium carbonate for making hard outer shells or “exoskeletons.” These organisms provide essential food and habitat to other species, so their demise could affect entire ocean ecosystems. The findings were published this week in the online “early edition” of the Proceedings of the National Academy of Sciences; a printed version will be issued later this month. “Acid rain isn’t just a problem of the land; it’s also affecting the ocean,” said Scott Doney, lead author of the study and a senior scientist in the Department of Marine Chemistry and Geochemistry at the Woods Hole Oceanographic Institution (WHOI). 
“That effect is most pronounced near the coasts, which are already some of the most heavily affected and vulnerable parts of the ocean due to pollution, over-fishing, and climate change.” In addition to acidification, excess nitrogen inputs from the atmosphere promote increased growth of phytoplankton and other marine plants which, in turn, may cause more frequent harmful algal blooms and eutrophication (the creation of oxygen-depleted “dead zones”) in some parts of the ocean. Doney collaborated on the project with Natalie Mahowald, Jean-Francois Lamarque, and Phil Rasch of the National Center for Atmospheric Research, Richard Feely of the Pacific Marine Environmental Laboratory, Fred Mackenzie of the University of Hawaii, and Ivan Lima of the WHOI Marine Chemistry and Geochemistry Department. “Most studies have traditionally focused only on fossil fuel emissions and the role of carbon dioxide in ocean acidification, which is certainly the dominant issue,” Doney said. “But no one has really addressed the role of acid rain and nitrogen.” The research team compiled and analyzed many publicly available data sets on fossil fuel emissions, agricultural, and other atmospheric emissions. They built theoretical and computational models of the ocean and atmosphere to simulate where the nitrogen and sulfur emissions were likely to have the most impact. They also compared their model results with field observations made by other scientists in the coastal waters around the United States. Farming, livestock husbandry, and the combustion of fossil fuels cause excess sulfur dioxide, ammonia, and nitrogen oxides to be released to the atmosphere, where they are transformed into nitric acid and sulfuric acid. Though much of that acid is deposited on land (since it does not remain in the air for long), some of it can be carried in the air all the way to the coastal ocean. 
When nitrogen and sulfur compounds from the atmosphere mix into coastal waters, the researchers found, the change in water chemistry can be as much as 10 to 50 percent of the total change caused by acidification from carbon dioxide. This rain of chemicals changes the chemistry of seawater, with the increase in acidic compounds lowering the pH of the water while reducing the capacity of the upper ocean to store carbon. The most heavily affected areas tend to be downwind of power plants (particularly coal-fired plants), predominantly on the eastern edges of North America and Europe and to the south and east of Asia. Seawater is slightly basic (pH usually between 7.5 and 8.4), but the ocean surface is already 0.1 pH units lower than it was before the Industrial Revolution. Previous research by Doney and others has suggested that the ocean will become another 0.3 to 0.4 pH units lower by the end of the century, which translates to a 100 to 150 percent increase in acidity. Source: Woods Hole Oceanographic Institution
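The percentage figures in the projection above follow from pH being a base-10 logarithmic scale: a drop of d pH units multiplies the hydrogen-ion concentration by 10^d. A minimal sketch of that arithmetic in Python (the function name is illustrative, not from the study):

```python
def acidity_increase_percent(ph_drop: float) -> float:
    """Percent increase in hydrogen-ion concentration for a given pH decrease.

    pH is -log10 of H+ concentration, so a decrease of ph_drop units
    multiplies the concentration by 10**ph_drop.
    """
    return (10 ** ph_drop - 1) * 100

# The end-of-century declines cited in the article:
for drop in (0.3, 0.4):
    print(f"pH drop of {drop}: ~{acidity_increase_percent(drop):.0f}% more acidic")
```

Running this gives roughly 100% for a 0.3-unit drop and 151% for a 0.4-unit drop, consistent with the "100 to 150 percent increase in acidity" stated above.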
Chroma: An Ode to J.D. ‘Okhai Ojeikere Taking inspiration from an iconic Nigerian photographer, this playful series pays tribute to the rich history of local hair culture. Photographs and text by Medina Dugger “Chroma: An Ode to J.D. ‘Okhai Ojeikere” is an ongoing series which celebrates women’s hair styles in Lagos, Nigeria through a fanciful, contemporary lens. The images are inspired by hair color trends and by the late Nigerian photographer J.D. ‘Okhai Ojeikere, who photographed over a thousand different hair styles in his lifetime. © J.D. ‘Okhai Ojeikere, courtesy of Gallery Fifty One Ojeikere’s approach was documentary in nature, as he took inventory of hundreds of hairstyles and amassed an enormous index spanning over 40 years. He began photographing hairstyles in black-and-white, following the re-emergence of traditional hairstyles which became popular again following Nigeria’s independence. Prior to de-colonization, wigs and hair straightening had become a commonplace practice, especially in urban areas of the country. African hair-braiding methods date back thousands of years, and Nigerian hair culture is a rich and often extensive process which begins in childhood. The methods and variations have been influenced by social and cultural patterns, historical events and globalization. Hairdos range from being purely decorative to conveying deeper, more symbolic meanings, revealing social status and age as well as tribal and family traditions. The availability of colorful hair extensions and wools in local markets today has led to unique variations on threading and braiding techniques. ”Chroma” is a celebration of traditional and contemporary braiding methods. The series takes more of a conceptual approach to Ojeikere’s documentary style and recontextualizes some of Ojeikere’s (and other) hairstyles to highlight current and imagined hair designs, celebrating the art of Nigerian hair culture. 
—Medina Dugger Editors’ note: Dugger’s project was the Open series winner in the Magnum Photography Awards 2017! Discover more inspiring work from all 41 of the winners, finalists, and jurors’ picks. If you enjoyed this article, you might like these previous features: Natural Red Hair, a series by a Dutch photographer who is captivated by natural redheads; Pray LA, Joe Pugliese’s project featuring the well-dressed members of a Baptist church in Los Angeles; and A Modern Hair Study, portraits of young women that recall nineteenth-century images of the same subject. Magenta. © Medina Dugger Blue on Blue Thread. © Medina Dugger Purple Kinky Calabar. © Medina Dugger Maroon. © Medina Dugger Orange Koroba with Pony © Medina Dugger Royal Blue. © Medina Dugger Green. © Medina Dugger Jade Double Patewo © Medina Dugger Turquoise. © Medina Dugger Pink Didi with Cowrey Shells © Medina Dugger Red. © Medina Dugger Silver Calabar © Medina Dugger Double Joint Yellow © Medina Dugger
The Importance of Early Identification and Early Literacy Instruction Signs of dyslexia often go undetected and unrecognized in traditional academic settings. Left unidentified, students with dyslexia find reading and other language-related skills difficult and, as a result, become increasingly frustrated through their years of school. Underachievement, diminishing self-esteem, a lack of confidence, and a loss of love for learning become evident as the struggling student remains in a traditional academic setting. It is critical that both teachers and parents have an awareness of dyslexia and can work to identify presenting characteristics in the young child before the child begins to experience repeated failure and frustration. Academic Achievement Directly Linked to Early Literacy Learning Early identification and intervention for dyslexia and other language learning differences are key to the future academic success and self-esteem of the child. Highly capable teachers and an intensive, individualized program are important components of successful early intervention. The failure to acquire fundamental early literacy skills reduces the opportunities for children to develop reading comprehension strategies, vocabulary, and an appreciation and love for reading. The quality of early literacy learning in a child’s education is directly linked to later academic achievement. The following abilities predict a child's future reading achievement:
Oral Language (listening comprehension, vocabulary, verbal formulation)
Phonological Awareness (the ability to discriminate sounds in words)
Rapid Automatic Naming (the ability to quickly and accurately retrieve labels)
Phonological awareness is a fundamental area of language development and can be easily assessed at an early age. A weakness in phonological awareness is a clear sign of a child having difficulty acquiring early reading skills.
See our Resources for Parents section that highlights more signs of dyslexia. Early Literacy Program For Five and Six Year Olds A child’s future success in school and later in life depends on the literacy skills they develop and acquire in the early years. The development of these literacy skills is a lifelong process, but it is critical that the child in the early years is provided with a rich language program delivered by specially trained teachers using proven methods. Providing children with a language-rich early literacy program is critical to later achievement in school and beyond. Recognizing the compelling research that highlights the importance of early identification of students' needs and the potency of early intervention, The Odyssey School's Kayak program is designed to help young children who are at risk for reading difficulties. The focus of the Kayak class is on developing important literacy skills and a love of learning in five and six year old children who would be challenged in a more traditional classroom. Highly trained teachers implement proven methodologies to develop early literacy skills in young children. The small class size affords children individualized and specialized instruction that cultivates early literacy, language growth, and a love for learning. FREE Early Identification Screening (ages 4 - 8) Schedule a free reading screening today
Apolipoprotein E Polymorphism and Brain Morphology in Mild Cognitive Impairment Thomann P.A.a · Roth A.-S.a · Dos Santos V.a · Toro P.a · Essig M.b · Schröder J.a aSection of Geriatric Psychiatry, University of Heidelberg, and bGerman Cancer Research Center, Heidelberg, Germany Dement Geriatr Cogn Disord 2008;26:300–305 (DOI:10.1159/000161054) Background: The apolipoprotein E (ApoE) genotype has been confirmed as the major genetic risk factor for late-onset Alzheimer’s disease (AD). How the ApoE genotype and brain morphology relate to each other is only partly understood, particularly in mild cognitive impairment, the assumed prestage of AD. Methods: A total of 83 subjects with mild cognitive impairment (aging-associated cognitive decline criteria) were investigated with optimized voxel-based morphometry (VBM). We tested for differences in gray and white matter densities between groups according to their ApoE status, i.e. ε4 allele noncarriers (n = 42), subjects with one ε4 allele (n = 27) and subjects with two ε4 alleles (n = 14). Results: In individuals carrying two ε4 alleles, VBM revealed a decline in gray matter density predominantly in the medial temporal lobe region. Subjects with a single copy of the ε4 allele exhibited gray matter atrophy in the right inferior frontal gyrus. With respect to white matter changes, atrophy was only found in subjects homozygous for ε4 and confined to the right superior and middle temporal gyrus. Conclusion: Our findings support the hypothesis that the ApoE genotype in mild cognitive impairment might be associated with structural changes typically found in the early stages of AD.
Bri Doughty: Proud to honor Newton's LGBT seniors June is LGBT (Lesbian, Gay, Bisexual, Transgender) Pride month. Across the world, people are celebrating their personal identities and the journey we as a community have endured against stigma and discrimination. In honor of this exciting time, I am delighted to introduce the Newton Department of Senior Services’ new initiative focused on older LGBT members of the Newton community. I have spent the last year at the Senior Center as an intern from Simmons School of Social Work. My hope was to enhance the already welcoming environment of the Center by beginning to understand the unique needs and interests of LGBT seniors and integrating them into the fabric of NDSS. As a young gay woman, I live in an area that not only accepts me for who I am, but has made strides to view me as equal to my heterosexual counterparts. My journey hasn’t been free of confusion and insecurity; however, I was fortunate to have never experienced fear. I don’t worry as I walk down the street holding my partner’s hand or feel uncomfortable talking openly about my sexuality with my doctor. My time here at the Center has allowed me to pay homage, not only to those who came before me, but also to those who have helped shape the community of which I proudly call myself a member. I have deep respect and admiration for those who fought for the liberties that I take advantage of every day and have made my life in the LGBT community an easy and pleasant one. The individuals who fought so hard in the past are facing new challenges today as they age. Last year, the Administration on Aging recognized LGBT older adults as a population of “greatest social need.” Not only have LGBT seniors become invisible to mainstream society, but they have disappeared within the LGBT community.
Here in Newton, where 22 percent of the population is over the age of 60, we recognize that there may be individuals who are underserved. In conjunction with The LGBT Aging Project, NDSS is working to make the Senior Center a more welcoming and accepting place for all older members of this community. Jayne Colino, Director of NDSS, is fully committed to this as an ongoing effort. “Newton is made up of many communities,” said Colino, “and we at the Senior Center want to ensure that all residents of Newton feel safe and welcomed here.” As we work to foster an inclusive community, the first step is to engage in an open dialogue. Through professional development to increase awareness for city staff and board members, our hope is to create a safe space for people to express themselves, learn new things and broaden their perspectives. We are working to empower a population to become more visible and walk more proudly in the community. Each step will hopefully lead to systemic change and greater social awareness. On June 19, as part of this effort, we are excited to offer a screening of Gen Silent, a documentary chronicling the lives of six LGBT seniors and the struggles they face in the Boston area. While this film focuses on challenges faced by specific individuals, their stories are powerful and moving and allow us all to reflect on our own role in society. By highlighting human elements, it reminds us of aging’s unique and universal challenges. Hopefully the film will plant a seed of change, create mindfulness of the diversity within our community, and explore the inherent similarities that we all share. Change doesn’t happen overnight. I stand with many others who are dedicated to helping pave the way and continue the legacy of Stonewall. The NDSS’ newly created welcoming statement, proudly hung in our entryway, is testament to this.
As a follow-up to this screening, we will host a Community Conversation on July 24 to find out how the needs of the older LGBT community can best be served by our department and city. With each interaction we hope to increase visibility, highlight areas for improvement and acknowledge the diversity of our community. All are welcome to join the NDSS on Wednesday, June 19 from 6 to 8 p.m. for a screening of Gen Silent. Lisa Krinsky, Director of The LGBT Aging Project, will participate in the discussion after the screening. Light refreshments will be served. Space is limited; please call 617-796-1670 to register. Bri Doughty is studying at the Simmons School of Social Work and interning at the Newton Senior Center.
Canadian HIV Testing and Prevention Guidelines Information By Erica Lee From Canadian AIDS Treatment Information Exchange If you're looking for resources to inform an evidence-based decision, guidelines are a key tool to turn to. Evidence-based guidelines provide recommendations based on a review of the research literature. Recommendations can also be developed from the consensus of experts in the field and the experience of practice-based evidence. Guidelines are issued by a variety of sources including governments and expert groups at the local, national and international levels. This article highlights HIV testing and prevention guidelines developed in Canada. This article is the first of three articles that will appear in Prevention in Focus in 2017 and 2018 on Canadian HIV and hepatitis C guidelines. While some guidelines are relevant nationally, others may have been developed to respond to a local context, but may still be informative for other regions. Testing guidelines address a range of factors to consider when delivering HIV testing. Recommendations commonly cover screening people for HIV testing, testing in specific populations, testing frequency and test counselling. Testing guidelines can also address legal and ethical issues such as confidentiality and disclosure, and technical issues such as test types and technologies. In Canada, the Public Health Agency of Canada has developed a guide with general recommendations for HIV testing that can be taken into consideration alongside any existing local or specialized practices. A number of provinces have also developed testing guidelines that reflect approaches and procedures specific to the province. 
Human immunodeficiency virus: screening and testing guide -- Public Health Agency of Canada
HIV testing guidelines for the province of British Columbia -- British Columbia Office of the Provincial Health Officer
Guidelines for HIV testing and counselling -- Ontario Ministry of Health and Long-Term Care
Ontario HIV testing frequency guidelines: guidance for counselors and health professionals -- Ontario Ministry of Health and Long-Term Care
Guide québécois de dépistage des infections transmissibles sexuellement et par le sang -- Ministry of Health and Social Services of Quebec
Saskatchewan HIV testing policy -- Saskatchewan HIV Provincial Leadership Team
Post-exposure prophylaxis guidelines can cover the use of PEP after an occupational exposure, non-occupational exposure, or both. They generally address infections, such as HIV, that may be transmitted by blood or other body fluids. The guidelines listed here have been developed by provincial and territorial governments. They provide guidance on assessing whether a potential exposure may require PEP, testing of the source or recipient of the exposure, and the drug regimens and procedures to use when administering PEP. The guidelines can also provide information on local resources or copies of specific forms used in the province or territory. A group of clinicians, researchers and community members are developing national guidelines for the prescribing of PEP and PrEP (pre-exposure prophylaxis) in Canada. These guidelines will provide evidence-informed guidance on how to assess patient eligibility for PEP and how to correctly prescribe it. The guidelines are expected to be released in early 2017.
Alberta guidelines for non-occupational, occupational and Mandatory Testing and Disclosure Act post-exposure management and prophylaxis: HIV, hepatitis B, hepatitis C and sexually transmitted infections -- Alberta Health
Blood and body fluid exposure management -- Yukon Health and Social Services
Guidelines for the management of exposures to blood and body fluids -- Government of Saskatchewan
Guidelines for the management of percutaneous or sexual exposure to bloodborne pathogens -- Department of Health and Wellness, Prince Edward Island
Guide pour la prophylaxie après une exposition au VIH, au VHB et au VHC dans un contexte non professionnel : Résumé -- Ministry of Health and Social Services of Quebec
Guide pour la prophylaxie postexposition (PPE) à des liquides biologiques dans le contexte du travail -- Ministry of Health and Social Services of Quebec
Integrated post-exposure protocol for HIV, HBV and HCV: guidelines for managing exposures to blood and body fluids -- Manitoba Health, Seniors and Active Living
The daily use of the antiretroviral drug Truvada, in combination with safer sex practices, for reducing the risk of the sexual transmission of HIV was approved in Canada in February 2016. Guidelines help inform the delivery of PrEP in clinical settings. PrEP guidelines can include recommendations on screening people for PrEP use, the prescription of PrEP, monitoring while on PrEP, and stopping PrEP. A group of clinicians, researchers and community members are developing national guidelines for the prescribing of PrEP and PEP (post-exposure prophylaxis) in Canada. These guidelines will provide evidence-informed guidance on how to assess patient eligibility for PrEP and how to correctly prescribe it. The guidelines are expected to be released in early 2017.
Guidance for the use of pre-exposure prophylaxis (PrEP) for the prevention of HIV acquisition in British Columbia -- British Columbia Centre for Excellence
Harm reduction programs help reduce the transmission of HIV and hepatitis C among people who use drugs. This resource provides guidance for harm reduction programs by examining the context and effectiveness of practices that facilitate safer drug use and offering recommendations for the delivery of harm reduction services. Topics covered include the distribution of safer drug use equipment, education and overdose prevention. Related concerns are also addressed, such as health and support service delivery and referrals for people who use drugs.
Best practice recommendations for Canadian harm reduction programs that provide service to people who use drugs and are at risk for HIV, HCV, and other harms -- Working Group on Best Practice for Harm Reduction Programs in Canada
Looking for additional guidelines? Check out Programming guides and tools in the Strengthening Programming section of the CATIE website for more Canadian and international guidelines on HIV, hepatitis C and related topics. And stay tuned for future Prevention in Focus articles on guidelines for HIV care and treatment and hepatitis C. Erica Lee is the Information and Evaluation Specialist at CATIE. Since earning her Master of Information Studies, Erica has worked in the health library field, supporting the information needs of frontline service providers and service users. Before joining CATIE, Erica worked as the Librarian at the AIDS Committee of Toronto (ACT). This article was provided by Canadian AIDS Treatment Information Exchange.
It is a part of the publication Prevention in Focus: Spotlight on Programming and Research. Visit CATIE's Web site to find out more about their activities, publications and services.
NIDILRR’s Model Systems programs in spinal cord injury (SCIMS), traumatic brain injury (TBIMS), and burn injury (BMS) provide coordinated systems of rehabilitation care and conduct research on recovery and long-term outcomes. In addition, these centers serve as platforms for collaborative, multi-site research, including research on interventions using randomized controlled approaches. The programs also track Model Systems patients over time in large databases. Eligible applicants are institutions of higher education, nonprofit organizations, and other organizations and/or agencies. View a list of NIDILRR funding opportunities and application kits. As a rough guideline, check the preceding link between October and April every year. View the Guide to Applying for some helpful application tips. The Model Systems Program Funding Mechanism accounted for roughly 18% of grant funding in FY 2015. This program funding mechanism consists of: 14 Spinal Cord Injury (SCI) Model System Centers and a national data center. The National SCI Database has been in existence since 1973 and captures data from an estimated 13% of SCI cases in the U.S. Since its inception, 28 federally funded SCIMS centers have contributed data to the National SCI Database. As of October 2015, the SCIMS centers had more than 44,280 individuals in the National Database. 16 Traumatic Brain Injury Model System Centers and a national data center. As of October 2015, the TBIMS centers had 14,406 persons enrolled in their national database. The TBIMS National Database includes a large-scale ongoing follow-up of individuals post-injury. 4 Burn Model System Centers and a national data center. As of October 2015, the Burn Model System centers had 5,762 persons enrolled in their national database. The BMS National Database is a prospective, longitudinal, multicenter research study that examines functional and psychosocial outcomes following burns for more than 3,000 adults and nearly 2,000 children.
Takeaway: Studies using these databases have provided an abundance of groundbreaking information over the decades on the social and environmental factors influencing the community living and participation of individuals affected by these injuries, best clinical practices for screening and treatment, physiological aspects of the conditions, and long-term outcomes. A Model Systems Knowledge Translation Center: The Model Systems Knowledge Translation Center (MSKTC) summarizes research, identifies health information needs, and develops information resources to support the Model Systems programs in meeting the needs of individuals with spinal cord injury (SCI), traumatic brain injury (TBI), and burn injury (Burn). Select Accomplishments for FY 2015 Study Points to Rehabilitation Interventions That Are Associated with Positive Outcomes for People with Traumatic Brain Injury Ohio State University (Grant #H133A080023) Ohio State University coordinated a journal supplement that reported the results of the first practice-based evidence study of TBI rehabilitation. The study enrolled 2,205 individuals with TBI who were receiving initial inpatient rehabilitation across 10 rehabilitation centers. The study has resulted in the richest dataset on TBI rehabilitation ever assembled. The journal supplement included 12 articles, beginning with an introductory article. The studies collectively identified the characteristics associated with patient outcomes, how clinical events mediated outcomes, and treatments used in response to various clinical problems. They further determined the best treatment options, as measured by superior outcomes, accounting for TBI severity and other factors. The analyses reported in the journal supplement are the beginning of an extensive analysis and reporting process. The findings will provide guidance in developing guidelines for clinical decision-making and other evidence-based practices. The citation for the Journal Supplement is: Horn, S. (Ed.). 
(2015). What works in inpatient traumatic brain injury rehabilitation? Results from the TBI-PBE study [Supplement]. Archives of Physical Medicine and Rehabilitation, 96(8), S173–S340. Instrument Proves Valuable for Assessing Function and Functional Change in Individuals with SCI Boston University (Grant #90SI5013) Assessing the impact of assistive technology (AT) on a person's function is a significant problem that must be addressed when developing a rehabilitation outcome measure. Researchers at Boston University’s SCIMS reported the success of the SCI-FI/AT (Spinal Cord Injury-Functional Index Computer/Assistive Technology) instrument. The SCI-FI/AT was developed specifically for use with individuals with SCI across all levels and extents of injury (with a sensitivity to changes in function), while also being practical for use in busy clinical settings. The SCI-FI focuses on providing a general measure of function for people with SCI while supporting the use of assistive technology. In doing so, the instrument provides a valuable approach for accurately measuring function and meaningful change in the way that individuals use assistive technology. The instrument allows clinicians to more accurately discuss functional capabilities with patients and their families. The utility of the instrument was demonstrated in a recent study: Jette, A. M., Slavin, M. D., Ni, P., Kisala, P. A., Tulsky, D. S., Heinemann, A. W., Charlifue, S., Tate, D. G., Fyffe, D., Morse, L., Marino, R., Smith, I., & Williams, S. (2015). Development and initial evaluation of the SCI-FI/AT. Journal of Spinal Cord Medicine, 38(3), 409–418. New Tool Measures Quality of Life in Persons with SCI Kessler Medical Rehabilitation Research and Education Corporation, Northern New Jersey Spinal Cord Injury System (Grant #H133N110020) Researchers at the Kessler Foundation and its Institute for Rehabilitation Research contributed to the development of the SCI Quality of Life instrument.
This tool is a measurement system developed to address the shortage of relevant and psychometrically sound patient-reported outcome measures available for clinical care and research into SCI. The system provides an innovative, psychometrically rigorous and highly useful method that will revolutionize the assessment of self-reported outcomes for persons with SCI. The system builds on the capacity of the rehabilitation research field by providing a valid and reliable means of assessing aspects of health-related quality of life—a key outcome measure for epidemiological and intervention studies of persons with SCI. The findings relating to this measurement system were published in: Tulsky, D. S., Kisala, P. A., Victorson, D., Tate, D. G., Heinemann, A. W., Charlifue, S., Kirshblum, S. C., Fyffe, D., Gershon, R., Spungen, A. M., Bombardier, C. H., Dyson-Hudson., T. A., Amtmann, D., Kalpakjian, C. Z., Choi, S. W., Jette, A. M., Forchheimer, M., & Cella, D. (2015). Overview of Spinal Cord Injury – Quality of Life (SCI QOL) measurement system. Journal of Spinal Cord Medicine, 38(3), 257–269. Convenient Checklist Keeps Wheelchairs at Top Performance University of Pittsburgh (Grant #90DP0025) Researchers at the University of Pittsburgh developed the Wheelchair Maintenance Assessment Tool (W-MAT), a wheelchair inspection checklist to assess the condition of a wheelchair and its parts. The checklist identifies problems related to component failure in power and manual wheelchairs and serves as a useful risk assessment for the prevention of wheelchair-related injuries. Research has shown that between 44 and 57 percent of people with SCI have required at least one wheelchair repair in a six-month period and between 22 and 30 percent of those who needed repairs also reported being stranded; missing appointments, school, or work; or being injured due to this needed repair. Additionally, lack of maintenance has been linked to a tenfold increase in likelihood of being injured. 
The W-MAT is designed for a broad audience, increasing its versatility. By improving knowledge of wheelchair maintenance, wheelchair users can more quickly address issues and decrease the number of repairs and accidents. The W-MAT was presented at the 2015 International Seating Symposium Workshop “Basic Wheelchair Maintenance Training for Manual and Power Wheelchair Users.” The article discussing the tool is currently in press: Toro, M. L., & Pearlman, J. (In press). Development of a manual wheelchair and power wheelchair maintenance program. Proceedings of the Rehabilitation Engineering and Assistive Technology Society of North America Conference, June 10–14, 2015, Denver, CO. Contact Cate Miller at NIDILRR if you have questions about the TBIMS or BMS. Contact Theresa SanAgustin at NIDILRR if you have questions about the SCIMS.
UH researchers identify butterfly species new to Hawaiian Islands By Web Staff Published: September 25, 2015, 12:00 pm Photos courtesy UH-Manoa A new butterfly species has recently been discovered in Hawaii, the first since 2008. UHM Professor Daniel Rubinoff and researcher William Haines of the Department of Plant and Environmental Protection Sciences, College of Tropical Agriculture and Human Resources, have conclusively identified a newcomer to the Hawaiian Islands called the Sleepy Orange butterfly (Abaeis nicippe). The last time a new butterfly was identified in Hawaii was in 2008, when the Lesser Grass Blue (Zizina otis) was found. The Sleepy Orange is widespread in the U.S. South; it occurs as far south as Brazil and may stray as far north as Canada when populations are high enough. It was first seen in December 2013 in Waialua on the North Shore of Oahu and then on other parts of the island. Within a year, it had become common on Maui and had also been found on Kauai, Molokai, Hawaii Island and Kahoolawe. The butterfly’s elevational range is broad as well, from sea level to 6,800 feet up the slopes of Haleakala. Elsewhere, the Sleepy Orange butterfly has distinct summer and winter forms, but in Hawaii only the summer coloration has been seen, even during the winter. It is golden yellow with dark brown markings, including speckles on the underside of the wings and a wide band around the edges of the wings on its upper side. It is about 2 inches across at its wings’ widest span. The winter coloration is reddish-brown. The larvae are green and slightly fuzzy looking. Contrary to its name, the Sleepy Orange is a very rapid and erratic flier, pausing only to take nectar from flowers or to sip water from mud puddles. The larvae of the Sleepy Orange feed on Senna plants, which do not include any food crops grown in the state but do include the popular shower tree.
However, Rubinoff and Haines do not believe that it poses a threat to this or other ornamental landscape plants in the Islands. “The butterfly is unlikely to build up numbers sufficient to threaten ornamental plants, and it has not been recorded feeding on any native Hawaiian plants at this time,” Rubinoff explains. “While Hawaii has again dodged a bullet with this probably harmless introduction, it does go to show that we need to contribute more resources towards quarantine and reduce our reliance on imports, since the butterfly was almost certainly brought in accidentally on imports from the mainland.”
As people live longer and reproduce less, natural selection keeps up [Photo: four generations of women and girls from a single family in Gambia. Credit: Current Biology, Courtiol et al.] In many places around the world, people are living longer and are having fewer children. But that's not all. A study of people living in rural Gambia, published in the Cell Press journal Current Biology on April 25, shows that this modern-day "demographic transition" may lead women to be taller and slimmer, too. "This is a reminder that declines in mortality rates do not necessarily mean that evolution stops, but that it changes," says Ian Rickard of Durham University in the United Kingdom. Rickard and Alexandre Courtiol of the Leibniz Institute for Zoo and Wildlife Research in Germany show that changes in mortality and fertility rates in Gambia, likely related to improvements in medical care since a clinic opened there in 1974, have changed the way that natural selection acts on body size. For their studies, Rickard, Courtiol, and their colleagues used data collected over a 55-year period beginning in 1956 by the UK Medical Research Council on thousands of women from two rural villages in the West Kiang district of Gambia. Over the time period in question, those communities experienced significant demographic shifts -- from high mortality and fertility rates to rapidly declining ones. The researchers also had thorough data on the height and weight of the women. Their analysis shows that the demographic transition influenced directional selection on women's height and body mass index (BMI). Selection initially favored short women with high BMI values but shifted over time to favor tall women with low BMI values.
The researchers say it's not entirely clear why selection has shifted from shorter and stouter women to taller and thinner ones. It's partly because selection began acting less on mortality and more on fertility over time. But other environmental changes were shown to play an important role, too. "Although we cannot tell directly, it may be due to health care improvements changing which women were more or less likely to reproduce," Courtiol says. The findings in Gambia may have relevance around the globe. "Our results are important because the majority of human populations have either recently undergone, or are currently undergoing, a demographic transition from high to low fertility and mortality rates," the researchers write. "Thus the temporal dynamics of the evolutionary processes revealed here may reflect the shifts in evolutionary pressures being experienced by human societies generally." And how we humans respond to these pressures might tell us something about how we'll continue to evolve in this ever-changing world we live in. The above story is based on materials provided by Cell Press. Note: Materials may be edited for content and length. Reference: Alexandre Courtiol, Ian J. Rickard, Virpi Lummaa, Andrew M. Prentice, Anthony J.C. Fulford, Stephen C. Stearns. "The Demographic Transition Influences Variance in Fitness and Selection on Height and BMI in Rural Gambia." Current Biology, 2013. DOI: 10.1016/j.cub.2013.04.006. Cell Press. "As people live longer and reproduce less, natural selection keeps up." ScienceDaily, 25 April 2013. www.sciencedaily.com/releases/2013/04/130425132614.htm (accessed September 2, 2014).
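The directional selection the study reports can be quantified in the standard Lande-Arnold way, by regressing relative fitness on the standardized trait. The sketch below assumes that approach (the paper's actual models are more elaborate) and uses purely synthetic illustration values:

```python
# Minimal sketch of a directional selection gradient (Lande-Arnold style):
# relative fitness (lifetime reproductive success / its mean) is regressed on
# the z-scored trait; the slope's sign gives the direction of selection.
# This is an assumed simplification of the paper's analysis; data are synthetic.

def selection_gradient(trait, fitness):
    """Slope of relative fitness on the z-scored trait (simple OLS)."""
    n = len(trait)
    mean_t = sum(trait) / n
    sd_t = (sum((t - mean_t) ** 2 for t in trait) / n) ** 0.5
    z = [(t - mean_t) / sd_t for t in trait]
    mean_w = sum(fitness) / n
    rel_w = [w / mean_w for w in fitness]
    # With mean(z) = 0 and var(z) = 1, the OLS slope reduces to E[z * rel_w].
    return sum(zi * wi for zi, wi in zip(z, rel_w)) / n

# Synthetic example: taller women with slightly higher reproductive success.
heights = [150, 155, 160, 165, 170, 175]
offspring = [3, 4, 4, 5, 6, 6]
beta = selection_gradient(heights, offspring)
print(beta > 0)  # a positive gradient means selection favors taller women
```

A shift like the one described in the article would appear as this slope changing sign across study cohorts.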
Katarina “Katitzi” Taikon And Her Immortal Tale Of The Swedish Roma As the years and decades go by, the works of Swedish Romani writer Katarina “Katitzi” Taikon still strike a chord with many across Europe. Taikon’s most famous work, the semi-autobiographical book series “Katitzi,” tells the story of a young Romani girl growing up in 1940s Sweden. Written primarily for children, the series, which was televised in 1979, was also a thought-provoking read for adults, as it challenges the prejudices of educated people. As a 9-year-old, the lead character “Katitzi” returns to her Romani family from an orphanage, where her innocence helps to give a stark portrayal of the prejudices faced by the Roma in Sweden at the time. This clip from the television series was taken from the first episode of Katitzi, where her father comes to pick her up from the orphanage. Promoting inter-cultural understanding, the books awoke older readers to the realities of social and political shortcomings in Sweden and beyond. The books have since been translated and published in various other languages as the impact of Taikon’s work spread further and further. Born in the Swedish town of Örebro in 1932, Katarina received no formal education as a child but emerged as a talented actress, starring in her first film, “Uppbrott,” as a teenager in 1948. Katarina was an avid believer in equality for the Roma and fought for the cause not only in the field of literature. Intrepidly, she would approach newspapers, government, parliament and political parties to try to make the Roma voice heard. She also gave lectures at universities on the Roma in Sweden. Katarina was one of few to speak up for the Roma in Sweden despite their long history in the country. Today, the number of Roma living in Sweden is estimated at around 40,000-50,000, comprising various groups. The largest of these groups is the Travellers, who are believed to have lived in the country since the 14th century.
Other groups include the Kale, originally from Finland, and around 5,000 Romani refugees from the Yugoslav Wars of the 1990s, mainly from Bosnia-Herzegovina. In 2006, the Swedish government introduced a special commission dealing with issues facing the Romani minority, including experts from different Romani groups. The delegation was tasked with submitting proposals to improve living conditions of the Roma, with particular attention paid to children. However, four years later the Swedish authorities were heavily criticized after 50 Roma EU citizens were abruptly deported. Commissioner for Human Rights at the Council of Europe, Thomas Hammarberg, who is a Swede, stated that his fellow countrymen were complicit in the ongoing discrimination against the Roma. He said: “They are identified as a danger to society by politicians who seek to win political points on demands of a tough line against this already vulnerable group. They are subjected to arrest and collective deportations.” It seemed that not a lot had changed since the early 1980s, when Katarina’s sister Rosa explained in an article in the UNESCO Courier that it was difficult to eradicate prejudices from adult minds. Instead Katarina “wrote the Katitzi stories to help young people to understand a minority more fully.” Rosa spoke powerfully about her experiences as a Roma in the twentieth century and was especially critical of Roma access to education. She said: “I never went to school until I was 33 years old even though I am a Swedish citizen, born in Sweden.” In 1982, Katarina suffered a heart attack and fell into a coma from which she would never wake up. She died in 1995 at the age of 63 in Sweden, but her insights into a Romani upbringing in Sweden still resonate with Roma of all ages across Europe.
5 thoughts on “Katarina “Katitzi” Taikon And Her Immortal Tale Of The Swedish Roma” Sonia Meyer on March 8, 2012 at 3:15 pm said: I tried to look up Katerina Katitzi Taikon on Amazon and unfortunately could not find her there. I am a firm believer in overturning prejudice through art. I myself have written a novel for that very purpose. Many of the people who have read it come to me and want to know more about who the Roma really are, the people behind the killer prejudice. Where can I get a copy of Katitzi? The more people who read it, especially children where prejudice begins, the more effective it will be. Sonia Meyer http://www.soniameyer.com/ Romedia Foundation on March 8, 2012 at 3:32 pm said: I found this but it’s in French…. http://www.reboundbooks.com/products/katitzi-dans-le-nid-de-viperes They do seem to be like buried treasure, but we’ll keep looking and let you know if we have any more leads. Otherwise, yes. Art (and sport) can break down barriers between communities and will always be an important weapon against racism. Niko on March 10, 2012 at 6:43 am said: In the file Romani_sanak’ja_ikIo_Katici.pdf on http://edisk.ukr.net/?do=dir#cdir=0 you will find a bibliography of Katarina Taikon. Niko Rergo
Jeanine Skorinko, Social Science & Policy Studies Jeanine Skorinko hopes her research into how humans think, behave, and interact with others will make the world a better, more equitable place. Why did you pick the branch of science you are in? It fit. I was always curious about people—thoughts, decisions, interactions, cultures. It led me to double major in psychology and anthropology in undergrad, and I discovered through my experiences that I thought more like a psychologist. What are the biggest misperceptions people have about scientists? For psychological science, it is that we’re reading everyone’s minds, analyzing everyone we meet, and that we have obsessions with couches. Also, the fact that people think psychological science isn’t a science or it is somehow “easy.” I’ve never understood how understanding the mind and why we do what we do is considered “easy.” What’s something you do that reminds you that you are an #ActualLivingScientist? I study living people! I observe the world around me to develop research questions to better understand how we think, behave, and interact with others. You can’t get more “living” than that. I am a scientist and I… …study people! I conduct experiments and other types of methodologies (e.g., interviews, focus groups) to answer research questions. I collaborate with others in psychology and other disciplines to answer research questions, as well. How do you hope your scientific contributions will impact the world? Ultimately, my research goal is to make the world a better, more equitable place. I do this by examining how subtle factors (like stereotypes) influence how we perceive ourselves, how we perceive others, and how we treat other people. In the end, I hope this work enables us to understand how we can promote equality, diversity, and cultural understanding. How has WPI helped you prepare students to become an #ActualLivingScientist?
I get my students active and conducting their own psychological science research projects. They get to conduct observational studies, survey others on topics related to the course, conduct mini experiments, and see connections between what we are learning in class (theory) and the real world (practice). We also learn to think and talk about tough topics related to gender and sexual identity, racial issues, cultural similarities and differences, sex, abuse, and more. Additionally, every term I have five to ten students helping me in the research lab to conduct our experiments. These students aren’t always psychological science majors and minors; many are just interested in psychology and science and getting a different experience. View more WPI #ActualLivingScientists Social Science & Policy Studies
UN designates Inca road network in Peru and 5 other countries as World Heritage site LIMA, Peru – The United Nations has designated parts of the road network used by the Inca empire as a World Heritage site. The U.N. Educational, Scientific and Cultural Organization granted the designation Saturday during a meeting in Doha, Qatar. Peru's Culture Ministry says the network known as Qhapac Nan extends nearly 60,000 kilometers (37,000 miles). It spans the length and breadth of the Andes mountain range in six countries. While parts of it pre-dated the Inca empire, the network reached its maximum expansion in the 15th century as the culture, celebrated for its architectural prowess, used it to exert dominance over the Andes. Peru, Argentina, Chile, Bolivia, Ecuador and Colombia combined to nominate the road network, only portions of which were granted Heritage Site status. Much of it is in disrepair, covered by vegetation.
Adjusted light and dark cycles can optimize photosynthetic efficiency in algae growing in photobioreactors. Eleonora Sforza, Diana Simionato, Giorgio Mario Giacometti, Alberto Bertucco, Tomas Morosinotto. Biofuels from algae are highly interesting as renewable energy sources to replace, at least partially, fossil fuels, but great research efforts are still needed to optimize growth parameters to develop competitive large-scale cultivation systems. One factor with a seminal influence on productivity is light availability. Light energy fully supports algal growth, but it leads to oxidative stress if illumination is in excess. In this work, the influence of light intensity on the growth and lipid productivity of Nannochloropsis salina was investigated in a flat-bed photobioreactor designed to minimize cell self-shading. The influence of various light intensities was studied with both continuous illumination and alternation of light and dark cycles at various frequencies, which mimic illumination variations in a photobioreactor due to mixing. Results show that Nannochloropsis can efficiently exploit even very intense light, provided that dark cycles occur to allow for re-oxidation of the electron transporters of the photosynthetic apparatus. If alternation of light and dark is not optimal, algae undergo radiation damage and photosynthetic productivity is greatly reduced. Our results demonstrate that, in a photobioreactor for the cultivation of algae, optimizing mixing is essential in order to ensure that the algae exploit light energy efficiently. Optimize Flue Gas Settings to Promote Microalgae Growth in Photobioreactors via Computer Simulations Authors: Lian He, Amelia B Chen, Yi Yu, Leah Kucera, Yinjie Tang. JoVE Environment Flue gas from power plants can promote algal cultivation and reduce greenhouse gas emissions1. Microalgae not only capture solar energy more efficiently than plants3, but also synthesize advanced biofuels2-4.
Generally, atmospheric CO2 is not a sufficient source for supporting maximal algal growth5. On the other hand, the high concentrations of CO2 in industrial exhaust gases have adverse effects on algal physiology. Consequently, both cultivation conditions (such as nutrients and light) and the control of the flue gas flow into the photo-bioreactors are important to develop an efficient “flue gas to algae” system. Researchers have proposed different photobioreactor configurations4,6 and cultivation strategies7,8 with flue gas. Here, we present a protocol that demonstrates how to use models to predict the microalgal growth in response to flue gas settings. We perform both experimental illustration and model simulations to determine the favorable conditions for algal growth with flue gas. We develop a Monod-based model coupled with mass transfer and light intensity equations to simulate the microalgal growth in a homogenous photo-bioreactor. The model simulation compares algal growth and flue gas consumptions under different flue-gas settings. The model illustrates: 1) how algal growth is influenced by different volumetric mass transfer coefficients of CO2; 2) how we can find optimal CO2 concentration for algal growth via the dynamic optimization approach (DOA); 3) how we can design a rectangular on-off flue gas pulse to promote algal biomass growth and to reduce the usage of flue gas. On the experimental side, we present a protocol for growing Chlorella under the flue gas (generated by natural gas combustion). The experimental results qualitatively validate the model predictions that high-frequency flue gas pulses can significantly improve algal cultivation. Analysis of Fatty Acid Content and Composition in Microalgae Authors: Guido Breuer, Wendy A. C. Evers, Jeroen H. de Vree, Dorinde M. M. Kleinegris, Dirk E. Martens, René H. Wijffels, Packo P. Lamers.
Institutions: Wageningen University and Research Center, Wageningen University and Research Center, Wageningen University and Research Center. A method to determine the content and composition of total fatty acids present in microalgae is described. Fatty acids are a major constituent of microalgal biomass. These fatty acids can be present in different acyl-lipid classes. Especially the fatty acids present in triacylglycerol (TAG) are of commercial interest, because they can be used for production of transportation fuels, bulk chemicals, nutraceuticals (ω-3 fatty acids), and food commodities. To develop commercial applications, reliable analytical methods for quantification of fatty acid content and composition are needed. Microalgae are single cells surrounded by a rigid cell wall. A fatty acid analysis method should provide sufficient cell disruption to liberate all acyl lipids and the extraction procedure used should be able to extract all acyl lipid classes. With the method presented here all fatty acids present in microalgae can be accurately and reproducibly identified and quantified using small amounts of sample (5 mg) independent of their chain length, degree of unsaturation, or the lipid class they are part of. This method does not provide information about the relative abundance of different lipid classes, but can be extended to separate lipid classes from each other. The method is based on a sequence of mechanical cell disruption, solvent based lipid extraction, transesterification of fatty acids to fatty acid methyl esters (FAMEs), and quantification and identification of FAMEs using gas chromatography (GC-FID). A TAG internal standard (tripentadecanoin) is added prior to the analytical procedure to correct for losses during extraction and incomplete transesterification. 
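The quantification step can be sketched as a calculation: each FAME peak area from the GC-FID trace is converted to mass via the tripentadecanoin internal standard. The function name, peak areas, and response factors below are hypothetical illustration values, not part of the published protocol:

```python
# Sketch of internal-standard quantification for GC-FID FAME analysis,
# as the abstract describes: each fatty acid's mass is computed from its
# peak area relative to the C15:0 internal-standard peak. All numeric
# values here are hypothetical illustrations.

def fame_content_mg(peak_areas, area_is, mass_is_mg, rrf=None):
    """Convert FAME peak areas to mass (mg) using the internal standard.

    peak_areas : dict of fatty acid name -> GC-FID peak area
    area_is    : peak area of the C15:0 internal-standard FAME
    mass_is_mg : mass of internal standard added to the sample (mg)
    rrf        : optional dict of relative response factors (default 1.0)
    """
    rrf = rrf or {}
    return {fa: (area / area_is) * mass_is_mg * rrf.get(fa, 1.0)
            for fa, area in peak_areas.items()}

areas = {"C16:0": 1200.0, "C18:1": 800.0, "C20:5 (EPA)": 400.0}
masses = fame_content_mg(areas, area_is=600.0, mass_is_mg=0.10)

total = sum(masses.values())
sample_mg = 5.0  # the method works with ~5 mg of biomass
content_pct = 100.0 * total / sample_mg
print(masses["C16:0"])       # 0.2 (mg)
print(round(content_pct, 1))  # 8.0 (% of dry weight)
```

Because both the analyte and the internal standard pass through extraction and transesterification together, this ratio-based calculation automatically corrects for losses, as the abstract notes.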
Environmental Sciences, Issue 80, chemical analysis techniques, Microalgae, fatty acid, triacylglycerol, lipid, gas chromatography, cell disruption A Simple and Rapid Protocol for Measuring Neutral Lipids in Algal Cells Using Fluorescence Authors: Zachary J. Storms, Elliot Cameron, Hector de la Hoz Siegler, William C. McCaffrey. Institutions: University of Alberta, University of Calgary. Algae are considered excellent candidates for renewable fuel sources due to their natural lipid storage capabilities. Robust monitoring of algal fermentation processes and screening for new oil-rich strains requires a fast and reliable protocol for determination of intracellular lipid content. Current practices rely largely on gravimetric methods to determine oil content, techniques developed decades ago that are time consuming and require large sample volumes. In this paper, Nile Red, a fluorescent dye that has been used to identify the presence of lipid bodies in numerous types of organisms, is incorporated into a simple, fast, and reliable protocol for measuring the neutral lipid content of Auxenochlorella protothecoides, a green alga. The method uses ethanol, a relatively mild solvent, to permeabilize the cell membrane before staining and a 96 well micro-plate to increase sample capacity during fluorescence intensity measurements. It has been designed with the specific application of monitoring bioprocess performance. Previously dried samples or live samples from a growing culture can be used in the assay. Chemistry, Issue 87, engineering (general), microbiology, bioengineering (general), Eukaryota Algae, Nile Red, Fluorescence, Oil Content, Oil Extraction, Oil Quantification, Neutral Lipids, Optical Microscope, biomass In Vitro Reconstitution of Light-harvesting Complexes of Plants and Green Algae Authors: Alberto Natali, Laura M. Roy, Roberta Croce. Institutions: VU University Amsterdam. 
In plants and green algae, light is captured by the light-harvesting complexes (LHCs), a family of integral membrane proteins that coordinate chlorophylls and carotenoids. In vivo, these proteins are folded with pigments to form complexes which are inserted in the thylakoid membrane of the chloroplast. The high similarity in the chemical and physical properties of the members of the family, together with the fact that they can easily lose pigments during isolation, makes their purification in a native state challenging. An alternative approach to obtain homogeneous preparations of LHCs was developed by Plumley and Schmidt in 19871, who showed that it was possible to reconstitute these complexes in vitro starting from purified pigments and unfolded apoproteins, resulting in complexes with properties very similar to those of native complexes. This opened the way to the use of bacterial expressed recombinant proteins for in vitro reconstitution. The reconstitution method is powerful for various reasons: (1) pure preparations of individual complexes can be obtained, (2) pigment composition can be controlled to assess their contribution to structure and function, (3) recombinant proteins can be mutated to study the functional role of the individual residues (e.g., pigment binding sites) or protein domain (e.g., protein-protein interaction, folding). This method has been optimized in several laboratories and applied to most of the light-harvesting complexes. The protocol described here details the method of reconstituting light-harvesting complexes in vitro currently used in our laboratory, and examples describing applications of the method are provided. Biochemistry, Issue 92, Reconstitution, Photosynthesis, Chlorophyll, Carotenoids, Light Harvesting Protein, Chlamydomonas reinhardtii, Arabidopsis thaliana Determination of Mitochondrial Membrane Potential and Reactive Oxygen Species in Live Rat Cortical Neurons Authors: Dinesh C. Joshi, Joanna C. Bakowska.
Institutions: Loyola University Chicago. Mitochondrial membrane potential (ΔΨm) is critical for maintaining the physiological function of the respiratory chain to generate ATP. A significant loss of ΔΨm renders cells depleted of energy with subsequent death. Reactive oxygen species (ROS) are important signaling molecules, but their accumulation in pathological conditions leads to oxidative stress. The two major sources of ROS in cells are environmental toxins and the process of oxidative phosphorylation. Mitochondrial dysfunction and oxidative stress have been implicated in the pathophysiology of many diseases; therefore, the ability to determine ΔΨm and ROS can provide important clues about the physiological status of the cell and the function of the mitochondria. Several fluorescent probes (Rhodamine 123, TMRM, TMRE, JC-1) can be used to determine Δψm in a variety of cell types, and many fluorescence indicators (Dihydroethidium, Dihydrorhodamine 123, H2DCF-DA) can be used to determine ROS. Nearly all of the available fluorescence probes used to assess ΔΨm or ROS are single-wavelength indicators, which increase or decrease their fluorescence intensity proportional to a stimulus that increases or decreases the levels of ΔΨm or ROS. Thus, it is imperative to measure the fluorescence intensity of these probes at the baseline level and after the application of a specific stimulus. This allows one to determine the percentage of change in fluorescence intensity between the baseline level and a stimulus. This change in fluorescence intensity reflects the change in relative levels of ΔΨm or ROS. In this video, we demonstrate how to apply the fluorescence indicator, TMRM, in rat cortical neurons to determine the percentage change in TMRM fluorescence intensity between the baseline level and after applying FCCP, a mitochondrial uncoupler. The lower levels of TMRM fluorescence resulting from FCCP treatment reflect the depolarization of mitochondrial membrane potential. 
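The baseline-versus-stimulus comparison for single-wavelength probes such as TMRM reduces to a percentage-change calculation. A minimal sketch, with hypothetical intensity values:

```python
# Sketch of the readout for single-wavelength indicators (TMRM, H2DCF-DA):
# the percentage change in fluorescence intensity between baseline and a
# stimulus reflects the change in relative levels of membrane potential or
# ROS. Intensity values below are hypothetical illustrations.

def percent_change(baseline, stimulus):
    """Percent change in fluorescence intensity relative to baseline."""
    return 100.0 * (stimulus - baseline) / baseline

# Mean TMRM intensity before and after the uncoupler FCCP (arbitrary units).
baseline_tmrm = 850.0
after_fccp = 340.0

change = percent_change(baseline_tmrm, after_fccp)
print(round(change, 1))  # -60.0 -> loss of TMRM signal, i.e. depolarization
```

The same calculation applies to the ROS probe, where the stimulus (e.g., H2O2) increases rather than decreases the signal.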
We also show how to apply the fluorescence probe H2DCF-DA to assess the level of ROS in cortical neurons, first at baseline and then after application of H2O2. This protocol (with minor modifications) can also be used to determine changes in ∆Ψm and ROS in different cell types and in neurons isolated from other brain regions. Neuroscience, Issue 51, Mitochondrial membrane potential, reactive oxygen species, neuroscience, cortical neurons Establishment of Microbial Eukaryotic Enrichment Cultures from a Chemically Stratified Antarctic Lake and Assessment of Carbon Fixation Potential Authors: Jenna M. Dolhi, Nicholas Ketchum, Rachael M. Morgan-Kiss. Institutions: Miami University . Lake Bonney is one of numerous permanently ice-covered lakes located in the McMurdo Dry Valleys, Antarctica. The perennial ice cover maintains a chemically stratified water column and unlike other inland bodies of water, largely prevents external input of carbon and nutrients from streams. Biota are exposed to numerous environmental stresses, including year-round severe nutrient deficiency, low temperatures, extreme shade, hypersalinity, and 24-hour darkness during the winter 1. These extreme environmental conditions limit the biota in Lake Bonney almost exclusively to microorganisms 2. Single-celled microbial eukaryotes (called "protists") are important players in global biogeochemical cycling 3 and play important ecological roles in the cycling of carbon in the dry valley lakes, occupying both primary and tertiary roles in the aquatic food web. In the dry valley aquatic food web, protists that fix inorganic carbon (autotrophy) are the major producers of organic carbon for organotrophic organisms 4, 2. Phagotrophic or heterotrophic protists capable of ingesting bacteria and smaller protists act as the top predators in the food web 5. Last, an unknown proportion of the protist population is capable of combined mixotrophic metabolism 6, 7.
Mixotrophy in protists involves the ability to combine photosynthetic capability with phagotrophic ingestion of prey microorganisms. This form of mixotrophy differs from mixotrophic metabolism in bacterial species, which generally involves uptake of dissolved carbon molecules. There are currently very few protist isolates from permanently ice-capped polar lakes, and studies of protist diversity and ecology in this extreme environment have been limited 8, 4, 9, 10, 5. A better understanding of protist metabolic versatility in the simple dry valley lake food web will aid in the development of models for the role of protists in the global carbon cycle. We employed an enrichment culture approach to isolate potentially phototrophic and mixotrophic protists from Lake Bonney. Sampling depths in the water column were chosen based on the location of primary production maxima and protist phylogenetic diversity 4, 11, as well as variability in major abiotic factors affecting protist trophic modes: shallow sampling depths are limited for major nutrients, while deeper sampling depths are limited by light availability. In addition, lake water samples were supplemented with multiple types of growth media to promote the growth of a variety of phototrophic organisms. RubisCO catalyzes the rate limiting step in the Calvin Benson Bassham (CBB) cycle, the major pathway by which autotrophic organisms fix inorganic carbon and provide organic carbon for higher trophic levels in aquatic and terrestrial food webs 12. In this study, we applied a radioisotope assay modified for filtered samples 13 to monitor maximum carboxylase activity as a proxy for carbon fixation potential and metabolic versatility in the Lake Bonney enrichment cultures.
Microbiology, Issue 62, Antarctic lake, McMurdo Dry Valleys, Enrichment cultivation, Microbial eukaryotes, RubisCO A New Approach for the Comparative Analysis of Multiprotein Complexes Based on 15N Metabolic Labeling and Quantitative Mass Spectrometry Authors: Kerstin Trompelt, Janina Steinbeck, Mia Terashima, Michael Hippler. Institutions: University of Münster, Carnegie Institution for Science. The introduced protocol provides a tool for the analysis of multiprotein complexes in the thylakoid membrane, by revealing insights into complex composition under different conditions. In this protocol the approach is demonstrated by comparing the composition of the protein complex responsible for cyclic electron flow (CEF) in Chlamydomonas reinhardtii, isolated from genetically different strains. The procedure comprises the isolation of thylakoid membranes, followed by their separation into multiprotein complexes by sucrose density gradient centrifugation, SDS-PAGE, immunodetection and comparative, quantitative mass spectrometry (MS) based on differential metabolic labeling (14N/15N) of the analyzed strains. Detergent solubilized thylakoid membranes are loaded on sucrose density gradients at equal chlorophyll concentration. After ultracentrifugation, the gradients are separated into fractions, which are analyzed by mass spectrometry based on equal volume. This approach allows the investigation of the composition of the gradient fractions and, moreover, the analysis of the migration behavior of different proteins, especially focusing on ANR1, CAS, and PGRL1. Furthermore, this method is demonstrated by confirming the results with immunoblotting and additionally by supporting the findings from previous studies (the identification and PSI-dependent migration of proteins that were previously described to be part of the CEF-supercomplex such as PGRL1, FNR, and cyt f).
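The comparative 14N/15N quantification step can be sketched as follows, assuming the common convention of taking a per-protein median of light/heavy peptide intensity ratios (the published workflow is more involved); all intensities and the protein assignment are hypothetical illustrations:

```python
# Sketch of the comparative step in 14N/15N metabolic labeling: after the two
# differentially labeled samples are mixed and analyzed together, the ratio of
# light (14N) to heavy (15N) peptide signals reports a protein's relative
# abundance between strains. A per-protein median over peptides is a common
# convention, assumed here; intensity values are hypothetical.

def protein_ratio(peptide_pairs):
    """Median 14N/15N intensity ratio over a protein's peptide pairs."""
    ratios = sorted(light / heavy for light, heavy in peptide_pairs)
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else 0.5 * (ratios[mid - 1] + ratios[mid])

# (14N intensity, 15N intensity) pairs for peptides of a hypothetical protein.
peptides = [(2.0e6, 1.0e6), (1.8e6, 0.9e6), (2.4e6, 1.0e6)]
print(protein_ratio(peptides))  # 2.0 -> twice as abundant in the 14N strain
```

Comparing such ratios across gradient fractions is what reveals differences in complex composition and migration behavior between strains.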
Notably, this approach is applicable to a broad range of questions; the protocol can be adapted, for example, for comparative analyses of multiprotein complexes isolated under distinct environmental conditions. Microbiology, Issue 85, Sucrose density gradients, Chlamydomonas, multiprotein complexes, 15N metabolic labeling, thylakoids Patterned Photostimulation with Digital Micromirror Devices to Investigate Dendritic Integration Across Branch Points Authors: Conrad W. Liang, Michael Mohammadi, M. Daniel Santos, Cha-Min Tang. Institutions: University of Maryland School of Medicine. Light is a versatile and precise means to control neuronal excitability. The recent introduction of light-sensitive effectors such as channelrhodopsin and caged neurotransmitters has led to interest in developing better means to control patterns of light in space and time that are useful for experimental neuroscience. One conventional strategy, employed in confocal and 2-photon microscopy, is to focus light to a diffraction-limited spot and then scan that single spot sequentially over the region of interest. This approach becomes problematic if large areas have to be stimulated within a brief time window, a problem more applicable to photostimulation than to imaging. An alternate strategy is to project the complete spatial pattern on the target with the aid of a digital micromirror device (DMD). The DMD approach is appealing because the hardware components are relatively inexpensive and supported by commercial interests. Because such a system is not available for upright microscopes, we will discuss the critical issues in the construction and operations of such a DMD system. Even though we will be primarily describing the construction of the system for UV photolysis, the modifications for building the much simpler visible light system for optogenetic experiments will also be provided.
The UV photolysis system was used to carry out experiments addressing a fundamental question in neuroscience: how are spatially distributed inputs integrated across distal dendritic branch points? The results suggest that integration can be non-linear across branch points and that the supralinearity is largely mediated by NMDA receptors. Bioengineering, Issue 49, DMD, photolysis, dendrite, photostimulation, DLP, optogenetics Long-term Lethal Toxicity Test with the Crustacean Artemia franciscana Authors: Loredana Manfra, Federica Savorelli, Marco Pisapia, Erika Magaletti, Anna Maria Cicero. Institutions: Institute for Environmental Protection and Research, Regional Agency for Environmental Protection in Emilia-Romagna. Our research activities target the use of biological methods for the evaluation of environmental quality, with particular reference to saltwater/brackish water and sediment. The choice of biological indicators must be based on reliable scientific knowledge and, where possible, on the availability of standardized procedures. In this article, we present a standardized protocol that uses the marine crustacean Artemia to evaluate the toxicity of chemicals and/or marine environmental matrices. Scientists propose the brine shrimp (Artemia) as a suitable candidate for the development of a standard bioassay for worldwide use. A number of papers have been published on the toxic effects of various chemicals and toxicants on brine shrimp. The major advantage of this crustacean for toxicity studies is the ready availability of dry cysts; these can be used immediately in testing, and no laborious cultivation is required1,2. Cyst-based toxicity assays are cheap, continuously available, simple, and reliable, and are thus an important answer to routine needs of toxicity screening, for industrial monitoring requirements, or for regulatory purposes3. The proposed method uses mortality as its endpoint.
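The mortality endpoint described above reduces to a simple percentage calculation per test concentration. As a minimal illustrative sketch (not part of the standardized protocol; the counts and SDS concentrations below are made up for demonstration):

```python
# Illustrative sketch of the mortality endpoint: larvae showing no internal
# or external movement during several seconds of observation are scored dead,
# and percent mortality is computed from the initial and surviving counts.

def percent_mortality(initial: int, survivors: int) -> float:
    """Percentage of dead larvae out of those initially exposed."""
    if initial <= 0:
        raise ValueError("initial count must be positive")
    dead = initial - survivors
    return 100.0 * dead / initial

# Hypothetical counts for three SDS concentrations (mg/L), illustration only.
counts = {10.0: (30, 27), 25.0: (30, 18), 50.0: (30, 3)}
for conc, (initial, survivors) in sorted(counts.items()):
    print(f"{conc} mg/L SDS: {percent_mortality(initial, survivors):.0f}% mortality")
```

The real protocol additionally specifies exposure duration, replicates, and validity criteria, which this sketch omits.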
The number of survivors was counted and the percentage of deaths was calculated. Larvae were considered dead if they did not exhibit any internal or external movement during several seconds of observation4. This procedure was standardized by testing a reference substance (sodium dodecyl sulfate); some results are reported in this work. This article accompanies a video that describes the performance of procedural toxicity testing, showing all the steps related to the protocol. Chemistry, Issue 62, Artemia franciscana, bioassays, chemical substances, crustaceans, marine environment Electron Spin Resonance Micro-imaging of Live Species for Oxygen Mapping Authors: Revital Halevy, Lazar Shtirberg, Michael Shklyar, Aharon Blank. Institutions: The Technion, Israel Institute of Technology. This protocol describes an electron spin resonance (ESR) micro-imaging method for three-dimensional mapping of oxygen levels in the immediate environment of live cells with micron-scale resolution1. Oxygen is one of the most important molecules in the cycle of life. It serves as the terminal electron acceptor of oxidative phosphorylation in the mitochondria and is used in the production of reactive oxygen species. Measurements of oxygen are important for the study of mitochondrial and metabolic functions, signaling pathways, effects of various stimuli, membrane permeability, and disease differentiation. Oxygen consumption is therefore an informative marker of cellular metabolism, which is broadly applicable to various biological systems, from mitochondria to cells to whole organisms. Due to its importance, many methods have been developed for the measurement of oxygen in live systems. Current attempts to provide high-resolution oxygen imaging are based mainly on optical fluorescence and phosphorescence methods that fail to provide satisfactory results, as they employ probes with high photo-toxicity and low oxygen sensitivity.
ESR, which measures the signal from exogenous paramagnetic probes in the sample, is known to provide very accurate measurements of oxygen concentration. In a typical case, ESR measurements map the probe's lineshape broadening and/or relaxation-time shortening, which are linked directly to the local oxygen concentration. (Oxygen is paramagnetic; therefore, when it collides with the exogenous paramagnetic probe, it shortens the probe's relaxation times.) Traditionally, these types of experiments are carried out with low-resolution, millimeter-scale ESR for small-animal imaging. Here we show how ESR imaging can also be carried out at the micron scale for the examination of small live samples. ESR micro-imaging is a relatively new methodology that enables the acquisition of spatially resolved ESR signals with a resolution approaching 1 micron at room temperature2. The main aim of this protocol paper is to show how this new method, along with newly developed oxygen-sensitive probes, can be applied to the mapping of oxygen levels in small live samples. A spatial resolution of ~30 x 30 x 100 μm is demonstrated, with near-micromolar oxygen concentration sensitivity and sub-femtomole absolute oxygen sensitivity per voxel. The use of ESR micro-imaging for oxygen mapping near cells complements the currently available techniques based on micro-electrodes or fluorescence/phosphorescence. Furthermore, with the proper paramagnetic probe, it will also be readily applicable to intracellular oxygen micro-imaging, a capability that other methods find very difficult to achieve. Cellular Biology, Issue 42, ESR, EPR, Oxygen, Imaging, microscopy, live cells
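The stated figures are mutually consistent: a ~30 x 30 x 100 μm voxel at near-micromolar oxygen concentration contains well under a femtomole of oxygen. A back-of-the-envelope check (not from the protocol itself; it simply combines the numbers quoted above):

```python
# Absolute oxygen content of one ~30 x 30 x 100 um voxel at 1 uM O2,
# to check the "sub-femtomole per voxel" sensitivity claim.

voxel_m3 = (30e-6) * (30e-6) * (100e-6)   # voxel volume in cubic meters
voxel_liters = voxel_m3 * 1000.0          # 1 m^3 = 1000 L
conc_molar = 1e-6                         # near-micromolar O2 concentration
moles_o2 = voxel_liters * conc_molar      # moles of O2 in one voxel
femtomoles = moles_o2 / 1e-15
print(f"O2 per voxel at 1 uM: {femtomoles:.2f} fmol")  # well below 1 fmol
```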
Seed Mortality in Daucus Carota Populations: Latitudinal Effects UNCG Author/Contributor (non-UNCG co-authors, if there are any, appear on document): Elizabeth P. Lacey, Professor (Creator), The University of North Carolina at Greensboro (UNCG). Web Site: http://library.uncg.edu/ Abstract: Daucus carota, a common herbaceous weed, grows over a wide latitudinal range in eastern North America. Viability and germination tests of mature seeds collected from 36° to 45°N were conducted to measure predispersal seed mortality. Viability and germination declined as the latitude of the seed source decreased. Only 30-50% of the seeds from southern populations germinated, owing to high embryo inviability and absence of embryos. Sixty to ninety percent of the seeds from northern populations germinated. Reciprocal planting of seeds in outdoor experimental plots at three latitudes, together with testing of seeds over two generations, showed that the environment in which seeds mature, rather than environmental preconditioning over generations or genetically based differences among populations, explains this variation in germination ability. Within-latitude germination declined in experimental plots as the population age of the seed source increased. The data indicate that predispersal seed mortality can influence local population persistence and that seed mortality is an increasingly important factor in population regulation at the southern limit of the species' range. Created on 3/28/2011. American Journal of Botany 71(9): 1175-1182. 1984. Seeds, Temperature, Environmental effects, Survival, Geographic influences, Weeds, Daucus carota, Wild carrots
The latest fashion: Graphene edges can be tailor-made January 23, 2015 by Mike Williams Graphene nanoribbons can be enticed to form favorable "reconstructed" edges by pulling them apart with the right force and at the right temperature, according to researchers at Rice University. The illustration shows the crack at the edge that begins the formation of five- and seven-atom pairs under the right conditions. Credit: ZiAng Zhang/Rice University Theoretical physicists at Rice University are living on the edge as they study the astounding properties of graphene. In a new study, they figure out how researchers can fracture graphene nanoribbons to get the edges they need for applications. New research by Rice physicist Boris Yakobson and his colleagues shows it should be possible to control the edge properties of graphene nanoribbons by controlling the conditions under which the nanoribbons are pulled apart. The way atoms line up along the edge of a ribbon of graphene—the atom-thick form of carbon—controls whether it's metallic or semiconducting. Current passes through metallic graphene unhindered, but semiconductors allow a measure of control over those electrons. Since modern electronics are all about control, semiconducting graphene (and semiconducting two-dimensional materials in general) is of great interest to scientists and industry working to shrink electronics for applications. In the work, which appeared this month in the Royal Society of Chemistry journal Nanoscale, the Rice team used sophisticated computer modeling to show it's possible to rip nanoribbons and get graphene with either pristine zigzag edges or what are called reconstructed zigzags. Perfect graphene looks like chicken wire, with each six-atom unit forming a hexagon. The edges of pristine zigzags look like this: ////////. Turning the hexagons 30 degrees makes the edges "armchairs," with flat tops and bottoms held together by the diagonals.
The electronic properties of the edges are known to vary from metallic to semiconducting, depending on the ribbon's width. "Reconstructed" refers to the process by which atoms in graphene are enticed to shift around to form connected rings of five and seven atoms. The Rice calculations determined that reconstructed zigzags are the most stable, a desirable quality for manufacturers. All that is great, but one still has to know how to make them. "Making graphene-based nano devices by mechanical fracture sounds attractive, but it wouldn't make sense until we know how to get the right types of edges—and now we do," said ZiAng Zhang, a Rice graduate student and the paper's lead author. Yakobson, Zhang and Rice postdoctoral researcher Alex Kutana used density functional theory, a computational method to analyze the energetic input of every atom in a model system, to learn how thermodynamic and mechanical forces would accomplish the goal. Their study revealed that heating graphene to 1,000 kelvins and applying a low but steady force along one axis will crack it in such a way that fully reconstructed 5-7 rings will form and define the new edges. Conversely, fracturing graphene with low heat and high force is more likely to lead to pristine zigzags. Journal reference: Nanoscale. Read the abstract at pubs.rsc.org/en/content/articlelanding/2015/nr/c4nr06332e#!divAbstract
Significance of HDAC6 regulation via estrogen signaling for cell motility and prognosis in estrogen receptor-positive breast cancer. Publication Type: Journal Article. Authors: Saji, S., Kawakami M., Hayashi S., Yoshida N., Hirose M., Horiguchi S., Itoh A., Funata N., Schreiber SL, Yoshida M., and Toi M. Abstract: Histone deacetylase (HDAC) 6 is a subtype of the HDAC family; it deacetylates alpha-tubulin and increases cell motility. Here, we investigate the impact of an alteration of HDAC6 expression in estrogen receptor alpha (ER)-positive breast cancer MCF-7 cells, as we identified HDAC6 as a novel estrogen-regulated gene. MCF-7 cells treated with estradiol showed increased expression of HDAC6 mRNA and protein and a four-fold increase in cell motility in a migration assay. Cell motility was increased to the same degree by stably transfecting the HDAC6 expression vector into MCF-7 cells. In both cases, the cells changed in appearance from their original round shape to an axon-extended shape, like a neuronal cell. This HDAC6 accumulation caused the deacetylation of alpha-tubulin. Either the selective estrogen receptor modulator tamoxifen (TAM) or the pure antiestrogen ICI 182,780 prevented estradiol-induced HDAC6 accumulation and deacetylation of alpha-tubulin, leading to reduced cell motility. Tubacin, an inhibitory molecule that binds to the tubulin deacetylation domain of HDAC6, also prevented estradiol-stimulated cell migration. Finally, we evaluated HDAC6 protein expression in 139 consecutively archived human breast cancer tissues by immunohistochemical staining. The prognostic analyses for these patients revealed no significant differences based on HDAC6 expression.
However, subset analysis of ER-positive patients who received adjuvant treatment with TAM (n = 67) showed a statistically significant difference in relapse-free survival and overall survival in favor of the HDAC6-positive group (P < 0.02 and P < 0.05, respectively). HDAC6 expression was an independent prognostic indicator by multivariate analysis (odds ratio = 2.82, P = 0.047). These results indicate the biological significance of HDAC6 regulation via estrogen signaling. Year of Publication: 2005. Journal: Oncogene. Volume: 24. Issue: 28. Pages: 4531-9. Date Published (YYYY/MM/DD): 2005/06/30. ISSN Number: 0950-9232. DOI: 10.1038/sj.onc.1208646. PubMed: http://www.ncbi.nlm.nih.gov/pubmed/15806142?dopt=Abstract
December 8, 1997, Vol 157, No. 22 Progress Toward the Elimination of Hepatitis B Virus Transmission Among Health Care Workers in the United States Francis J. Mahoney, MD; Kimberly Stewart, MPH; Hanxian Hu, MD, MPH; Patrick Coleman, PhD; Miriam J. Alter, PhD Arch Intern Med. 1997;157(22):2601-2605. doi:10.1001/archinte.1997.00440430083010. Background: Hepatitis B virus (HBV) infection is a well-recognized occupational risk for health care workers (HCWs). Vaccination coverage, disease trends, and the need for booster doses after hepatitis B vaccination of adults have been the subject of intense study during the 15 years of the vaccine's availability. Methods: Vaccination coverage of HCWs was determined from a review of medical records on a sample of employees from 113 randomly selected hospitals. The number of HBV infections among HCWs and the general US population for 1983 through 1995 was estimated from national surveillance data. Studies on long-term protection after hepatitis B vaccination of adults were reviewed. Results: A total of 2837 employee medical records were reviewed; 2532 employees (90%) were eligible to receive hepatitis B vaccine, and 66.5% of them (95% confidence interval, 61.9%-70.9%) had received 3 doses of hepatitis B vaccine. Vaccination coverage was highest (75%) for personnel with frequent exposure to infectious body fluids (phlebotomists, laboratory personnel, and nursing staff) and lowest (45%) for employees at low risk for exposure (dietary and clerical staff). The number of HBV infections among HCWs declined from 17,000 in 1983 to 400 in 1995. The 95% decline in incidence observed among HCWs is 1.5-fold greater than the reduction in incidence in the general US population.
Studies on long-term protection demonstrate that vaccine-induced protection persists at least 11 years, even when titers of antibody to hepatitis B surface antigen decline below detectable levels. Conclusions: Although a high percentage of HCWs have been fully vaccinated with hepatitis B vaccine, efforts need to be made to improve this coverage. There has been a dramatic decrease in the number of HBV infections among HCWs, who are now at lower risk of HBV infection than the general US population. Vaccine-induced protection persists at least 11 years, and booster doses are not needed at this time for adults who have responded to vaccination. Arch Intern Med. 1997;157:2601-2605
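As a quick sketch of the percent-decline arithmetic behind the abstract's figures: the drop in reported HCW infection counts from 17,000 (1983) to 400 (1995) is a decline of roughly 98% in counts. (The abstract's 95% figure is for incidence, i.e. the rate per worker, which also depends on workforce size not given here.)

```python
# Percent decline between two counts, applied to the abstract's
# reported HCW infection numbers (17,000 in 1983 -> 400 in 1995).

def percent_decline(start: float, end: float) -> float:
    """Percentage decrease from start to end."""
    return 100.0 * (start - end) / start

hcw_decline = percent_decline(17_000, 400)
print(f"Decline in HCW infection counts: {hcw_decline:.1f}%")
```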
The Research Forum Blog Ways of Seeing Posted on October 28, 2013 by Niccola Shearman “Visual Insights: What Art Can Tell Us About the Brain” Professor Margaret Livingstone, Tuesday 22 October 2013. For the second Frank Davis memorial lecture of 2013, the Courtauld community and guests were given a privileged glimpse into the workings of our own visual processing by Margaret Livingstone of Harvard Medical School. Applying developments in neurobiology to a study of pictorial reception, Professor Livingstone’s research in recent years has explored the evidence that artists also spend a lot of time trying to figure out how we see. Along with plentiful information on the finely tuned operation of neurons within the visual pathway, it was the interactive experience – facilitated by red-green cinema specs – which cemented for the audience the evidence of how the brain processes retinal responses to pictures, faces, and pictures of faces. Claude Monet, Autumn Effect at Argenteuil, 1873. The shimmering effect of the reflection can be explained by equal values of luminance in the colour choices. Those who had turned up to hear the big neurological reveal on the secret of the Mona Lisa’s smile were not to be disappointed, but first we needed the basic picture. Through diagrams illustrating the opposing actions of ganglion cells on the retina, which can either fire or suppress signals depending on the area receiving light, Professor Livingstone demonstrated the dominant principles of luminance and contrast at the baseline of vision. This evidence helps explain the handling of light and shadow throughout the history of art, from the uniform brilliance of haloes in a Duccio altarpiece to Impressionist experiments with movement created by subtle variants in light value.
Such effects were further explained by a diagram of the primate brain showing the division of two distinct functions: the ‘what system’, which has developed to recognise objects, colour and faces; and the ‘where system’, which takes the more general role of detecting spatial relations of depth, distance, figure/ground, and movement. These separate functions are behind the puzzling effects of optical illusions and those red-green patterns familiar from optical examinations; and, as illustrated with works by Monet and Mondrian, are expertly manipulated by visual artists. Correspondingly, we were shown how it could be the difference in acuity between central and peripheral vision that lies behind the enigmatic smile of the Mona Lisa. Returning to the visual peculiarities of artists themselves, the lecture concluded with an intriguing insight into the properties of stereovision and the likelihood of ocular misalignment or of dyslexia as a contributing factor in the artist’s particular facility in translating volumes into flat pictures. A graph based on Rembrandt’s depictions of his own eyes in a series of painted and etched self-portraits provided a convincing argument in favour of the research, as did Professor Livingstone’s parting comment; namely, that ‘if you can make a graph of the unlikeliest thing, you can get published’. The background to this science and its application to artistic vision are explained in Margaret Livingstone’s book, Vision and Art (2002), available in the Courtauld Library. Tags: Frank Davis Memorial Lecture Series, Research Forum, vision science, visual art The Art of Collecting: Questioning Status and Practices Posted on June 21, 2013 by Megan N. Liberty In this workshop, held on Thursday 13 June, Courtauld students Agathe Jacquemet and Amélie Timmermans set out to explore why and how people and organizations collect art.
The afternoon began with a short video of three different collectors discussing why they collect, what defines them as a collector, and how they purchase and develop their collections. Following the video, Jeffrey Boloten, Co-Founder and Managing Director of ArtInsight Ltd, introduced the workshop’s speakers, who represented both private and public collections. The first half of the afternoon was devoted to private collections and featured Philip Hook from Sotheby’s Impressionist and Modern Department and art advisor Alex Heath, who is Chairman and Managing Director of International Art Consultants Ltd. Hook’s lecture, titled Why Collectors Collect, presented a pie chart of the various motives for developing private collections: spiritual enlightenment, investments, status, and aesthetic/intellectual pleasure. Overall, Hook promoted the virtues of collecting for spiritual enlightenment and intellectual pleasure, concluding with, ‘You need to see your art in order to stay alive’. Heath’s lecture, titled Advising Collectors in their Collections, approached private collections from the opposite angle, examining the methods and factors essential to advising a broad range of collectors. Having little background knowledge on economic and financial theories, I found Heath’s treatment of art as a good to be consumed and his discussion of the importance of wealth management in building private collections to be particularly interesting. The second half of the workshop had a very different tone, focusing on building public collections, particularly the Art Fund’s, discussed by Head of Policy and Strategy Sally Wrampling, and the Courtauld Gallery’s, discussed by the Head of the Gallery Ernst Vegelin. Wrampling presented several of the Art Fund’s joint purchases from the past few years and explained the process of helping other institutions acquire works with Art Fund support. 
She stressed the importance of the support of Art Fund members and donors to the success of the Fund over the years. Vegelin’s lecture highlighted the importance of three of the Courtauld’s own private collectors: Samuel Courtauld, Lord Lee of Fareham, and Sir Robert Witt. It was particularly relevant in light of the current exhibition at the Gallery, Collecting Gauguin: Samuel Courtauld in the ’20s, which showcases the benefits of Samuel Courtauld’s foresight in building his own collection. According to Vegelin, ninety-five percent of the Courtauld’s collection is composed of gifts, making it a prime example of the fruits of meticulous private collectors. It also made it a fitting topic to end the workshop with, as it illustrates the transformation of private collections into public ones. The Art of Collecting provided an impressive range of speakers and topics, highlighting the difficulties with and complexity of developing and managing both private and public collections and opening up further debate on the changing function and status of collecting art in the twenty-first century. Tags: art, art collecting, Art Fund, economics, private collections, public collections, Research Forum, Sotheby's, The Courtauld Collection, visual art Light, Colour and Veils Posted on June 13, 2013 by James Alexander Cameron Some conferences, such as last month’s Beyond the Western Mediterranean, set out to break new ground, but some are held just to celebrate and inspire. This was the mood for the day-long event at The Courtauld in honour of retiring professor Paul Hills. The duly prophetic Peter Mack from the Warburg set the tone for the day by explaining how Paul, with his deep pleasure in paintings, uses them as tools with which to think. Getting intense enjoyment out of a work of art is something I feel is a skill in itself.
However, it seems almost selfish to indulge in if you can’t pass anything from the experience to others without pretence or arrogance, two words that could never apply to Professor Hills. Highlights of the day’s papers included Jane Bridgeman’s explanation of the different sorts of female head-coverings in Renaissance Italy: mantles, veils and wimples. It was stimulating to be reminded that the beautiful costumes of the Madonna that the Christ Child tugs at in so many medieval paintings are in essence a symbolic yoke of the repressed female. Beverly Louise Brown’s reassessment of Titian’s Jacopo Pesaro presented by Pope Alexander VI to St Peter was particularly lucid and revealing. The painting is usually considered a clumsy piece of juvenilia in which the young artist could not even get St Peter’s mantle the right colour, but Dr. Brown showed how Titian was working in a tradition of dressing St Peter in red papal robes, and that the saint’s somewhat stilted appearance may have been an allusion to his statue in the Vatican, of which pilgrims would kiss the foot. Paul Smith’s characteristically packed paper on colour theory formed an excellent closing to the conference. What made the day special was the presence of actual art and artists: something Professor Hills surely appreciated. The print room had been prepared with a selection of appropriate master drawings, serving to bring people together at the lunch break and prompt rich discussion at this often awkward stage of a Saturday conference, when many disappear up the Strand in search of calorific sustenance. Films were also presented, in person by Nicky Hamlyn and in absentia by Shirazeh Houshiary, which prompted thoughts on the materiality of the veil, as well as the noisiness of the 16mm projector (a topic for another conference). Christopher Le Brun, president of the Royal Academy, spoke openly about his own paintings: how by veiling the canvas in paint he unveiled his own persona to the world at large.
It was a reminder that the creation of the work of art could be an uncomfortable process, much more fraught than the art historians’ task of picking it apart at their leisure. I work with so many broken bits of English Gothic art, sad shadows of great works through poor drawings, all but demolished Abbey ruins. However, this inspirational conference reminded me why I want to see them as an art historian, and why I yearn to pass on at least a small fraction of the pleasure which they give me, to show that they are examples of beautiful and profound music in a noisy world. Tags: Aesthetics, art, art history, colour, light, Paul Hills, Renaissance Art, Research Forum, veils, visual art Patterns of Dissent: Contemporaneity in South Asian Art – Subodh Gupta & The Routes of Success Posted on June 10, 2013 by Ashitha Nagesh Subodh Gupta speaking at The Courtauld. Photo by Ashitha Nagesh. Being familiar with Subodh Gupta’s large-scale sculptural installations, it was surprising to hear him speak at The Courtauld on 21 May, for his particularly modest, humble manner of approaching his own artworks and practice was somewhat unexpected in light of his ambitious pieces. One thing the artist and his work clearly have in common, however, is that they are immensely powerful. His latest installation at Hauser & Wirth Savile Row, What does the vessel contain, that the river does not (2012), is a huge Keralan fishing ship, hand-sewn in the traditional way and filled with the everyday Indian domestic objects that Gupta is perhaps best recognised for – steel kitchenware – amongst other pieces of furniture, broken or whole. This miscellany collected within a symbol of travel and trade seems a fitting culmination of the fourteen years of work that Gupta discussed at the seminar, for his oeuvre is inherently tied up in his personal experiences.
It was interesting to hear the anecdotes that accompany some of his most well-known pieces, as they are linked to his life – whether they were events that had taken place, conversations he had had, or simply his own thought processes – as Gupta told us, “My journey is my art.” The importance of his discovery of Duchamp was particularly touching, and one that makes so much sense when considering his sculpture – the way he elevates the quotidian to something aesthetically beautiful is quintessentially Duchampian. For example, speaking about his works Across Seven Seas and Everything is Inside (both 2004), he recalled how he used to travel to Europe via the Gulf, and on his return journey would see Indians who were working in the Middle East with large, tightly and carefully wrapped bundles. He asked people what they had packed in there, expecting them to contain fragile and precious items; however, they usually only held gifts for the workers’ families back home. He found these bundles, as commonplace as they turned out to be, so beautiful that he created the two sculptures based on them. Aam Aadmi (2009), a collection of incredibly realistic painted bronze mangoes in a wooden crate, is a similar treatment of the everyday – and as “aam aadmi” (literally translating from Hindi as “mango people”) is a colloquial term used by politicians to refer to the “common people”, it becomes a celebration not only of everyday objects but of the general masses. Gupta then went on to talk about his early years, the beginning of his artistic career in art school in Patna, how he initially wanted to become an actor, as well as his experience of working in the Khoj workshop in 1997 – a liberating environment where the artists could work free from gallery influence for the first time. Needless to say, it was fascinating to hear the experiences that preceded such an incredible body of work.
Tags: Aesthetics, Art & Life, contemporary art, Duchamp, Hauser & Wirth, Histories, India, Research Forum, Subodh Gupta, visual art

Medieval Work in Progress: Dr Robert Mills on Medieval Art and the Question of the Animal Posted on June 5, 2013 by Ashitha Nagesh Unicorn being slain, from the Rochester Bestiary (London, British Library, Royal MS 12 F xiii), folio 10v. Although given a rather moderate-sounding title, the seminar on the 22nd of May, “Medieval Art and the Question of the Animal,” immediately became much more complex than initially expected as soon as Dr Mills started speaking about the bestialisation of the human in the context of medieval torture and martyrdom images (and for those of us with darker tastes, much more interesting too). Mills began by addressing theories of “Speciesism” and considerations of how violence is represented from the perspective of the animal, and deconstructed these ideas by considering what actually constituted “animal perspective” in the Middle Ages. In this context, Mills looked closely at how animals functioned in a symbolic manner in the late medieval period, and how this informed the pedagogical functions of bestiaries, such as the Rochester Bestiary (BL MS Royal 12 F xiii) and another in the British Library, MS Harley 3244. This was but a springboard, however, for Mills’ exploration of animality within the category of the human. Drawing upon Aristotle’s claims that man is both beyond, yet also within, the animal, and that “man is by nature a political animal,” he established that the distinction between “human” and “animal” is essentially porous – the foundation of his study of both animal and human slaughter in manuscripts. There were some beautiful examples of this – particularly in Leviticus 1 of the Bible Moralisée (ÖNB Vienna 2554, on folio 27r). On this folio was a richly illuminated, deep vermillion rendering of the flaying of a cow, with the corresponding moralisation equally graphically depicting the skinning alive of St Bartholomew.
Here, the flaying of the cow was vividly conflated with human martyrdom, and the torture of both cow and saint was represented almost identically. Similarly, in another Bible Moralisée (Naples, MS Français 9561), the orientation of the humans and the animals undergoing torture was exactly the same, as was the nature of the torture and the torment on their faces – an interesting revelation, considering the common perception of medieval attitudes towards animal rights. The martyrs are conspicuously dehumanised, heightening the effect of the torture, whilst the animals are simultaneously humanised. Nowhere is the porousness of the distinction clearer than here. What I found most interesting, though, was Dr Mills’ idea of medieval books themselves literally representing the word-made-flesh – that the bloody, torturous image of the cow being flayed in Vienna 2554 vividly recalls the production of the parchment that the illumination is painted on; medieval parchment, also called vellum, was itself made from cow or calf skin. The parchment in this context becomes performative, and is an active component of the cow’s torture; “the violence on the page,” Dr Mills explained, “serves as an uncanny reminder of the violence behind the production of the page.” Tags: Aesthetics, animals, art, Histories, martyrdom, Medieval Art, narratives, Speciesism, visual art

Memorabilia from an Age of Troublemaking – Liu Dahong and Katie Hill in Conversation Posted on May 17, 2013 by Megan N. Liberty Liu Dahong, Gazing into Space. Oil on Canvas, 2011. Courtesy the artist, Hanart TZ Gallery, Hong Kong and Rossi & Rossi, London. Chinese contemporary artist Liu Dahong began his presentation on 30 April by stating that he has lived through three dynasties—the first being Chairman Mao’s reign, the second when he left power, and the third the current regime. He explained that this is the lens through which all of his paintings must be viewed.
Liu’s work illustrates the merging of past and present histories by weaving references from his own childhood with contemporary political issues. It not only reflects his own histories, but also the nature of history as both an account of factual events and a myth composed of personal memories. Sotheby’s Institute of Art lecturer Dr. Katie Hill engaged Liu in dialogue about the overarching themes present in his most recent series of work, ‘Childhood’, currently on view at Rossi & Rossi. This show presents the work, along with written text by Liu, in book form. During the conversation, Hill described this book as a kind of ‘textbook’ that was available for visitors to purchase and contribute to. As Liu explained, alongside the pages of his images and explanations were also blank notebook pages to which spectators could add their own impressions and thoughts about his work. This concept, he noted, comes from his continued practice of journal keeping, again bringing elements of his childhood history into his contemporary practices, merging his own history and opinions with those of his audience. I was particularly interested in the dialogue regarding Liu’s painting, Battling the Seaweed Sea (2011). Liu introduced this image with a folktale from his childhood about children who were brave enough to stay out with their sheep during a storm. Thus the image depicts two mischievous children paddling through the water ‘battling the seaweed.’ But as Hill suggested, the image also reflects contemporary ecological issues: the green sea signifies extreme pollution. Again, Liu brings together the myths of his childhood with current histories, creating a visual link between the past and present. Another link present throughout Liu’s body of work is one between the Far East and the West. The first work Liu presented was a digital tour of a ‘Chinese Church’ to highlight the differences between Chinese and Western culture.
Many of Liu’s works utilize Western, particularly Christian, motifs and structures to display distinctly Eastern themes. During the audience question-and-answer session, Hill and Liu discussed his reasons for adopting this format. Utilizing Christian iconography, but placing Chairman Mao’s image in it, demonstrates the widespread influence Mao had, comparable to that of Christianity. The Western forms facilitate the translation of the influence of Chinese political figures. Overall, Hill and Liu highlighted this idea of translation—translating various histories and myths, translating childhood experience, and translating Chinese culture and politics into visual forms that can be understood and experienced by a broad and diverse audience. Tags: Aesthetics, art, book art, childhood, China, Histories, Politics, Research Forum, visual art

Utopia III: Contemporary Russian Art and the Ruins of Utopia Posted on April 5, 2013 by Lola Weddleton Ilya Kabakov, The Man Who Flew Into Space from his Apartment, 1968–88. In February, I attended the Utopia III conference held by the Cambridge Courtauld Russian Art Centre. The conference was the third in a series addressing the theme of ‘utopia’ within Russian art, with each focusing on a different time period; Utopia III focused on contemporary art. This was the first of the conference series I was able to attend, and it left me regretting that I had missed the previous two. Days later, I still found myself thinking about the idea of utopia, both as it concerned Soviet art and as it connected to other realms of my academic and non-academic interests – particularly, my penchant for reading dystopian novels, which normally constitutes a wholly non-academic escape. I found the keynote speaker, Mikhail Epstein, particularly intriguing in this respect.
His topic, ‘The Philosophical Underpinnings of Russian Conceptualism’, drew parallels for me between the concept of the utopian he described, which he argued was grounded in philosophical ideas predating Soviet ideology, and the philosophical exercise that seems to be at the heart of many dystopian novels. Central to the genre, of course, is the desire to posit the ramifications of Soviet-era politics and the totalitarian moments of 20th-century history, but the genre also often draws on motifs from classical-era philosophies of government. Though by a strict definition ‘utopian’ and ‘dystopian’ are opposing ideas, they exist in tension, with the second reliant upon the first to exist. Both are united in a joint exercise in constructing an alternate version of reality: one optimistically plausible, the other existing in order to identify the fundamental flaws in the former. Though the term ‘dystopia’ was not investigated at this conference, I often detected the blurry line between the two. One example, used by multiple speakers, was Ilya Kabakov’s “The Man Who Flew Into Space from his Apartment.” This installation depicts the apartment of the eponymous man in the aftermath of his flight into space. His cramped living quarters, wallpapered with Soviet propaganda, are now strewn with the debris of his successful space mission. Through the work’s highly narrative composition, the viewer is able to infer the action that preceded the current tableau, while simultaneously detecting the cracks in a supposedly utopian Soviet society: the propaganda feels suffocating, and must be escaped. Epstein proposed that conceptual art is the visual counterpart to philosophy, and has been understood this way by some of the artists themselves. This proved somewhat controversial in the Q&A portion following his talk, although I found his argument fairly convincing.
In my understanding of dystopian literature the connection seems apt: conceptual art, like literature, becomes a method of exploring abstract ideas in a concrete sense, as if running a simulation to prove exactly where grand theories, in our imperfect reality, will fall short. Tags: Aesthetics, art, CCRAC, conceptualism, contemporary art, dystopia, Histories, Philosophy, Politics, Research Forum, Russian art, utopia, visual art

Mark Cheetham, ‘Landscape & Language: from Conceptualism to Ecoaesthetics’ and Mark with Mariele Neudecker, ‘Re-Inventing Landscape Traditions for the Present’ Posted on March 11, 2013 by Tom Balfe N. E. Thing Co., Quarter Mile Landscape, 1969. In the late 1960s, the N. E. Thing Co., a Canadian art collective, produced a series of interventions exploring the connection between landscape and language. They set up road signs next to nondescript stretches of countryside with messages like ‘You will soon pass by a ¼ mile N. E. Thing Co. landscape’, highlighting the fact that all it takes to turn mere land into ‘landscape’ is the addition of a short text. Landscape, the signs suggest, is simply where we are directed to look. For Mark Cheetham, speaking on a Monday in early October 2012 in the first of two events on the role of nature in modern and contemporary art, works like these are a stark reminder that our experience of our environment is always culturally mediated. In his talk, he went on to analyse some important recent artworks which approach nature through the medium of language. One early conceptual piece by Richard Long, for example, consists solely of lists of instructions on how to arrange sticks and other natural objects in the gallery.
The lists draw attention to the display conventions that ‘tame’ nature when it is brought into the gallery, yet are themselves instances of these conventions (which usually remain unwritten); as such, they reveal the impossibility of capturing nature in an unadulterated form, even when, as with Long’s sticks, it appears to survive the conversion into art raw and unworked. Mariele Neudecker, I Don’t Know How I Resisted the Urge to Run, 1998, mixed media including water, acrylic medium, salt and fibreglass, 75 x 90 x 61cm (with plinth). The second event, the following day, gave us the chance to think further about these issues in relation to the work of artist Mariele Neudecker, who joined Cheetham to discuss the question of how the Western landscape tradition has been reinterpreted in recent art practice. Neudecker began by offering a survey of her career, focusing on particular works which speak to this theme. Characteristic of her thoughtful approach to the landscape tradition are her tank installations: backlit vitrines which contain miniature landscape dioramas submerged in hazy coloured fluid. These eerie, beautiful works reference the paintings of Caspar David Friedrich through their titles and appearance; at the same time, their relationship to this giant of the tradition is not one of straightforward emulation. As Cheetham noted later on, in the way that they demand to be viewed from different angles, and in their refusal to hide their central framing device, the vitrine, Neudecker’s tanks reveal the extent to which Friedrich presents a vision of the northern landscape cut off from time and embodied experience. I agree; but perhaps the tanks’ sensuous and explicitly visual response to Friedrich should also alert us to the fact that – for artists at least – the dialogue with tradition tends to be conducted in aesthetic as well as linguistic or conceptual terms.
This can be an uncomfortable fact for art historians, who work within a discipline afflicted by an iconophobia so profound that it often seems more acceptable to look at anything (diaries, archives, inventories, texts, contexts) rather than the artwork itself. Events like this stimulating encounter between an artist and an art historian help us all to see a little further beyond our self-imposed boundaries. Tags: Aesthetics, conceptual art, ecoaesthetics, environmentalism, Histories, landscape, Mariele Neudecker, N.E. Thing Co, Politics, Research Forum, sculptural forms, visual art

Martin Myrone, ‘“Like a great circus tent”: folk art, art history and the museum’ Posted on March 1, 2013 by Tom Balfe George Smart, The Earth Stopper, early 19th century, applied felt on watercolour paper background, 32.5 x 44cm. London art market, 2006. It can be easy to forget how restricted a view of art production most of us really have. The works sitting pretty in our major museums and galleries are the towering emergent trees in our cultural ecosystem; while often wholly unrepresentative of mainstream forms of creative activity (being, as we say, ‘original’), they nevertheless absorb a disproportionately large share of the available resources: scholarship, exposure in exhibitions and publications, and money. At the other end of the scale – in the murky zone below the forest canopy – are the various popular practices known as ‘folk art’. This term encircles a formidably diverse range of phenomena. It can refer to artefacts which are recognisable as works of art, such as the small felt collage pictures made by George Smart, the tailor from Frant, as a sideline to his business. But it also encompasses context-specific performances (morris-dancing, story-telling) and activities so ephemeral or routine – traditional jam making, for example – that to refer to them as art at all requires a stretch of the imagination for most historians.
In his talk on 1st October 2012, curator Martin Myrone explored the museological issues raised by the British folk art tradition, focusing on the question of how this fascinating but deeply problematic body of material might best be offered to the public in an upcoming exhibition at Tate Britain. Lion figurehead, c.1720, wood and oil paint, 234 x 51 x 58cm. National Maritime Museum. As the case studies which Myrone presented to us reveal, a key difficulty associated with folk art is its resistance to the various labels (author, date, genre, etc.) which museums rely upon to contextualise and interpret objects for their audiences. One of his most striking examples, the ship’s figureheads preserved in British naval collections, illustrates some of the complexities involved here. These anonymous wooden sculptures cannot really be viewed as instances of a period style because over the centuries they have been repeatedly stripped down and repainted. Nor does their level of craftsmanship allow them to be presented as ‘timeless’ aesthetic objects which can be appreciated by museum visitors without a supporting framework of historical information. Like most folk art, they occupy an uneasy position between high art and the straightforwardly functional. The ambiguous status of folk art also carries a political charge. As one contributor in the discussion session pointed out, to transplant a work from, say, the Museum of English Rural Life in Reading into a prominent art museum like the Tate is a significant act of redescription, one which involves certain risks. If the work falls short of the high aesthetic standards with which its new home is associated, it may end up seeming hopelessly clumsy, vulgar or irrelevant; a gesture intended to celebrate folk art may expose it to ridicule. On the other hand, bringing unusual materials into the museum can also help to refresh our ideas of what counts as art.
It will be interesting to see how Myrone and his team choose to manage the challenges of folk art in a few years’ time. Tags: Aesthetics, art, Folk Art, George Smart, Histories, museums, narratives, Politics, Research Forum, visual art

ORIENTALISM AND “ISLAMOPHILIA” Posted on November 20, 2012 by Tim Satterthwaite Courtyard, Madrasa Bou Inania, Fez, Morocco. This year’s Frank Davis Memorial Lecture Series, titled Histories in Transition, explores the theme of historicism in visual art of the modern period. For the third lecture in the series, Rémi Labrusse, of Université de Paris Ouest Nanterre, described idealist visions of the Islamic Middle East in nineteenth-century art and scholarship. Prof. Labrusse began the talk with an apology for his imperfect English, and then spoke in elegant English, and with perfect clarity, for the following hour. This was one of those rare moments, for me, which define what art history is all about: capturing the rich and complex ways in which artefacts and images incorporate the values and meanings of the culture that produced them. A tile pattern from the Alhambra, transcribed to a nineteenth-century pattern book, inflects the crisis in the self-image of imperialist Europe; or describes the shift from figuration to geometric abstraction in the history of decorative art. The narratives that intersect the visual object are never exhausted – and that’s what makes art history so fascinating. Rémi Labrusse’s account traced two broad ideological tendencies that governed visualisations of Islam in nineteenth-century Europe. The first of these, termed orientalism, describes the construction of a fictive, exotic world, embodying values imperilled by the rise of industrial capitalism. In the works of painters such as Jean-Léon Gérôme or Frank Dillon, the Arabic world was projected as a fantasy realm, absent of modernity, an erotic blend of timeless sophistication and heathen barbarism.
As Labrusse described, the inherent tensions in the imperialist project are implicit in the paintings: the ‘Orient’ was defined by its isolation from modernity, so these depictions can describe only its defilement, or its demise. Vasily Vereshchagin’s horrifying Apotheosis of War (1871), a desert pyramid of skulls with feeding crows, echoes the meticulous naturalism of Gérôme’s Arabian palace scenes: these are opposing perspectives on the same imperialist project. The history painting aesthetic, employed in the depiction of a fictionalised actuality, fails to suppress the underpinning brutality of nineteenth-century colonialism. In opposition to the orientalist fantasies of the genre painters, Labrusse suggests that a more culturally sensitive, Islamophilic tendency emerged in European visual culture in the second half of the nineteenth century. Studies of Islamic ornamentation, by authors such as Owen Jones, became exemplary texts in the movement to reform the decorative arts, following the aesthetic debacle of the Great Exhibition of 1851. Rather than serving as a figure of exoticism and colonial conquest, Islamic art offered, for the Islamophiles, a dazzling contrast to the decadent styles of the ‘age of ugliness’. The lecture concluded with the outline of a fascinating hypothesis – my scribbled notes are a poor record of Labrusse’s subtle ideas. Among the reformists, he suggests, Islamophilia became a means of reformulating the Romantic project of classical renewal. Islamic tradition, unlike Greek and Roman antiquity, offered a ‘weak’ model for European modernity, a path to aesthetic renewal without the oedipal constraints of the classical tradition. I am in danger of misrepresenting his arguments, so I had better stop there. French readers can find more on this fascinating theme in Labrusse’s Islamophiles: l’Europe moderne et les Arts d’Islam, published in 2011.
Tags: Aesthetics, Alhambra, art, colonialism, decorative art, figuration, Frank Dillon, geometric abstraction, Great Exhibition of 1851, Histories, history painting, idealism, idealist visions, imperialism, industrial capitalism, Islamophilia, Jean-Léon Gérôme, Labrusse, Middle East, narratives, nineteenth-century art, Orient, Orientalism, painting, Research Forum, Transition, visual art

About this blog: Views and Reviews is an opportunity for diverse voices from the Courtauld to respond to Research Forum events. © Copyright 2015 The Courtauld Institute of Art, Somerset House, Strand, London WC2R 0RN, UK
Combined Heat and Power Saves Money, Reduces Emissions, and Improves Energy Security Combined heat and power is a proven win-win for reducing costs and emissions while providing a secure, reliable, and resilient energy supply in congested power markets and in the wake of power grid failures. These features can be especially important for critical infrastructure such as hospitals, water supply and wastewater treatment facilities, nursing homes, emergency centers, and other facilities for which the loss of thermal and electric power would cause significant economic harm or undermine public health or safety. This week, EESI hosted a briefing and released a Fact Sheet on related topics. At the May 22 EESI briefing, How Combined Heat and Power Saves Money, Reduces Emissions, and Improves Energy Security, Anne Hampson, Senior Associate, ICF International, provided an introductory overview of combined heat and power (CHP). Susan Wickwire, Chief, Energy Supply and Industry Branch, Climate Protection Partnerships Division, U.S. EPA, spoke about EPA and other interagency initiatives to advance CHP and to help overcome some key challenges for developers. Tom Bourgeois, Deputy Director, Pace Energy and Climate Center, Pace University, described recent developments in the Northeast, the resilient performance of CHP facilities in the wake of recent extreme weather, and the future potential of CHP development in the region. Robert Araujo, Manager for Sustainable Development and Environment, Health and Safety, Sikorsky Helicopter, described Sikorsky’s experience developing a CHP system for their facility in Connecticut and the way it performed in the aftermath of recent extreme weather events, including Hurricane Sandy. And Dale Louda, Executive Director, CHP Association, summed up the multiple benefits of CHP for business, industry, critical infrastructure, and the nation as a whole.
You can watch the briefing and access links to related reports and copies of the PowerPoint presentations at EESI’s website. In addition, EESI released a Fact Sheet, "Combined Heat and Power: Pathway to Lower Energy Costs, Reduced Emissions, Secure and Resilient Energy Supply," which provides an overview of the key benefits of CHP, the future potential of CHP in the United States, some of the barriers, and state and federal policy approaches to advancing CHP. The Fact Sheet also features a brief overview of CHP systems that run on renewable biomass. When biomass is produced sustainably, biomass-fueled CHP systems can produce heat and power with very few net greenhouse gas (GHG) emissions on a life cycle basis, and thus can be much more climate-friendly than systems fueled with fossil natural gas, coal, or oil. By substituting renewable biomass for fossil fuel, carbon emissions from non-renewable, finite fossil fuels can be avoided. Further, because almost all of the biomass used for biomass CHP today is derived from forestry or agricultural residues or urban and agricultural waste streams, significant additional emissions of climate-changing methane, which would otherwise be released to the atmosphere, are avoided. This combination of emission-reducing components can actually make some biomass CHP systems net carbon negative on a life cycle basis. For links to previous SBFF posts on biomass CHP, see these: Can Renewable Biomass Combined Heat and Power Replace Coal Power? (October 11, 2012) White House Calls for Boosting Combined Heat and Power: Biomass Can Help (August 31, 2012) More Biomass Combined Heat and Power Projects (January 26, 2012) Biomass Combined Heat and Power: A Smart Long Term Investment? (January 19, 2012) Tags: CHP - Combined Heat and Power, Sustainable Biomass and Energy
Anna Collette Hunt: 10,000 Ceramic Insects Swarm English Manor BY Kate Horowitz From a distance, the dense river of insects surging up the walls and ceilings of Wollaton Hall is pretty creepy. But take just a few steps closer, and you’ll be rewarded with an astonishing surprise: the swarm is composed entirely of ceramic bugs, each hand-made and unique. The thousands of pieces are an installation by artist Anna Collette Hunt, who draws on natural history and fairy tales to create her gorgeous, unsettling environments. “Stirring the Swarm” began in 2012, when Hunt was invited to create a solo exhibition in the Nottingham Natural History Museum, which occupies a former manor house in Nottingham, England. Hunt tells mental_floss that, at first, the idea was a bit overwhelming. “It was such a large space,” she said, “and I didn’t even have a kiln or a studio!” Casting about for inspiration, Hunt took a behind-the-scenes tour with the museum’s curator of taxidermy. “She took me around this labyrinth that is their collections warehouse,” Hunt says. “We had to squeeze past headless or damaged taxidermy animals, and all the specimens had plastic bags over their heads. There were jam jars of noses and a drawer of glass eyes.” The overall effect was unnerving. After the tour, Hunt continued to the entomology room, where she had what she describes as a peculiar sort of daydream. “It was late afternoon and the golden sunlight was flickering on the entomology cabinets—and it looked as if the pin-speared specimens were waking up! And this wondrous idea flooded into my mind, the tale of an entomology collection waking up, and smashing out of their time capsules and soaring off into the night.” Hunt began to draw strange, hybrid insects, all with butterfly wings. She consulted an entomologist, who helped her create Latin names for each imaginary species. Hunt created models and molds of each species, and a team of assistants helped her cast and glaze them.
Because Hunt envisioned the insects as native to the museum, she chose colors from the manor’s interior and even transferred some of the building’s wallpaper into their wings. Hunt didn't forget the way she felt while looking at the museum's stuffed animals. She also recognizes their value. “The thought of the act of killing in this way makes me feel ill,” she says, “and I’ve had to reflect greatly on this, as I use museum collections for lots of my research. It’s too late [for these animals], but transforming them into exhibits at museums seems like the correct honor for their sacrifice. I think the museum specimens are treated with respect and can bring joy and knowledge to thousands." As a tribute to the real insects pinned to boards in entomology collections around the world, Hunt added a trickle of gold to some of her insects’ bodies. The exhibition ultimately comprised more than 10,000 insects, bursting from glass cases, climbing the walls, and clinging to the ceiling. After the show closed, Hunt took several thousand bugs on the road for a touring show. The rest remained in Wollaton Hall as a permanent display to startle and amaze museum-goers for years to come. Hunt’s studio is still producing bugs and sells them in an online shop. Although her work in the museum is complete, Hunt’s passion for natural history burns on. “I have so much … wonder for this world,” she says. “The soil, the sea, the stars, and our expanding universe. The biodiversity of animals and plants, fungi, moss—our world is just a miracle, and I am awestruck.” All images courtesy of Anna Collette Hunt. Tags: Art, beetle, insects, museums

Courtesy Chronicle Books. Inside This Pop-Up Book Are a Planetarium, a Speaker, a Decoder Ring, and More Designer Kelli Anderson's new book is for more than just reading. This Book Is a Planetarium is really a collection of paper gadgets. With each thick, card stock page you turn, another surprise pops out.
"This book concisely explains—and actively demonstrates with six functional pop-up paper contraptions—the science at play in our everyday world," the book's back cover explains. It turns out, there's a whole lot you can do with a few pieces of paper and a little bit of imagination. There's the eponymous planetarium, a paper dome that you can use with your cell phone's flashlight to project constellations onto the ceiling. There's a conical speaker, which you can use to amplify a smaller music player. There's a spiralgraph you can use to make geometric designs. There's a basic cipher you can use to encode and decode secret messages, and on its reverse side, a calendar. There's a stringed musical instrument you can play on. All are miniature, functional machines that can expand your perceptions of what a simple piece of paper can become. Art books fun News Noriyuki Saitoh Japanese Artist Crafts Intricate Insects Using Bamboo Not everyone finds insects beautiful. Some people think of them as scary, disturbing, or downright disgusting. But when Japanese artist Noriyuki Saitoh looks at a discarded cicada shell or a feeding praying mantis, he sees inspiration for his next creation. Saitoh’s sculptures, spotted over at Colossal, are crafted by hand from bamboo. He uses the natural material to make some incredibly lifelike pieces. In one example, three wasps perch on a piece of honeycomb. In another, two mating dragonflies create a heart shape with their abdomens. The figures he creates aren’t meant to be exact replicas of real insects. Rather, Saitoh starts his process with a list of dimensions and allows room for creativity when fine-tuning the appearances. The sense of movement and level of detail he puts into each sculpture is what makes them look so convincing. You can browse the artist’s work on his website or follow him on social media for more stunning samples from his portfolio. [h/t Colossal] All images courtesy of Noriyuki Saitoh. insects nature News Use Wi-Fi? 
Your Device Is at Risk in the Latest Security Breach
One of the 30 species of garter snake native to North and Central America, the Mexican garter snake (Thamnophis eques megalops) is an aquatic snake found in Arizona, New Mexico, and Mexico. Its primary habitat is southwest riparian areas with permanent water, dense vegetative cover and an abundance of native prey, including fish and leopard frogs. With the decline of riparian habitat in the Southwest to less than 10% of its pre-European extent, the Mexican garter snake has declined to near extinction in the U.S. This decline has been furthered by the introduction and rapid spread of the inexorable bullfrog, which has been eating its way through the aquatic fauna of the Southwest, including the Mexican garter snake itself and most of its native prey species. Photo by Philip C. Rosen. Many people fear snakes, and the snake is often portrayed negatively. The most obvious example is the Bible and the story of the snake that tempted Eve to bite the apple. This has not always been the case, however. In many cultures, snakes have been seen as symbols of fertility, healing and the power of gods. One common symbol of medicine, for example, is the caduceus, a wing-topped staff wound with two snakes; the Greek god Hermes carried a caduceus. A clinical fear of snakes is called ophidiophobia and for some people can be so acute that they won’t go outdoors. Studies have found that fear of snakes is in part genetic, transferred from one generation to the next. Other studies, however, have found that fear of snakes is greater in older people than in children, suggesting a learned component. E.O. Wilson observed that: “the mind is primed to react emotionally to the sight of snakes, not just to fear them but to be aroused and absorbed in their details, to weave stories about them.” With proper education, people’s natural captivation by snakes can likely be turned from fear to fascination.
Despite people’s fears, most snakes are harmless, and even venomous snakes rarely strike. The Mexican garter snake, like other garter snakes, is non-venomous and shy, and thus poses no risk to humans. Instead, it is a vital component of the food chain and forms an important part of what makes the Southwest unique. Today, the Mexican garter snake is limited to an estimated 19 populations in Arizona and New Mexico, most of which consist of one to a few animals and are isolated from other populations. Threats to its habitat include livestock grazing, urban development, groundwater pumping, exotic species, and illegal collection and persecution. Although the species occupies a larger range in Mexico, many of these same threats are present there and its populations are poorly studied. In response to the imperiled status of the Mexican garter snake, the Center for Biological Diversity filed a petition on 12-15-2003 to list the species as threatened or endangered under the Endangered Species Act. The petition seeks protection of the snake’s habitat from livestock grazing and other threats, establishment of minimum instream flows in Southwest rivers, prohibitions on further introductions of non-native fish and amphibians, and more funding for research and removal of non-natives. These actions are likely to benefit the myriad other species imperiled by the loss and degradation of Southwest rivers and streams.

graphic Andrew Rodman ©2002
January 15, 2004
Model-driven redox pathway manipulation for improved isobutanol production in Bacillus subtilis complemented with experimental validation and metabolic profiling analysis. Haishan Qi, Shanshan Li, Sumin Zhao, Di Huang, Menglei Xia, Jianping Wen. To rationally guide the improvement of isobutanol production, metabolic network and metabolic profiling analyses were performed to provide global and profound insights into the cell metabolism of isobutanol-producing Bacillus subtilis. The metabolic flux distributions of strains with different isobutanol production capacities (BSUL03, BSUL04 and BSUL05) hinted at the importance of NADPH for isobutanol biosynthesis. Therefore, the redox pathways were redesigned in this study. To increase the NADPH concentration, glucose-6-phosphate isomerase was inactivated (BSUL06) and glucose-6-phosphate dehydrogenase was overexpressed (BSUL07) successively. As expected, the NADPH pool size in BSUL07 was 4.4-fold higher than that in the parental strain BSUL05. However, cell growth, isobutanol yield, and production were decreased by 46%, 22%, and 80%, respectively. Metabolic profiling analysis suggested that the severely imbalanced redox status might be the primary reason. To solve this problem, the Escherichia coli gene udhA, encoding a transhydrogenase, was further overexpressed (BSUL08), which not only balanced the cellular NAD(P)H/NAD(P)+ ratios but also increased the NADH and ATP concentrations. In addition, a straightforward engineering approach for improving the NADPH concentration was employed in BSUL05 by overexpressing the exogenous gene pntAB, yielding BSUL09. The isobutanol production performance of BSUL09 was poorer than that of BSUL08 but better than those of the other engineered strains. Furthermore, in fed-batch fermentation the isobutanol production and yield of BSUL08 increased by 11% and 19%, up to 6.12 g/L and 0.37 C-mol isobutanol/C-mol glucose (63% of the theoretical value), respectively, compared with the parental strain BSUL05.
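The fed-batch figures above mix mass-based (g/L) and carbon-molar (C-mol/C-mol) units. As a small sketch of the conversion between them: the molecular weights and carbon counts are standard values, but the 30 g of glucose in the example call is hypothetical, since the abstract does not state the amount of glucose consumed.

```python
# Convert a mass-based yield (g isobutanol per g glucose consumed) to the
# carbon-molar yield (C-mol isobutanol per C-mol glucose) used in the abstract.
ISOBUTANOL_MW, ISOBUTANOL_C = 74.12, 4   # g/mol, carbon atoms per molecule
GLUCOSE_MW, GLUCOSE_C = 180.16, 6

def cmol_yield(g_isobutanol, g_glucose):
    """Carbon-molar yield from masses of product formed and substrate consumed."""
    cmol_product = g_isobutanol / ISOBUTANOL_MW * ISOBUTANOL_C
    cmol_substrate = g_glucose / GLUCOSE_MW * GLUCOSE_C
    return cmol_product / cmol_substrate

# Example: 6.12 g isobutanol from a hypothetical 30 g of glucose consumed
print(round(cmol_yield(6.12, 30.0), 2))  # about 0.33 C-mol/C-mol
```

The same helper can be run in reverse to estimate how much glucose a reported C-mol yield implies, which is a useful sanity check when only titers are quoted.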
These results demonstrated that model-driven analysis complemented with metabolic profiling could serve as a useful approach to strain improvement for higher bio-productivity in future applications. Ratiometric Biosensors that Measure Mitochondrial Redox State and ATP in Living Yeast Cells Authors: Jason D. Vevea, Dana M. Alessi Wolken, Theresa C. Swayne, Adam B. White, Liza A. Pon. JoVE Bioengineering Mitochondria have roles in many cellular processes, from energy metabolism and calcium homeostasis to control of cellular lifespan and programmed cell death. These processes affect and are affected by the redox status of and ATP production by mitochondria. Here, we describe the use of two ratiometric, genetically encoded biosensors that can detect mitochondrial redox state and ATP levels at subcellular resolution in living yeast cells. Mitochondrial redox state is measured using redox-sensitive Green Fluorescent Protein (roGFP) that is targeted to the mitochondrial matrix. Mito-roGFP contains cysteines at positions 147 and 204 of GFP, which undergo reversible, environment-dependent oxidation and reduction that in turn alter the excitation spectrum of the protein. MitGO-ATeam is a Förster resonance energy transfer (FRET) probe in which the ε subunit of the FoF1-ATP synthase is sandwiched between FRET donor and acceptor fluorescent proteins. Binding of ATP to the ε subunit results in conformational changes in the protein that bring the FRET donor and acceptor into close proximity and allow fluorescence resonance energy transfer from the donor to the acceptor. Monitoring Intraspecies Competition in a Bacterial Cell Population by Cocultivation of Fluorescently Labelled Strains Authors: Lorena Stannek, Richard Egelkamp, Katrin Gunka, Fabian M. Commichau. Institutions: Georg-August University. Many microorganisms such as bacteria proliferate extremely fast, and their populations may reach high cell densities.
Small fractions of cells in a population always have accumulated mutations that are either detrimental or beneficial for the cell. If the fitness effect of a mutation provides the subpopulation with a strong selective growth advantage, the individuals of this subpopulation may rapidly outcompete and even completely eliminate their immediate fellows. Thus, small genetic changes and selection-driven accumulation of cells that have acquired beneficial mutations may lead to a complete shift of the genotype of a cell population. Here we present a procedure to monitor the rapid clonal expansion and elimination of beneficial and detrimental mutations, respectively, in a bacterial cell population over time by cocultivation of fluorescently labeled individuals of the Gram-positive model bacterium Bacillus subtilis. The method is easy to perform and very illustrative to display intraspecies competition among the individuals in a bacterial cell population. Cellular Biology, Issue 83, Bacillus subtilis, evolution, adaptation, selective pressure, beneficial mutation, intraspecies competition, fluorophore-labelling, Fluorescence Microscopy Stable Isotopic Profiling of Intermediary Metabolic Flux in Developing and Adult Stage Caenorhabditis elegans Authors: Marni J. Falk, Meera Rao, Julian Ostrovsky, Evgueni Daikhin, Ilana Nissim, Marc Yudkoff. Institutions: The Children's Hospital of Philadelphia, University of Pennsylvania. Stable isotopic profiling has long permitted sensitive investigations of the metabolic consequences of genetic mutations and/or pharmacologic therapies in cellular and mammalian models. Here, we describe detailed methods to perform stable isotopic profiling of intermediary metabolism and metabolic flux in the nematode, Caenorhabditis elegans. 
Methods are described for profiling whole worm free amino acids, labeled carbon dioxide, labeled organic acids, and labeled amino acids in animals exposed to stable isotopes either from early development on nematode growth media agar plates or beginning as young adults while exposed to various pharmacologic treatments in liquid culture. Free amino acids are quantified by high performance liquid chromatography (HPLC) in whole worm aliquots extracted in 4% perchloric acid. Universally labeled 13C-glucose or 1,6-13C2-glucose is utilized as the stable isotopic precursor whose labeled carbon is traced by mass spectrometry in carbon dioxide (both atmospheric and dissolved) as well as in metabolites indicative of flux through glycolysis, pyruvate metabolism, and the tricarboxylic acid cycle. Representative results are included to demonstrate effects of isotope exposure time, various bacterial clearing protocols, and alternative worm disruption methods in wild-type nematodes, as well as the relative extent of isotopic incorporation in mitochondrial complex III mutant worms (isp-1(qm150)) relative to wild-type worms. Application of stable isotopic profiling in living nematodes provides a novel capacity to investigate at the whole animal level real-time metabolic alterations that are caused by individual genetic disorders and/or pharmacologic therapies. Developmental Biology, Issue 48, Stable isotope, amino acid quantitation, organic acid quantitation, nematodes, metabolism Protocols for Implementing an Escherichia coli Based TX-TL Cell-Free Expression System for Synthetic Biology Authors: Zachary Z. Sun, Clarmyra A. Hayes, Jonghyeon Shin, Filippo Caschera, Richard M. Murray, Vincent Noireaux. Institutions: California Institute of Technology, California Institute of Technology, Massachusetts Institute of Technology, University of Minnesota. 
Ideal cell-free expression systems can theoretically emulate an in vivo cellular environment in a controlled in vitro platform.1 This is useful for expressing proteins and genetic circuits in a controlled manner as well as for providing a prototyping environment for synthetic biology.2,3 To achieve the latter goal, cell-free expression systems that preserve endogenous Escherichia coli transcription-translation mechanisms are able to reflect in vivo cellular dynamics more accurately than those based on T7 RNA polymerase transcription. We describe the preparation and execution of an efficient endogenous E. coli based transcription-translation (TX-TL) cell-free expression system that can produce protein in amounts equivalent to T7-based systems at a 98% cost reduction relative to similar commercial systems.4,5 The preparation of buffers and crude cell extract is described, as well as the execution of a three-tube TX-TL reaction. The entire protocol takes five days to prepare and yields enough material for up to 3000 single reactions from one preparation. Once prepared, each reaction takes under 8 hr from setup to data collection and analysis. Mechanisms of regulation and transcription exogenous to E. coli, such as lac/tet repressors and T7 RNA polymerase, can be supplemented.6 Endogenous properties, such as mRNA and DNA degradation rates, can also be adjusted.7 The TX-TL cell-free expression system has been demonstrated for large-scale circuit assembly, exploring biological phenomena, and expression of proteins under both T7 and endogenous promoters.6,8 Accompanying mathematical models are available.9,10 The resulting system has unique applications in synthetic biology as a prototyping environment, or "TX-TL biomolecular breadboard."
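The accompanying mathematical models cited above are not reproduced here, but the core transcription-translation dynamics such models describe can be sketched as a minimal two-variable rate model, integrated with a simple Euler scheme. All rate constants below are illustrative placeholders, not measured TX-TL parameters.

```python
# Minimal mRNA/protein production model of a batch TX-TL reaction.
# All parameter values are illustrative placeholders.
ALPHA = 0.5   # transcription rate (nM mRNA / min)
DELTA = 0.1   # mRNA degradation rate (1/min)
KAPPA = 2.0   # translation rate (protein per mRNA per min)
GAMMA = 0.0   # protein degradation (often negligible in batch TX-TL)

def simulate(t_end=240.0, dt=0.1):
    """Fixed-step Euler integration of dm/dt and dp/dt up to t_end minutes."""
    m, p, t = 0.0, 0.0, 0.0
    while t < t_end:
        dm = ALPHA - DELTA * m          # transcription minus mRNA decay
        dp = KAPPA * m - GAMMA * p      # translation minus protein decay
        m += dm * dt
        p += dp * dt
        t += dt
    return m, p

m, p = simulate()
# mRNA approaches its steady state ALPHA / DELTA = 5 nM
print(round(m, 2), round(p, 1))
```

A model like this is the kind of object the cited "biomolecular breadboard" papers fit to fluorescence time courses; resource depletion terms would be added for realism.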
Cellular Biology, Issue 79, Bioengineering, Synthetic Biology, Chemistry Techniques, Synthetic, Molecular Biology, control theory, TX-TL, cell-free expression, in vitro, transcription-translation, cell-free protein synthesis, synthetic biology, systems biology, Escherichia coli cell extract, biological circuits, biomolecular breadboard FtsZ Polymerization Assays: Simple Protocols and Considerations Authors: Ewa Król, Dirk-Jan Scheffers. Institutions: University of Groningen. During bacterial cell division, the essential protein FtsZ assembles in the middle of the cell to form the so-called Z-ring. FtsZ polymerizes into long filaments in the presence of GTP in vitro, and polymerization is regulated by several accessory proteins. FtsZ polymerization has been extensively studied in vitro using basic methods including light scattering, sedimentation, GTP hydrolysis assays and electron microscopy. Buffer conditions influence both the polymerization properties of FtsZ, and the ability of FtsZ to interact with regulatory proteins. Here, we describe protocols for FtsZ polymerization studies and validate conditions and controls using Escherichia coli and Bacillus subtilis FtsZ as model proteins. A low speed sedimentation assay is introduced that allows the study of the interaction of FtsZ with proteins that bundle or tubulate FtsZ polymers. An improved GTPase assay protocol is described that allows testing of GTP hydrolysis over time using various conditions in a 96-well plate setup, with standardized incubation times that abolish variation in color development in the phosphate detection reaction. The preparation of samples for light scattering studies and electron microscopy is described. Several buffers are used to establish suitable buffer pH and salt concentration for FtsZ polymerization studies. A high concentration of KCl is the best for most of the experiments. Our methods provide a starting point for the in vitro characterization of FtsZ, not only from E. 
coli and B. subtilis, but also from any other bacterium. As such, the methods can be used for studies of the interaction of FtsZ with regulatory proteins or for testing antibacterial drugs that may affect FtsZ polymerization. Basic Protocols, Issue 81, FtsZ, protein polymerization, cell division, GTPase, sedimentation assay, light scattering A New Approach for the Comparative Analysis of Multiprotein Complexes Based on 15N Metabolic Labeling and Quantitative Mass Spectrometry Authors: Kerstin Trompelt, Janina Steinbeck, Mia Terashima, Michael Hippler. Institutions: University of Münster, Carnegie Institution for Science. The introduced protocol provides a tool for the analysis of multiprotein complexes in the thylakoid membrane by revealing insights into complex composition under different conditions. In this protocol the approach is demonstrated by comparing the composition of the protein complex responsible for cyclic electron flow (CEF) in Chlamydomonas reinhardtii, isolated from genetically different strains. The procedure comprises the isolation of thylakoid membranes, followed by their separation into multiprotein complexes by sucrose density gradient centrifugation, SDS-PAGE, immunodetection, and comparative, quantitative mass spectrometry (MS) based on differential metabolic labeling (14N/15N) of the analyzed strains. Detergent-solubilized thylakoid membranes are loaded on sucrose density gradients at equal chlorophyll concentration. After ultracentrifugation, the gradients are separated into fractions, which are analyzed by mass spectrometry on an equal-volume basis. This approach allows investigation of the composition within the gradient fractions and, moreover, analysis of the migration behavior of different proteins, especially focusing on ANR1, CAS, and PGRL1.
Furthermore, the method is validated by confirming the results with immunoblotting and by reproducing findings from previous studies (the identification and PSI-dependent migration of proteins previously described as part of the CEF supercomplex, such as PGRL1, FNR, and cyt f). Notably, this approach can be adopted to address a broad range of questions, e.g. comparative analyses of multiprotein complex composition isolated under distinct environmental conditions. Microbiology, Issue 85, Sucrose density gradients, Chlamydomonas, multiprotein complexes, 15N metabolic labeling, thylakoids Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study Authors: Johannes Felix Buyel, Rainer Fischer. Institutions: RWTH Aachen University, Fraunhofer Gesellschaft. Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations.
The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli Authors: Natalie J. Saez, Hervé Nozach, Marilyne Blemont, Renaud Vincentelli. Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France. Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging since many proteins are expressed in an insoluble form. When working with difficult or multiple targets it is therefore recommended to use high throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression. To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. 
The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment. Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project (www.venomics.eu), our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein it is likely that soluble disulfide-rich proteins will be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays. Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides The Portable Chemical Sterilizer (PCS), D-FENS, and D-FEND ALL: Novel Chlorine Dioxide Decontamination Technologies for the Military Authors: Christopher J. Doona, Florence E. Feeherry, Peter Setlow, Alexander J. Malkin, Terrence J. Leighton. 
Institutions: United States Army-Natick Soldier RD&E Center, Warfighter Directorate, University of Connecticut Health Center, Lawrence Livermore National Laboratory, Children's Hospital Oakland Research Institute. There is a stated Army need for a field-portable, non-steam sterilizer technology that can be used by Forward Surgical Teams, Dental Companies, Veterinary Service Support Detachments, Combat Support Hospitals, and Area Medical Laboratories to sterilize surgical instruments and to sterilize pathological specimens prior to disposal in operating rooms, emergency treatment areas, and intensive care units. The following ensemble of novel, ‘clean and green’ chlorine dioxide technologies is versatile and flexible enough to meet a number of critical military needs for decontamination6,15. Specifically, the Portable Chemical Sterilizer (PCS) was invented to meet urgent battlefield needs and close critical capability gaps for energy independence, lightweight portability, rapid mobility, and rugged durability in high-intensity forward deployments3. As a revolutionary breakthrough in surgical sterilization technology, the PCS is a Modern Field Autoclave that relies on on-site, point-of-use, at-will generation of chlorine dioxide instead of steam. Two (2) PCS units sterilize 4 surgical trays in 1 hr, the equivalent throughput of one large steam autoclave (nicknamed “Bertha” in deployments because of its cumbersome size, bulky dimensions, and weight). However, the PCS operates using 100% less electricity (0 vs. 9 kW) and 98% less water (10 vs. 640 oz.), reduces weight by 95% (20 vs. 450 lbs, a 4-man lift) and cube by 96% (2.1 vs. 60.2 ft3), and virtually eliminates the challenges that steam autoclaves pose in forward deployments: repairs and maintenance, lifting and transport, and reliable electrical power.
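The percentage reductions quoted above follow directly from the paired PCS-versus-autoclave figures in the text; a quick arithmetic check (truncated to whole percent, as the quoted figures appear to be):

```python
# Verify the quoted resource reductions from the paired PCS vs. steam
# autoclave figures given in the text.
def reduction(pcs, autoclave):
    """Percent reduction achieved by the PCS relative to the steam autoclave."""
    return 100.0 * (1.0 - pcs / autoclave)

print(int(reduction(10, 640)))    # water, oz:   98
print(int(reduction(20, 450)))    # weight, lb:  95
print(int(reduction(2.1, 60.2)))  # cube, ft3:   96
```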
Bioengineering, Issue 88, chlorine dioxide, novel technologies, D-FENS, PCS, and D-FEND ALL, sterilization, decontamination, fresh produce safety The Logic, Experimental Steps, and Potential of Heterologous Natural Product Biosynthesis Featuring the Complex Antibiotic Erythromycin A Produced Through E. coli Authors: Ming Jiang, Haoran Zhang, Blaine A. Pfeifer. Institutions: State University of New York at Buffalo, Massachusetts Institute of Technology. The heterologous production of complex natural products is an approach designed to address current limitations and future possibilities. It is particularly useful for those compounds that possess therapeutic value but cannot be sufficiently produced or would benefit from an improved form of production. The experimental procedures involved can be subdivided into three components: 1) genetic transfer; 2) heterologous reconstitution; and 3) product analysis. Each experimental component is under continual optimization to meet the challenges and anticipate the opportunities associated with this emerging approach. Heterologous biosynthesis begins with the identification of a genetic sequence responsible for a valuable natural product. Transferring this sequence to a heterologous host is complicated by the biosynthetic pathway complexity responsible for product formation. The antibiotic erythromycin A is a good example. Twenty genes (totaling >50 kb) are required for eventual biosynthesis. In addition, three of these genes encode megasynthases, multi-domain enzymes each ~300 kDa in size. This genetic material must be designed and transferred to E. coli for reconstituted biosynthesis. The use of PCR isolation, operon construction, multi-cistronic plasmids, and electro-transformation will be described in transferring the erythromycin A genetic cluster to E. coli. Once transferred, the E. coli cell must support eventual biosynthesis. This process is also challenging given the substantial differences between E.
coli and most original hosts responsible for complex natural product formation. The cell must provide necessary substrates to support biosynthesis and coordinately express the transferred genetic cluster to produce active enzymes. In the case of erythromycin A, the E. coli cell had to be engineered to provide the two precursors (propionyl-CoA and (2S)-methylmalonyl-CoA) required for biosynthesis. In addition, gene sequence modifications, plasmid copy number, chaperonin co-expression, post-translational enzymatic modification, and process temperature were also required to allow final erythromycin A formation. Finally, successful production must be assessed. For the erythromycin A case, we will present two methods. The first is liquid chromatography-mass spectrometry (LC-MS) to confirm and quantify production. The bioactivity of erythromycin A will also be confirmed through use of a bioassay in which the antibiotic activity is tested against Bacillus subtilis. The assessment assays establish erythromycin A biosynthesis from E. coli and set the stage for future engineering efforts to improve or diversify production and for the production of new complex natural compounds using this approach. Biomedical Engineering, Issue 71, Chemical Engineering, Bioengineering, Molecular Biology, Cellular Biology, Microbiology, Basic Protocols, Biochemistry, Biotechnology, Heterologous biosynthesis, natural products, antibiotics, erythromycin A, metabolic engineering, E. coli Metabolic Profile Analysis of Zebrafish Embryos Authors: Yann Gibert, Sean L. McGee, Alister C. Ward. Institutions: School of Medicine, Deakin University. A growing goal in the field of metabolism is to determine the impact of genetics on different aspects of mitochondrial function. Understanding these relationships will help to understand the underlying etiology for a range of diseases linked with mitochondrial dysfunction, such as diabetes and obesity. 
Recent advances in instrumentation have enabled the monitoring of distinct parameters of mitochondrial function in cell lines or tissue explants. Here we present a method for rapid and sensitive analysis of mitochondrial function parameters in vivo during zebrafish embryonic development using the Seahorse Bioscience XF24 extracellular flux analyser. This protocol utilizes the Islet Capture microplates, where a single embryo is placed in each well, allowing measurement of bioenergetics, including: (i) basal respiration; (ii) basal mitochondrial respiration; (iii) mitochondrial respiration due to ATP turnover; (iv) mitochondrial uncoupled respiration or proton leak; and (v) maximum respiration. Using this approach, embryonic zebrafish respiration parameters can be compared between wild type and genetically altered embryos (mutant, gene over-expression or gene knockdown) or those manipulated pharmacologically. It is anticipated that dissemination of this protocol will provide researchers with new tools to analyse the genetic basis of metabolic disorders in vivo in this relevant vertebrate animal model. Developmental Biology, Issue 71, Genetics, Biochemistry, Cellular Biology, Molecular Biology, Physiology, Embryology, Metabolism, Metabolomics, metabolic profile, respiration, mitochondria, ATP, development, Oil Red O staining, zebrafish, Danio rerio, animal model Live Cell Imaging of Bacillus subtilis and Streptococcus pneumoniae using Automated Time-lapse Microscopy Authors: Imke G. de Jong, Katrin Beilharz, Oscar P. Kuipers, Jan-Willem Veening. Over the last few years, scientists have become increasingly aware that average data obtained from population-based microbial experiments are not representative of the behavior, status or phenotype of single cells. Owing to this new insight, the number of single-cell studies rises continuously (for recent reviews see 1,2,3).
However, many of the single-cell techniques applied do not allow monitoring of the development and behavior of one specific cell over time (e.g. flow cytometry or standard microscopy). Here, we provide a detailed description of a microscopy method used in several recent studies 4, 5, 6, 7, which allows following and recording (the fluorescence of) individual bacterial cells of Bacillus subtilis and Streptococcus pneumoniae through growth and division for many generations. The resulting movies can be used to construct phylogenetic lineage trees by tracing back the history of a single cell within a population that originated from one common ancestor. This time-lapse fluorescence microscopy method can be used not only to investigate growth, division and differentiation of individual cells, but also to analyze the effect of cell history and ancestry on specific cellular behavior. Furthermore, time-lapse microscopy is ideally suited to examining gene expression dynamics and protein localization during the bacterial cell cycle. The method explains how to prepare the bacterial cells and construct the microscope slide to enable the outgrowth of single cells into a microcolony. In short, single cells are spotted on a semi-solid surface consisting of growth medium supplemented with agarose, on which they grow and divide under a fluorescence microscope within a temperature-controlled environmental chamber. Images are captured at specific intervals and are later analyzed using the open source software ImageJ. Immunology, Issue 53, time-lapse fluorescence microscopy, single cell analysis, cell history, cell growth, development, promoter activity, protein localization, GFP, Bacillus subtilis, Streptococcus pneumoniae Direct Detection of the Acetate-forming Activity of the Enzyme Acetate Kinase Authors: Matthew L. Fowler, Cheryl J. Ingram-Smith, Kerry S. Smith. Institutions: Clemson University.
Acetate kinase, a member of the acetate and sugar kinase-Hsp70-actin (ASKHA) enzyme superfamily1-5, is responsible for the reversible phosphorylation of acetate to acetyl phosphate utilizing ATP as a substrate. Acetate kinases are ubiquitous in the Bacteria, found in one genus of the Archaea, and are also present in microbes of the Eukarya6. The best-characterized acetate kinase is that from the methane-producing archaeon Methanosarcina thermophila7-14. An acetate kinase which can utilize only PPi, but not ATP, in the acetyl phosphate-forming direction has been isolated from Entamoeba histolytica, the causative agent of amoebic dysentery, and has thus far only been found in this genus15,16. In the direction of acetyl phosphate formation, acetate kinase activity is typically measured using the hydroxamate assay, first described by Lipmann17-20; a coupled assay in which conversion of ATP to ADP is coupled to oxidation of NADH to NAD+ by the enzymes pyruvate kinase and lactate dehydrogenase21,22; or an assay measuring release of inorganic phosphate after reaction of the acetyl phosphate product with hydroxylamine23. Activity in the opposite, acetate-forming direction is measured by coupling ATP formation from ADP to the reduction of NADP+ to NADPH by the enzymes hexokinase and glucose 6-phosphate dehydrogenase24. Here we describe a method for the detection of acetate kinase activity in the direction of acetate formation that does not require coupling enzymes, but is instead based on direct determination of acetyl phosphate consumption. After the enzymatic reaction, the remaining acetyl phosphate is converted to a ferric hydroxamate complex that can be measured spectrophotometrically, as in the hydroxamate assay. Thus, unlike the standard coupled assay for this direction, which depends on the production of ATP from ADP, this direct assay can be used for acetate kinases that produce ATP or PPi.
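Quantifying remaining acetyl phosphate spectrophotometrically implies a standard curve relating absorbance of the ferric hydroxamate complex to known acetyl phosphate concentrations. A minimal ordinary-least-squares sketch follows; the concentrations, absorbance readings, and implied wavelength are hypothetical illustrations, not values from the protocol.

```python
# Least-squares standard curve for a colorimetric assay: absorbance of the
# ferric hydroxamate complex vs. known acetyl phosphate concentrations.
# All numbers below are hypothetical, for illustration only.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

standards_mM = [0.0, 1.0, 2.0, 4.0, 8.0]
absorbance   = [0.02, 0.13, 0.24, 0.46, 0.90]   # hypothetical readings

slope, intercept = fit_line(standards_mM, absorbance)

def acetyl_phosphate_mM(a):
    """Interpolate remaining acetyl phosphate from a sample absorbance."""
    return (a - intercept) / slope

print(round(acetyl_phosphate_mM(0.35), 2))  # 3.0 (mM) for these made-up data
```

Enzyme activity would then follow from the difference between the zero-time and post-reaction acetyl phosphate values divided by the incubation time.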
Molecular Biology, Issue 58, Acetate kinase, acetate, acetyl phosphate, pyrophosphate, PPi, ATP Metabolic Pathway Confirmation and Discovery Through 13C-labeling of Proteinogenic Amino Acids Authors: Le You, Lawrence Page, Xueyang Feng, Bert Berla, Himadri B. Pakrasi, Yinjie J. Tang. Institutions: Washington University. Microbes have complex metabolic pathways that can be investigated using biochemistry and functional genomics methods. One important technique for examining central cell metabolism and discovering new enzymes is 13C-assisted metabolism analysis 1. This technique is based on isotopic labeling, whereby microbes are fed 13C-labeled substrates. By tracing the atom transition paths between metabolites in the biochemical network, we can determine functional pathways and discover new enzymes. As a complementary method to transcriptomics and proteomics, approaches for isotopomer-assisted analysis of metabolic pathways comprise three major steps 2. First, we grow cells with 13C-labeled substrates. In this step, the composition of the medium and the selection of labeled substrates are two key factors. To avoid measurement noise from non-labeled carbon in nutrient supplements, a minimal medium with a sole carbon source is required. Further, the choice of a labeled substrate is based on how effectively it will elucidate the pathway being analyzed. Because novel enzymes often involve different reaction stereochemistry or intermediate products, singly labeled carbon substrates are, in general, more informative for detection of novel pathways than uniformly labeled ones3, 4. Second, we analyze amino acid labeling patterns using GC-MS. Amino acids are abundant in protein and thus can be obtained from biomass hydrolysis. Amino acids can be derivatized by N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide (TBDMS) before GC separation.
TBDMS-derivatized amino acids can be fragmented by MS, resulting in different arrays of fragments. Based on the mass-to-charge (m/z) ratio of fragmented and unfragmented amino acids, we can deduce the possible labeling patterns of the central metabolites that are precursors of the amino acids. Third, we trace 13C transitions in the proposed pathways and, based on the isotopomer data, confirm whether these pathways are active 2. Measurement of amino acids provides isotopic labeling information about eight crucial precursor metabolites in the central metabolism. These key metabolic nodes can reflect the functions of associated central pathways. 13C-assisted metabolism analysis via proteinogenic amino acids can be widely used for functional characterization of poorly-characterized microbial metabolism1. In this protocol, we will use Cyanothece 51142 as the model strain to demonstrate the use of labeled carbon substrates for discovering new enzymatic functions.

Molecular Biology, Issue 59, GC-MS, novel pathway, metabolism, labeling, phototrophic microorganism

Sample Preparation of Mycobacterium tuberculosis Extracts for Nuclear Magnetic Resonance Metabolomic Studies Authors: Denise K. Zinniel, Robert J. Fenton, Steven Halouska, Robert Powers, Raul G. Barletta. Institutions: University of Nebraska-Lincoln. Mycobacterium tuberculosis is a major cause of mortality in human beings on a global scale. The emergence of both multi-drug-resistant (MDR) and extensively drug-resistant (XDR) strains threatens to derail current disease control efforts. Thus, there is an urgent need to develop drugs and vaccines that are more effective than those currently available. The genome of M. tuberculosis has been known for more than 10 years, yet there are important gaps in our knowledge of gene function and essentiality.
Many studies have since used gene expression analysis at both the transcriptomic and proteomic levels to determine the effects of drugs, oxidants, and growth conditions on the global patterns of gene expression. Ultimately, the final response to these changes is reflected in the metabolic composition of the bacterium, including a few thousand small molecular weight chemicals. Comparing the metabolic profiles of wild-type and mutant strains, either untreated or treated with a particular drug, can effectively allow target identification and may lead to the development of novel inhibitors with anti-tubercular activity. Likewise, the effects of two or more conditions on the metabolome can also be assessed. Nuclear magnetic resonance (NMR) is a powerful technology that is used to identify and quantify metabolic intermediates. In this protocol, procedures for the preparation of M. tuberculosis cell extracts for NMR metabolomic analysis are described. Cell cultures are grown under appropriate conditions within the required Biosafety Level 3 containment,1 harvested, and subjected to mechanical lysis while maintaining cold temperatures to maximize preservation of metabolites. Cell lysates are recovered, filter-sterilized, and stored at ultra-low temperatures. Aliquots from these cell extracts are plated on Middlebrook 7H9 agar for colony-forming units to verify absence of viable cells. If no viable colonies are observed after two months of incubation at 37 °C, samples are removed from the containment facility for downstream processing. Extracts are lyophilized, resuspended in deuterated buffer and injected into the NMR instrument, capturing spectroscopic data that is then subjected to statistical analysis. The procedures described can be applied for both one-dimensional (1D) 1H NMR and two-dimensional (2D) 1H-13C NMR analyses.
This methodology provides more reliable identification of small molecular weight metabolites and more sensitive quantitative analyses of cell extract metabolic compositions than chromatographic methods. Variations of the procedure described following the cell lysis step can also be adapted for parallel proteomic analysis.

Infection, Issue 67, Mycobacterium tuberculosis, NMR, Metabolomics, homogenizer, lysis, cell extracts, sample preparation

Single-cell Analysis of Bacillus subtilis Biofilms Using Fluorescence Microscopy and Flow Cytometry Authors: Juan C. Garcia-Betancur, Ana Yepes, Johannes Schneider, Daniel Lopez. Institutions: University of Würzburg. Biofilm formation is a general attribute of almost all bacteria 1-6. When bacteria form biofilms, cells are encased in an extracellular matrix that is mostly constituted of proteins and exopolysaccharides, among other factors 7-10. The microbial community encased within the biofilm often shows the differentiation of distinct subpopulations of specialized cells 11-17. These subpopulations coexist and often show spatial and temporal organization within the biofilm 18-21. Biofilm formation in the model organism Bacillus subtilis requires the differentiation of distinct subpopulations of specialized cells. Among them, the subpopulation of matrix producers, responsible for producing and secreting the extracellular matrix of the biofilm, is essential for biofilm formation 11,19. Hence, differentiation of matrix producers is a hallmark of biofilm formation in B. subtilis. We have used fluorescent reporters to visualize and quantify the subpopulation of matrix producers in biofilms of B. subtilis 15,19,22-24. Concretely, we have observed that the subpopulation of matrix producers differentiates in response to the presence of the self-produced extracellular signal surfactin 25. Interestingly, surfactin is produced by a subpopulation of specialized cells different from the subpopulation of matrix producers 15.
We have detailed in this report the technical approach necessary to visualize and quantify the subpopulations of matrix producers and surfactin producers within the biofilms of B. subtilis. To do this, fluorescent reporters of genes required for matrix production and surfactin production are inserted into the chromosome of B. subtilis. Reporters are expressed only in a subpopulation of specialized cells. Then, the subpopulations can be monitored using fluorescence microscopy and flow cytometry (see Fig. 1). The fact that different subpopulations of specialized cells coexist within multicellular communities of bacteria gives us a different perspective on the regulation of gene expression in prokaryotes. This protocol addresses this phenomenon experimentally, and it can be easily adapted to any other working model to elucidate the molecular mechanisms underlying phenotypic heterogeneity within a microbial community.

Immunology, Issue 60, Bacillus subtilis, biofilm formation, gene expression, cell differentiation, single-cell analysis

Monitoring the Reductive and Oxidative Half-Reactions of a Flavin-Dependent Monooxygenase using Stopped-Flow Spectrophotometry Authors: Elvira Romero, Reeder Robinson, Pablo Sobrado. Institutions: Virginia Polytechnic Institute and State University. Aspergillus fumigatus siderophore A (SidA) is an FAD-containing monooxygenase that catalyzes the hydroxylation of ornithine in the biosynthesis of hydroxamate siderophores that are essential for virulence (e.g. ferricrocin or N',N",N'''-triacetylfusarinine C)1. The reaction catalyzed by SidA can be divided into reductive and oxidative half-reactions (Scheme 1). In the reductive half-reaction, the oxidized FAD bound to Af SidA is reduced by NADPH2,3. In the oxidative half-reaction, the reduced cofactor reacts with molecular oxygen to form a C4a-hydroperoxyflavin intermediate, which transfers an oxygen atom to ornithine.
Here, we describe a procedure to measure the rates and detect the different spectral forms of SidA using a stopped-flow instrument installed in an anaerobic glove box. In the stopped-flow instrument, small volumes of reactants are rapidly mixed, and after the flow is stopped by the stop syringe (Figure 1), the spectral changes of the solution placed in the observation cell are recorded over time. In the first part of the experiment, we show how we can use the stopped-flow instrument in single-mixing mode, where the anaerobic reduction of the flavin in Af SidA by NADPH is directly measured. We then use double-mixing settings where Af SidA is first anaerobically reduced by NADPH for a designated period of time in an aging loop, and then reacted with molecular oxygen in the observation cell (Figure 1). In order to perform this experiment, anaerobic buffers are necessary because when only the reductive half-reaction is monitored, any oxygen in the solutions will react with the reduced flavin cofactor and form a C4a-hydroperoxyflavin intermediate that will ultimately decay back into the oxidized flavin. This would not allow the user to accurately measure rates of reduction since there would be complete turnover of the enzyme. When the oxidative half-reaction is being studied, the enzyme must be reduced in the absence of oxygen so that just the steps between reduction and oxidation are observed. One of the buffers used in this experiment is oxygen-saturated so that we can study the oxidative half-reaction at higher concentrations of oxygen. These are often the procedures carried out when studying either the reductive or oxidative half-reactions with flavin-containing monooxygenases. The time scale of the pre-steady-state experiments performed with the stopped-flow is milliseconds to seconds, which allows the determination of intrinsic rate constants and the detection and identification of intermediates in the reaction4.
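Traces recorded this way are typically fit to exponentials to extract observed rate constants: for a single-exponential process A(t) = A_inf + dA·exp(−k_obs·t), k_obs can be recovered from a least-squares line through ln(A − A_inf) versus t. A minimal sketch on synthetic data (the endpoint, amplitude, and rate values are illustrative, not from the SidA experiments):

```python
import math

def kobs_from_trace(times, absorbances, a_inf):
    """Estimate k_obs for A(t) = a_inf + dA*exp(-k*t) by fitting a
    least-squares line through ln(A - a_inf) versus t."""
    xs, ys = [], []
    for t, a in zip(times, absorbances):
        if a > a_inf:                      # keep points with a defined log
            xs.append(t)
            ys.append(math.log(a - a_inf))
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope                          # k_obs = -slope of the log plot

# Synthetic flavin-reduction trace: A_inf = 0.05, dA = 0.40, k_obs = 25 s^-1
times = [i * 0.002 for i in range(100)]            # 0 to ~0.2 s
trace = [0.05 + 0.40 * math.exp(-25.0 * t) for t in times]
print(round(kobs_from_trace(times, trace, 0.05), 3))  # -> 25.0
```

On real, noisy traces a nonlinear fit of the full exponential is preferred, but the log-linear form above makes the relationship between the trace and k_obs explicit.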
The procedures described here can be applied to other flavin-dependent monooxygenases.5,6

Bioengineering, Issue 61, Stopped-flow, kinetic mechanism, SidA, C4a-hydroperoxyflavin, monooxygenase, Aspergillus fumigatus

Bioluminescence Imaging of NADPH Oxidase Activity in Different Animal Models Authors: Wei Han, Hui Li, Brahm H. Segal, Timothy S. Blackwell. Institutions: Vanderbilt University School of Medicine, Roswell Park Cancer Institute, University at Buffalo School of Medicine. NADPH oxidase is a critical enzyme that mediates antibacterial and antifungal host defense. In addition to its role in antimicrobial host defense, NADPH oxidase has critical signaling functions that modulate the inflammatory response 1. Thus, the development of a method to measure in "real time" the kinetics of NADPH oxidase-derived ROS generation is expected to be a valuable research tool to understand mechanisms relevant to host defense, inflammation, and injury. Chronic granulomatous disease (CGD) is an inherited disorder of the NADPH oxidase characterized by severe infections and excessive inflammation. Activation of the phagocyte NADPH oxidase requires translocation of its cytosolic subunits (p47phox, p67phox, and p40phox) and Rac to a membrane-bound flavocytochrome (composed of a gp91phox and p22phox heterodimer). Loss-of-function mutations in any of these NADPH oxidase components result in CGD. Similar to patients with CGD, gp91phox-deficient mice and p47phox-deficient mice have defective phagocyte NADPH oxidase activity and impaired host defense 2, 13. In addition to phagocytes, which contain the NADPH oxidase components described above, a variety of other cell types express different isoforms of NADPH oxidase. Here, we describe a method to quantify ROS production in living mice and to delineate the contribution of NADPH oxidase to ROS generation in models of inflammation and injury.
This method is based on ROS reacting with L-012 (an analogue of luminol) to emit luminescence that is recorded by a charge-coupled device (CCD). In the original description of the L-012 probe, L-012-dependent chemiluminescence was completely abolished by superoxide dismutase, indicating that the main ROS detected in this reaction was superoxide anion 14. Subsequent studies have shown that L-012 can detect other free radicals, including reactive nitrogen species 15, 16. Kielland et al. 16 showed that topical application of phorbol myristate acetate, a potent activator of NADPH oxidase, led to NADPH oxidase-dependent ROS generation that could be detected in mice using the luminescent probe L-012. In this model, they showed that L-012-dependent luminescence was abolished in p47phox-deficient mice. We compared ROS generation in wildtype mice and NADPH oxidase-deficient p47phox-/- mice 2 in the following three models: 1) intratracheal administration of zymosan, a pro-inflammatory fungal cell wall-derived product that can activate NADPH oxidase; 2) cecal ligation and puncture (CLP), a model of intra-abdominal sepsis with secondary acute lung inflammation and injury; and 3) oral carbon tetrachloride (CCl4), a model of ROS-dependent hepatic injury. These models were specifically selected to evaluate NADPH oxidase-dependent ROS generation in the context of non-infectious inflammation, polymicrobial sepsis, and toxin-induced organ injury, respectively. Comparing bioluminescence in wildtype mice to p47phox-/- mice enables us to delineate the specific contribution of ROS generated by p47phox-containing NADPH oxidase to the bioluminescent signal in these models. Bioluminescence imaging results that demonstrated increased ROS levels in wildtype mice compared to p47phox-/- mice indicated that NADPH oxidase is the major source of ROS generation in response to inflammatory stimuli. 
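Attributing signal to NADPH oxidase in such experiments comes down to comparing background-subtracted photon flux between wildtype and p47phox-/- animals. A minimal sketch of that comparison (all region-of-interest counts and the background level are invented for illustration):

```python
# Background-subtracted total flux in a region of interest (ROI), and
# the wildtype / p47phox-/- ratio used to attribute luminescent signal
# to NADPH oxidase. All numbers below are illustrative, not measured data.

def roi_flux(pixel_counts, background_per_pixel):
    """Sum counts over an ROI after subtracting a uniform background,
    clamping negative pixels to zero."""
    return sum(max(c - background_per_pixel, 0) for c in pixel_counts)

wt_roi = [120, 140, 135, 128, 150]   # hypothetical wildtype counts
ko_roi = [32, 30, 35, 28, 31]        # hypothetical p47phox-/- counts
background = 20

wt = roi_flux(wt_roi, background)    # 573
ko = roi_flux(ko_roi, background)    # 56
print(round(wt / ko, 1))             # fold difference attributable to NADPH oxidase
```

A ratio well above one, as in this toy example, is the pattern the authors interpret as NADPH oxidase being the dominant ROS source.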
This method provides a minimally invasive approach for "real-time" monitoring of ROS generation during inflammation in vivo.

Immunology, Issue 68, Molecular Biology, NADPH oxidase, reactive oxygen species, bioluminescence imaging

A Toolkit to Enable Hydrocarbon Conversion in Aqueous Environments Authors: Eva K. Brinkman, Kira Schipper, Nadine Bongaerts, Mathias J. Voges, Alessandro Abate, S. Aljoscha Wahl. Institutions: Delft University of Technology. This work puts forward a toolkit that enables the conversion of alkanes by Escherichia coli and presents a proof of principle of its applicability. The toolkit consists of multiple standard interchangeable parts (BioBricks)9 addressing the conversion of alkanes, regulation of gene expression and survival in toxic hydrocarbon-rich environments. A three-step pathway for alkane degradation was implemented in E. coli to enable the conversion of medium- and long-chain alkanes to their respective alkanols, alkanals and ultimately alkanoic acids. The latter were metabolized via the native β-oxidation pathway. To facilitate the oxidation of medium-chain alkanes (C5-C13) and cycloalkanes (C5-C8), four genes (alkB2, rubA3, rubA4 and rubB) of the alkane hydroxylase system from Gordonia sp. TF68,21 were transformed into E. coli. For the conversion of long-chain alkanes (C15-C36), the ladA gene from Geobacillus thermodenitrificans was implemented. For the required further steps of the degradation process, ADH and ALDH (originating from G. thermodenitrificans) were introduced10,11. The activity was measured by resting cell assays. For each oxidative step, enzyme activity was observed. To optimize the process efficiency, the expression was only induced under low-glucose conditions: a substrate-regulated promoter, pCaiF, was used. pCaiF is present in E. coli K12 and regulates the expression of the genes involved in the degradation of non-glucose carbon sources.
The last part of the toolkit - targeting survival - was implemented using solvent-tolerance genes, PhPFDα and β, both from Pyrococcus horikoshii OT3. Organic solvents can induce cell stress and decreased survivability by negatively affecting protein folding. As chaperones, PhPFDα and β improve the protein folding process, e.g. in the presence of alkanes. The expression of these genes led to improved hydrocarbon tolerance, shown by an increased growth rate (up to 50%) in the presence of 10% n-hexane in the culture medium. Summarizing, the results indicate that the toolkit enables E. coli to convert and tolerate hydrocarbons in aqueous environments. As such, it represents an initial step towards a sustainable solution for oil remediation using a synthetic biology approach.

Bioengineering, Issue 68, Microbiology, Biochemistry, Chemistry, Chemical Engineering, Oil remediation, alkane metabolism, alkane hydroxylase system, resting cell assay, prefoldin, Escherichia coli, synthetic biology, homologous interaction mapping, mathematical model, BioBrick, iGEM

Super-resolution Imaging of the Cytokinetic Z Ring in Live Bacteria Using Fast 3D-Structured Illumination Microscopy (f3D-SIM) Authors: Lynne Turnbull, Michael P. Strauss, Andrew T. F. Liew, Leigh G. Monahan, Cynthia B. Whitchurch, Elizabeth J. Harry. Institutions: University of Technology, Sydney. Imaging of biological samples using fluorescence microscopy has advanced substantially with new technologies that overcome the resolution barrier of the diffraction of light, allowing super-resolution imaging of live samples. There are currently three main types of super-resolution techniques – stimulated emission depletion (STED), single-molecule localization microscopy (including techniques such as PALM, STORM, and GSDIM), and structured illumination microscopy (SIM).
While STED and single-molecule localization techniques show the largest increases in resolution, they have been slower to offer increased speeds of image acquisition. Three-dimensional SIM (3D-SIM) is a wide-field fluorescence microscopy technique that offers a number of advantages over both single-molecule localization and STED. Resolution is improved, with typical lateral and axial resolutions of 110 and 280 nm, respectively, and a depth of sampling of up to 30 µm from the coverslip, allowing for imaging of whole cells. Recent advancements in the technology (fast 3D-SIM) that increase the capture rate of raw images allow for fast capture of biological processes occurring in seconds, while significantly reducing photo-toxicity and photobleaching. Here we describe the use of one such method to image bacterial cells harboring the fluorescently-labelled cytokinetic FtsZ protein to show how cells are analyzed and the type of unique information that this technique can provide.

Molecular Biology, Issue 91, super-resolution microscopy, fluorescence microscopy, OMX, 3D-SIM, Blaze, cell division, bacteria, Bacillus subtilis, Staphylococcus aureus, FtsZ, Z ring constriction
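The quoted 110 nm lateral resolution can be put in context with the Abbe diffraction limit, d = λ/(2·NA), which SIM improves by roughly a factor of two. A quick check under assumed values (λ = 510 nm for GFP-like emission and NA = 1.4 are my assumptions, not numbers from the text):

```python
# Abbe lateral diffraction limit and the roughly 2x improvement of SIM.
# Wavelength and numerical aperture below are assumed illustrative values.

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Conventional wide-field lateral resolution limit in nm."""
    return wavelength_nm / (2 * numerical_aperture)

d = abbe_limit_nm(510, 1.4)
print(round(d))        # -> 182, conventional wide-field limit in nm
print(round(d / 2))    # -> 91, the ~2x SIM gain, same scale as the quoted 110 nm
```

The factor-of-two gain follows from SIM's use of patterned illumination to shift otherwise-inaccessible high spatial frequencies into the detectable band.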
Unique Solution To Gene Regulation Discovered
Study of “contained, isolated” genes in sea lamprey may indicate how potentially deleterious genes can be controlled
Research on a unique vertebrate called the sea lamprey shows that more than a thousand genes are shed during its early development. These genes are paradoxically lost all throughout the developing embryo except in a specialized compartment called “primordial germ cells” or PGCs. The PGCs can be thought of as embryonic stem cells and are used, ultimately, for making the next generation of lampreys. Based on computational analysis, a significant number of genes that are lost in the embryo have signatures of “pluripotency,” which suggests that they could also have undesirable effects if they were inadvertently turned on in the body. In effect, by undergoing programmed genome rearrangement and gene loss during embryogenesis, the sea lamprey “seals” the genes away in the small germline compartment so they cannot be misexpressed and thereby create untoward problems (such as development of cancer, for example). The study was completed at the Benaroya Research Institute at Virginia Mason (BRI) and recently published as a featured article in Current Biology, along with an outside commentary highlighting its biological importance. The article authors are Jeramiah Smith, PhD, former postdoctoral fellow at BRI and now Assistant Professor of Biology at the University of Kentucky; Chris Amemiya, PhD, Principal Investigator at BRI and Professor of Biology, University of Washington; Evan Eichler, PhD, University of Washington Genome Sciences Professor; and Carl Baker, Research Scientist, University of Washington. The discovery builds on the group's previous work published in the Proceedings of the National Academy of Sciences in 2009. “Our new research confirms that lampreys experience rampant programmed genome rearrangement and losses during early development,” says Dr. Amemiya.
“The genes are restricted to the germline compartment, suggesting a deeper biological strategy for regulating the genome for highly precise, normal functioning. The strategy removes the possibility that the genes will be expressed in deleterious ways. Humans, on the other hand, must contain these genes through other ‘epigenetic’ mechanisms that are not fool-proof.”

There are several implications of this work:
- By understanding how programmed genome rearrangement occurs so pervasively in lampreys, scientists can gain insight into how vertebrate genomes can remain stable and what genetic factors contribute to this stability.
- Studies in distantly related species can provide unique insights into fundamental biological concepts and may be translatable to human health.
- Identifying the molecular and developmental mechanism of how lampreys regulate their genome may have implications for disease treatment.

Sea lampreys are "basal" vertebrates that lack jaws and have unique properties that are of interest to scientists. This includes a completely different genetic toolkit for their adaptive immune system, which was also discovered, in part, by Amemiya's group, as well as remarkable powers of regeneration that allow them to completely recover from a severed spinal cord. High-throughput genomic sequencing, computational analysis and other state-of-the-art scientific advances made this research possible. Grant funding was provided by the National Science Foundation, National Institutes of Health and Howard Hughes Medical Institute. Benaroya Research Institute
A Brief History of Latin Jazz A Look at The Roots, Development, and Pioneers of Afro-Cuban Jazz Mongo Santamaria performs during a concert at Central Park circa 1970 in Manhattan, New York. Walter Iooss Jr / Contributor / Getty Images
by Carlos Quintana In general terms, Latin Jazz is a musical label defined by the combination of Jazz with Latin music rhythms. Brazilian Jazz, a style that emerged from the sounds of Bossa Nova thanks to artists like Antonio Carlos Jobim and Joao Gilberto, fits this general concept. However, this introduction to Latin Jazz history deals with the origins and development of the style that has come to define Latin Jazz as a whole: Afro-Cuban Jazz. Habanera and Early Jazz Although the foundations of Latin Jazz were consolidated during the 1940s and 1950s, there is evidence of the inclusion of Afro-Cuban sounds in early Jazz. In this regard, Jazz pioneer Jelly Roll Morton used the term Latin tinge to refer to the rhythm that characterized some of the Jazz that was played in New Orleans at the beginning of the 20th century. This Latin tinge was a direct reference to the influence that the Cuban Habanera, a genre that was popular in the dance halls of Cuba at the end of the 19th century, had on some of the local Jazz produced in New Orleans. Along those lines, the proximity between New Orleans and Havana also allowed Cuban musicians to borrow elements from early American Jazz. Mario Bauza and Dizzy Gillespie Mario Bauza was a talented trumpeter from Cuba who moved to New York in 1930. He brought with him a solid knowledge of Cuban music and a keen interest in American Jazz. When he arrived in the Big Apple, he joined the big band movement playing with the bands of Chick Webb and Cab Calloway. In 1941, Mario Bauza left Cab Calloway's orchestra to join the band of Machito and the Afro-Cubans.
Acting as the music director of Machito's band, in 1943 Mario Bauza wrote the song "Tanga," considered by many to be the first Latin Jazz track in history. When he was playing for the bands of Chick Webb and Cab Calloway, Mario Bauza had the opportunity to meet a young trumpeter named Dizzy Gillespie. They not only forged a lifelong friendship but also influenced each other's music. Thanks to Mario Bauza, Dizzy Gillespie developed a taste for Afro-Cuban music, which he successfully incorporated into jazz. In fact, it was Mario Bauza who introduced the Cuban percussionist Luciano "Chano" Pozo to Dizzy Gillespie. Together, Dizzy and Chano Pozo wrote some of the most iconic Latin Jazz tracks in history, including the legendary song "Manteca". The Mambo Years and Beyond By the beginning of the 1950s, Mambo had taken the world by storm and Latin Jazz was enjoying new levels of popularity. This new popularity was the result of the music produced by artists like Tito Puente, Cal Tjader, Mongo Santamaria, and Israel 'Cachao' Lopez. During the 1960s, when Mambo was being abandoned in favor of a new musical mix named Salsa, the Latin Jazz movement was influenced by different artists who moved between the emerging genre and Jazz. Some of the biggest names include different artists from New York such as pianist Eddie Palmieri and percussionist Ray Barretto, who later played a major role with the legendary Salsa band Fania All Stars. Up to the 1970s, Latin Jazz was mainly shaped in the US. However, back in 1972 in Cuba, a talented pianist named Chucho Valdes founded a band named Irakere, which added a funky beat to traditional Latin Jazz, forever changing the sound of the genre. For the past decades, Latin Jazz has continued to thrive as a more global phenomenon that has incorporated all kinds of elements from the Latin music world.
Some of today's most famous Latin Jazz artists include well-established artists such as Chucho Valdes, Paquito D'Rivera, Eddie Palmieri, Poncho Sanchez and Arturo Sandoval, and a whole new generation of stars like Danilo Perez and David Sanchez. Latin Jazz is a never ending business.
Quintana, Carlos. "A Brief History of Latin Jazz." ThoughtCo, Apr. 21, 2017.
NewsPath®
Diagnosis of Heparin-Induced Thrombocytopenia
Karen A. Moser, MD
Heparin-induced thrombocytopenia (HIT) is a complication seen in approximately 1%–3% of patients treated with unfractionated heparin therapy. HIT may also be associated with low molecular weight heparin use.1 Clinical symptoms include moderate thrombocytopenia and, paradoxically, arterial or venous thrombosis in up to 30% of affected patients. Postoperative (specifically orthopedic surgical) patients are at higher risk for HIT than medical patients.1,2 HIT is considered a disease of adults, although pediatric cases have been reported. HIT usually develops within five to ten days after the onset of heparin therapy in unexposed patients. Patients with a history of heparin use in the past three months may form antibodies much earlier, often within 24 to 48 hours of re-exposure.3 HIT is an immunologic disorder resulting from antibodies, most commonly IgG isotype, with specificity for a complex of heparin and platelet factor 4 (PF4).2 PF4 is contained within platelet alpha granules and is released upon platelet activation. In a subset of reactive patients, immune complexes consisting of heparin, PF4, and antibody activate platelets and ultimately lead to a hypercoagulable state.4 Prior to laboratory testing, a clinical scoring system can predict a patient’s likelihood of HIT. One validated system is the 4T’s score. Four factors (degree of Thrombocytopenia, Timing of platelet count drop, signs of Thrombosis, and presence of other causes of thrombocytopenia) have associated point values. The points for each factor are summed to give a pretest probability of HIT.5 Many laboratories use commercial enzyme-linked immunosorbent assay (ELISA) kits to detect antiheparin-PF4 antibodies. ELISA testing measures the optical density (OD) of each patient sample as an indication of the amount of antibody present.
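The 4T's summation described above is straightforward to express programmatically. A minimal sketch (each factor scores 0–2 points, and the low/intermediate/high bands of ≤3, 4–5, and 6–8 follow the published score5; the function names here are hypothetical):

```python
# 4T's pretest probability for HIT: each of the four factors contributes
# 0-2 points, and the total (0-8) maps to a probability band.

def four_ts_total(thrombocytopenia, timing, thrombosis, other_causes):
    """Sum the four factor scores; each must be 0, 1, or 2."""
    scores = (thrombocytopenia, timing, thrombosis, other_causes)
    assert all(s in (0, 1, 2) for s in scores)
    return sum(scores)

def pretest_probability(total):
    """Map a 4T's total to the low / intermediate / high band."""
    if total <= 3:
        return "low"
    if total <= 5:
        return "intermediate"
    return "high"

# e.g. marked platelet fall (2), onset day 5-10 (2), new thrombosis (2),
# no alternative cause of thrombocytopenia (2) -> total 8
print(pretest_probability(four_ts_total(2, 2, 2, 2)))  # -> high
```

In practice this score is combined with the ELISA OD value, as the diagnostic algorithms discussed below describe; the code only illustrates the summation step.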
The manufacturers provide OD cutoffs based on a reference group not exposed to heparin (typically 0.4–0.5 OD). ELISA is a very sensitive test (95%–99%), but is less specific (74%–86%).1,6 This may be partly because a subset of patients can form heparin-PF4 antibodies, yet never develop HIT symptoms.7 Recent literature suggests that OD values higher than the manufacturer’s suggested cutoff (OD > 1.0) may be more predictive of HIT.2,8,9,10 Additionally, although many laboratories test for IgG, IgA, and IgM antibodies, several studies have suggested that IgG antibodies may be the most clinically significant.7,11,12 There is no consensus in the literature about the best way to maximize ELISA specificity, and practice varies between laboratories.13 Strategies may include using a clinical scoring system along with laboratory testing, increasing the OD cutoff value, and/or using an IgG-only ELISA kit.8 Published HIT diagnostic algorithms combine a clinical pretest probability score with the ELISA OD value to determine whether confirmatory testing is warranted. For patients with concordantly low or high pretest probability and OD values, additional testing is probably not necessary. If the patient has an intermediate pretest probability and an OD value near the cutoff, functional testing is recommended for definitive diagnosis.3,5,6 Functional tests, such as the serotonin release assay, are more specific than ELISAs, but their use is typically limited to specialized hemostasis laboratories due to their technically demanding nature.3,6 In summary, HIT is an immune-mediated reaction to heparin therapy that results in decreased platelet counts and hypercoagulability and requires clinicopathologic correlation for accurate diagnosis. Shantsila E, Lip GY, Chong BH. Heparin-induced thrombocytopenia: a contemporary clinical approach to diagnosis and management. Chest. 2009; 135(6):1651–1664. Warkentin TE. 
Platelet count monitoring and laboratory testing for heparin-induced thrombocytopenia: Recommendations of the College of American Pathologists. Arch Pathol Lab Med. 2002;126(11):1415–1423. Lefkowitz JB. Heparin induced thrombocytopenia. In: Kottke-Marchant K, ed. An Algorithmic Approach to Hemostasis Testing. Northfield, IL: CAP Press; 2008:287–294. Kelton JG, Warkentin TE. Heparin-induced thrombocytopenia: a historical perspective. Blood. 2008;112(7):2607–2616. Lo GK, Juhl D, Warkentin TE, Sigouin CS, Eichler P, Greinacher A. Evaluation of pretest clinical scores (4T’s) for the diagnosis of heparin-induced thrombocytopenia in two clinical settings. J Thromb Haemost. 2006;4(4):759–765. Otis SA, Zehnder JL. Heparin-induced thrombocytopenia: current status and diagnostic challenges. Am J Hematol. 2010;85(9):700–706. Greinacher A, Juhl D, Strobel U, et al. Heparin induced thrombocytopenia: a prospective study on the incidence, platelet activating capacity, and clinical significance of antiplatelet factor 4/heparin antibodies of the IgG, IgM, and IgA classes. J Thromb Haemost. 2007;5(8):1666–1673. Janatpour KA, Gosselin RC, Dager WE, et al. Usefulness of optical density values from heparin-platelet factor 4 antibody testing and probability scoring models to diagnose heparin-induced thrombocytopenia. Am J Clin Pathol. 2007;127(3):429–433. Warkentin TE, Sheppard JI, Moore JC, Sigouin CS, Kelton JG. Quantitative interpretation of optical density measurements using PF-4 dependent enzyme-immunoassays. J Thromb Haemost. 2008;6(8):1304–1312. Whitlatch NL, Perry SL, Ortel TL. Anti-heparin/platelet factor 4 antibody optical density values and the confirmatory procedure in the diagnosis of heparin-induced thrombocytopenia. Thromb Haemost. 2008;100(4):678–684. Warkentin TE, Sheppard JA, Moore JC, Moore KM, Sigouin CS, Kelton JG. Laboratory testing for the antibodies that cause heparin-induced thrombocytopenia: how much class do we need? J Lab Clin Med. 2005;146(6):341–346.
Morel-Kopp MC, Aboud M, Tan CW, Kulathilake C, Ward C. Heparin-induced thrombocytopenia: evaluation of IgG and IgGAM ELISA assays. Int Jnl Lab Hem. 2011;33(3):245–250. doi:10.1111/j.1751-553X.2010.01276.x. Price EA, Hayward CPM, Moffat KA, et al. Laboratory testing for heparin-induced thrombocytopenia is inconsistent in North America: a survey of North American specialized coagulation laboratories. Thromb Haemost. 2007;98(6):1357–1361. Download this article in Microsoft Word format. Download this article in PDF format. NewsPath® Editor: Kyle L. Eskue, MD This newsletter is produced in cooperation with the College of American Pathologists Public Affairs Committee and the NewsPath Editorial Board and may be reproduced in whole or in part as a service to the medical community. Copyright © 2012 by the College of American Pathologists. Please e-mail any comments to newspath@cap.org. Related Links NewsPath NewsPath Archives
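The published decision algorithms summarized above combine a clinical pretest probability score (such as the 4T's) with the ELISA OD value to decide whether a functional confirmatory test is warranted. The triage logic can be sketched as follows; this is an illustration only, not clinical guidance, and the score bands and cutoffs are assumptions drawn loosely from the cited literature:

```python
def hit_triage(pretest_4ts: int, od_value: float, od_cutoff: float = 0.4) -> str:
    """Illustrative triage combining a 4T's pretest score with an ELISA OD value.

    All thresholds are assumptions for illustration, not clinical guidance.
    Assumed 4T's bands: 0-3 low, 4-5 intermediate, 6-8 high pretest probability.
    """
    low_pretest = pretest_4ts <= 3
    high_pretest = pretest_4ts >= 6

    strongly_negative = od_value < od_cutoff
    strongly_positive = od_value > 1.0  # high OD reported as more predictive of HIT

    # Concordantly low or high: additional testing probably not necessary.
    if low_pretest and strongly_negative:
        return "HIT unlikely; no confirmatory testing needed"
    if high_pretest and strongly_positive:
        return "HIT likely; treat and consider confirming"
    # Discordant or borderline: functional assay for definitive diagnosis.
    return "indeterminate; functional (serotonin release) assay recommended"
```

For example, an intermediate 4T's score of 5 with an OD near the cutoff falls through to the functional-assay branch, mirroring the algorithm described in the text.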
Dig it! Two new shrimp species found in burrows at the bottom of the Gulf of California

Pensoft Publishers

This is a female specimen of the new shrimp species Alpheus margaritae. Credit: Dr. José Salgado-Barragán, CC-BY 4.0

Although the Santa María-La Reforma lagoon complex in the Gulf of California is one of the most important areas for shrimp fishery, little is known about the crustacean species that live in the burrows dug in the bottom. In addition to describing two species new to science, researchers Dr. José Salgado-Barragán of the Universidad Nacional Autónoma de México and Drs. Manuel Ayón-Parente and Pilar Zamora-Tavares, both affiliated with the Universidad de Guadalajara, México, collaborated to build on the knowledge of the small shrimp species living there. The study is published in the open-access journal ZooKeys.

Over the span of about two years, between 2013 and 2015, the scientists conducted a series of surveys of the bottom-dwelling crustaceans in the Bahía Santa María-La Reforma lagoon, located in the southwest Gulf of California. Following a thorough examination of the collected specimens, they recorded five shrimp species of three genera inhabiting burrows dug into mud, sand, or sandy mud. Two of these species turned out to be previously unknown.

One of the new species is named Alpheus margaritae after Dr. Margarita Hermoso-Salazar, a caridean shrimp expert who helped the authors with the identification of the species. This new crustacean lives in the intertidal zone, where it hides in soft mud and in gravel of shells and rocks. So far, it is known exclusively from the coastal lagoon Bahía Santa María-La Reforma, Sinaloa, Mexico. Among its characteristic traits is a creamy-white colouration splashed with sparse olive-green to light-brown patches.

The second new species, Leptalpheus melendezensis, lives in the fine sand of the beach. It is named after Melendez Island, the only locality from which the species has been identified. Unlike the other seven members of its genus (Leptalpheus), its major cheliped lacks adhesive disks.

Materials provided by Pensoft Publishers. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

Reference: José Salgado-Barragán, Manuel Ayón-Parente, Pilar Zamora-Tavares. New records and description of two new species of carideans shrimps from Bahía Santa María-La Reforma lagoon, Gulf of California, Mexico (Crustacea, Caridea, Alpheidae and Processidae). ZooKeys, 2017;671:131. DOI: 10.3897/zookeys.671.9081

Pensoft Publishers. "Dig it! Two new shrimp species found in burrows at the bottom of the Gulf of California." ScienceDaily, 12 May 2017. www.sciencedaily.com/releases/2017/05/170512100422.htm
Art and Crime: New Essays

A new edited volume arrived today: Noah Charney (ed.), Art and Crime: Exploring the Dark Side of the Art World (Praeger, 2009). [WorldCat] [Praeger] [ARCA]

Art crime has received relatively little attention, whether from those who study art or from those who prosecute crimes. Indeed, the general public is not well aware of the various forms of art crime and their impact on society at large, to say nothing of museums, history, and cultural affairs. And yet art crime involves a multi-billion-dollar legitimate industry, with a conservatively estimated $6 billion annual criminal profit. Information about and analysis of art crime is critical to the wide variety of fields involved in the art trade and art preservation, from museums to academia, from auction houses to galleries, from insurance to art law, from policing to security. Since the Second World War, art crime has evolved from a relatively innocuous crime into the third highest-grossing annual criminal trade worldwide, run primarily by organized crime syndicates, which use it to fund their other enterprises, from the drug and arms trades to terrorism. It is no longer merely the art that is at stake.

Through the use of case examples and careful examination, this book presents the first interdisciplinary essay collection on the study of art crime and its effect on all aspects of the art world. Contributors discuss art crime subcategories, including vandalism, iconoclasm, forgery, fraud, peacetime theft, war looting, archaeological looting, smuggling, submarine looting, and ransom. The contributors offer insightful analyses coupled with specific practical suggestions to implement in the future to prevent and address art crime.
This work is of critical importance to anyone involved in the art world, its trade, study, and security.

Essays on antiquities include:
Derek Fincham, "The fundamental importance of archaeological context"
David Gill, "Homecomings: learning from the return of antiquities to Italy"
Toby Bull, "Lack of due diligence and unregulated markets: trade in illicit antiquities and fakes in Hong Kong"
Research sheds light on how wounds heal

Research by a civil engineer from the University of Waterloo is helping shed light on the way wounds heal, and it may someday have implications for understanding how cancer spreads, as well as why certain birth defects occur. Professor Wayne Brodland is developing computational models for studying the mechanical interactions between cells. In this project, he worked with a team of international researchers who found that the way wounds knit together is more complex than previously thought. The results were published this week in the journal Nature Physics.

"When people think of civil engineering, they probably think of bridges and roads, not the human body," said Professor Brodland. "Like a number of my colleagues, I study structures, but ones that happen to be very small, and under certain conditions they cause cells to move. The models we build allow us to replicate these movements and figure out how they are driven."

When you cut yourself, a scar remains, but not so in the cells the team studied. The researchers found that an injury closes by cells crawling to the site and by contraction of a drawstring-like structure that forms along the wound edge. They were surprised to find that the drawstring works fine even when it contains naturally occurring breaks. This knowledge could be the first step on a long road towards making real progress in addressing some major health challenges.

"The work is important because it helps us to understand how cells move. We hope that someday this knowledge will help us to eliminate malformation birth defects, such as spina bifida, and stop cancer cells from spreading," said Professor Brodland.
Source: University of Waterloo
Roasting coffee beans a dark brown produces valued antioxidants: food scientists

Food scientists at the University of British Columbia have been able to pinpoint more of the complex chemistry behind coffee's much-touted antioxidant benefits, tracing valuable compounds to the roasting process. Lead author Yazheng Liu and co-author Prof. David Kitts found that the prevailing antioxidants present in dark roasted coffee brew extracts result from the green beans being browned under high temperatures. Their findings will appear in a forthcoming issue of Food Research International and can be previewed at http://dx.doi.org/10.1016/j.foodres.2010.12.037.

Liu and Kitts analyzed the complex mixture of chemical compounds produced during the bean's browning process, called the "Maillard reaction." The term refers to the work of French chemist Louis-Camille Maillard, who in the early 1900s looked at how heat affects the carbohydrates, sugars and proteins in food, such as when grilling steaks or toasting bread. Antioxidants aid in removing free radicals, the end products of metabolism, which have been linked to the aging process.

"Previous studies suggested that antioxidants in coffee could be traced to caffeine or the chlorogenic acid found in green coffee beans, but our results clearly show that the Maillard reaction is the main source of antioxidants," says Liu, an MSc student in the Faculty of Land and Food Systems (LFS). "We found, for example, that coffee beans lose 90 per cent of their chlorogenic acid during the roasting process," says Kitts, LFS food science professor and director of the Food, Nutrition and Health program.

The UBC study sheds light on an area of research that has yielded largely inconsistent findings. While some scientists report increased antioxidant activity in coffee made from dark roasted beans, others have found a decrease. Yet other theories insist that medium roast coffees yield the highest level of antioxidant activity.
"We have yet to fully decipher all the complex compounds in roasted coffee beans. We only know the tip of the iceberg," says Kitts, who has been studying Maillard reaction chemicals over the past 25 years.

Provided by: University of British Columbia
Parkinson's disease and hand coordination

03 July 2017 - By Scott Heappey

The symptoms of Parkinson's disease (PD) vary from person to person and from day to day. In the early stages, symptoms are mild and may go unnoticed. They are usually unilateral at the beginning; later they become more marked and involve both sides. Shaky hands (tremor), slowness of movement, stiff muscles and impaired postural balance are some of the early issues. Later, the symptoms become so evident that daily activities become difficult or even impossible (1). Hand coordination (eye-hand coordination) is one of the major functions affected by PD, and it varies from day to day (2).

Parkinson's disease is a major neurodegenerative disease that causes progressive disability, mostly affecting elderly populations. The average age of onset is 60, but prevalence may rise to 300-500 per 100,000 after the age of 80. Several single genes have been identified as causes of PD; hence first-degree relatives have a 2-3 times greater chance of developing the disease in later life. Environmental factors have also been implicated as a cause (3).

The chemical concerned here is dopamine, which is released from the substantia nigra, a part of the brain that lies deep to the cerebral cortex. Dopamine released from these pigmented neurons travels through the axons to the basal ganglia (the striatum, comprising the caudate nucleus and putamen, and the globus pallidus). This part of the brain is situated at the base of the forebrain and is extensively connected with the cerebral cortex, thalamus, and other areas of the brain (4). It is known to facilitate voluntary control of the body. When these pigmented dopamine-containing neurons degenerate, dopamine is depleted and, as a result, voluntary control over the limbs and trunk is hampered.

Consider eye-hand coordination: to grasp a cup, we must look at the target (the cup), detect its location relative to the hand, and then reach the arm toward it. There must be coordination between the visual system (eye) and hand movement in an appropriate manner. These collaborative aspects of the gaze and hand systems are referred to as "eye-hand coordination" (5). "Eye-hand coordination may be defined as the skillful, integrated use of the eyes, arms, hands and fingers in fine, precision movements" (2). When impairment in eye-hand coordination appears with Parkinson's disease, these skillful and integrated movements of the eyes, arms, hands and fingers are hampered, particularly during accurate and high-velocity movements (2). There is slowness, stiffness, shakiness, and postural imbalance during motor functions, including hand coordination.

Most patients living with Parkinson's complain about fluctuations in their daily activities, particularly memory, mood, sleep pattern and fatigue. Motor symptoms (tremor, stiffness, slowness, etc.) fluctuate far less in newly diagnosed patients; however, motor fluctuations are a crucial problem in advancing PD. It has been estimated that 50% of patients treated with levodopa (L-DOPA) for 5 years suffer from variations in motor function (4). Several types of fluctuation have been described, the most remarkable being end-of-dose and on-off fluctuations and dyskinesia (impairment of control over ordinary muscle movement, often resulting in spasmodic movements or tics) (7). These fluctuations usually reflect patients' medication levels, activities, sleep patterns, social interactions and stress. As the disease advances, patients become aware that their medicine (L-DOPA) has a shorter duration of action.
The reason for this shortened action is not fully understood, but it is thought that progressive degeneration of the nigro-striatal dopaminergic pathway reduces the ability of nerve terminals to store and release dopamine physiologically. As storage capacity is lost, the duration of available dopamine becomes shorter, so L-DOPA taken as tablets gives rise to a pulsatile action of the drug. Thus there are "on-off fluctuations" and dyskinesia (4, 8). So, as the disease progresses, there are variations in hand coordination in daily activities that vary from day to day.

References

1. Rodriguez-Oroz M, Jahanshahi M, Krack P, Litvan I, Macias R, Bezard E, Obeso J. Initial clinical manifestations of Parkinson's disease: features and pathophysiological mechanisms. The Lancet Neurology. 2009;8(12):1128-1139. http://dx.doi.org/10.1016/s1474-4422(09)70293-5
2. Boisseau E, Scherzer P, Cohen H. Eye-hand coordination in aging and in Parkinson's disease. Aging, Neuropsychology, and Cognition. 2002;9(4):266-275. http://dx.doi.org/10.1076/anec.9.4.266.8769
3. Walker B, Colledge N, Ralston S, Penman I. Davidson's Principles and Practice of Medicine. 22nd ed. Elsevier Limited; 2014:1195-96.
4. Santens P, Boon P, Van Roost D, Caemaert J. The pathophysiology of motor symptoms in Parkinson's disease. Acta Neurol Belg. 2003;103:129-134.
5. Abekawa N, Gomi H. Understanding the coordination mechanisms of gaze and arm movements. NTT Technical Review. 2014;12:1-7.
6. Hallett M. Parkinson disease tremor: pathophysiology. Parkinsonism & Related Disorders. 2012;18:S81. http://dx.doi.org/10.1016/s1353-8020(11)70388-1
7. Maetzler W, Liepelt I, Berg D. Progression of Parkinson's disease in the clinical phase: potential markers. The Lancet Neurology. 2009;8(12):1158-1171. http://dx.doi.org/10.1016/s1474-4422(09)70291-1
8. Kang S, Wasaka T, Shamim E, Auh S, Ueki Y, Lopez G, et al. Characteristics of the sequence effect in Parkinson's disease. Movement Disorders. 2010;25(13):2148-2155. http://dx.doi.org/10.1002/mds.23251
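The mechanism described earlier, in which lost presynaptic storage capacity shortens the duration of benefit from each L-DOPA dose, can be illustrated with a toy decay model. All parameters below are hypothetical and purely illustrative; they are not taken from the cited literature:

```python
import math

def effect_duration(half_life_h: float, buffering: float, threshold: float = 0.3) -> float:
    """Hours a single (normalized) L-DOPA dose stays above an effect threshold.

    Toy model: the effective signal decays exponentially, and `buffering`
    (0-1) stands in for remaining presynaptic dopamine storage capacity,
    which stretches the effective half-life. All numbers are hypothetical.
    """
    effective_half_life = half_life_h * (1.0 + buffering)
    # Solve exp(-ln(2) * t / t_half) = threshold for t.
    return effective_half_life * math.log(1.0 / threshold) / math.log(2.0)

# As storage capacity (buffering) declines with disease progression,
# the duration of benefit from the same dose shortens:
early = effect_duration(1.5, buffering=1.0)  # early disease
late = effect_duration(1.5, buffering=0.1)   # advanced disease
assert late < early
```

The shrinking `effect_duration` is one way to picture why dosing that was smooth early on becomes pulsatile later, producing the on-off pattern the article describes.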
Osseointegrated prosthetic arm controlled via direct nerve implants

Earlier this year, the lab of Dr. Rickard Brånemark at Sahlgrenska University Hospital was the first to permanently implant electrodes into the nerves and muscles of an amputee in order to allow them to directly control an osseointegrated prosthetic arm. The arm itself is anchored directly to the user's bone using a titanium screw. This medical advance will allow those with the implanted prosthetic to control it in a way that is much more similar to controlling a natural limb than anything available to date. Dr. Brånemark stated that "…implanted electrodes, together with a long-term stable human-machine interface provided by the osseointegrated implant, is a breakthrough that will pave the way for a new era in limb replacement."

The theoretical framework for this technology was explained by Neurogadget last year. To summarize, electrodes detect signals from the nerves and muscles in the stump, and control the prosthetic much as they would control the muscles of a natural hand. Until now, however, these electrodes have only been placed on the surface of the skin. Without implanting the electrodes under the skin, the signal was too unstable to allow for reliable functionality, because the electrode placement changed every time the user's skin stretched. By connecting the electrodes directly to the nerves and muscles, researchers are able to allow for much more reliable control of the prosthetic.

The operation was made possible by new technology developed by Max Ortiz Catalan, supervised by Rickard Brånemark at Sahlgrenska University Hospital and Bo Håkansson at Chalmers University of Technology. A video from the research team illustrates how the osseointegrated prosthetic arm is expected to work in the future. The implanted device is "osseointegrated," meaning that it is connected directly to the remaining bone.
Osseointegration circumvents many of the issues inherent in socket prosthetics. According to Dr. Brånemark, "It allows complete degree of motion for the patient, fewer skin related problems and a more natural feeling that the prosthesis is part of the body. Overall, it brings better quality of life to people who are amputees."

Initial tests with the implanted limb have shown a great degree of success. The user is able to perform movements in a coordinated manner, and even to perform several movements simultaneously. Furthermore, the researchers noted that the limb functioned well in hot and cold environmental conditions, which was a limitation of the prosthetic the user had before. It remains to be seen whether the implanted electrodes will allow the user to receive sensory feedback from the limb, an ability which has recently been demonstrated to be possible.

To learn more about the research, check out the official press release or visit www.integrum.se
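The control principle summarized above, in which electrodes pick up muscle (EMG) signals that are decoded into prosthetic movement, typically begins with extracting a smoothed amplitude envelope from the raw signal. The following is a minimal illustrative sketch, not the actual Sahlgrenska/Chalmers decoding pipeline; the window length and threshold are assumptions:

```python
def emg_envelope(samples, window=50):
    """Rectify a raw EMG trace and smooth it with a moving average.

    `samples` is a sequence of raw signal values; `window` is an assumed
    smoothing length in samples. Returns the amplitude envelope.
    """
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)
        chunk = rectified[start:i + 1]
        envelope.append(sum(chunk) / len(chunk))
    return envelope

def exceeds_threshold(envelope, threshold):
    """True wherever the envelope crosses an (assumed) activation threshold,
    e.g. to trigger a 'close hand' command."""
    return [e >= threshold for e in envelope]
```

In a real system each electrode channel would feed a classifier mapping envelope patterns to intended movements; the point here is only that a stable electrode placement, as provided by implantation, keeps such envelopes consistent from day to day.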
Being Human: Conscious Experience
Richard Davidson, Jon Kabat-Zinn, Gelek Rimpoche

The Being Human Conference, which looks at the science behind the human experience, explores conscious experience.

Richard Davidson

Neuroscientist Richard Davidson was named one of the 100 most influential people in the world by Time magazine in 2006. His research focuses on correlating emotional states with the brain activity underlying them. Davidson has reached the conclusion that our brain circuitry isn't set in stone: though our emotions are evolved responses, they are remarkably plastic and can be shaped over time. As he says, "I think that what modern neuroscience is teaching us is that, in fact, there is a lot of plasticity, that change is indeed possible, and the evidence is more and more strongly in favor of the importance of environmental influences in shaping brain function and structure and even shaping the expression of our genes." At the Center for Investigating Healthy Minds, Davidson and other researchers investigate qualities of mind such as compassion and mindfulness in order to understand how healthy minds might be cultivated. He is perhaps most famous for his investigations into the neurological effects of meditation, showing how this practice can functionally rewire the brain. In 2012, he spoke at the Being Human conference in San Francisco.

Jon Kabat-Zinn

Jon Kabat-Zinn, Ph.D. is Professor of Medicine emeritus at the University of Massachusetts Medical School, and founder of the Center for Mindfulness in Medicine, Health Care, and Society and of its world-renowned Mindfulness-Based Stress Reduction (MBSR) Clinic. He is the author of numerous best-selling books that have been translated into over 30 languages. Dr. Kabat-Zinn received his Ph.D. in molecular biology from MIT in the laboratory of Nobel laureate Salvador Luria, MD.
His research career focused on mind/body interactions for healing and on the clinical applications and cost-effectiveness of mindfulness training for people with chronic pain and stress-related disorders, including the effects of MBSR on the brain and how it processes emotions, particularly under stress, and on the immune system (in collaboration with Dr. Richard Davidson). Dr. Kabat-Zinn's work has contributed to a growing movement of mindfulness into mainstream institutions such as medicine, psychology, health care, schools, corporations, prisons, and professional sports. Hospitals and medical centers around the world now offer clinical programs based on training in mindfulness and MBSR. At present, funding and research in mindfulness are increasing exponentially year by year in the United States.

Dr. Kabat-Zinn has received numerous awards over the span of his career, the most recent of which are the Distinguished Friend Award from the Association for Behavioral and Cognitive Therapies (2005); an Inaugural Pioneer in Integrative Medicine Award from the Bravewell Philanthropic Collaborative for Integrative Medicine (2007); and the 2008 Mind and Brain Prize from the Center for Cognitive Science, University of Torino, Italy. He is the founding convener of the Consortium of Academic Health Centers for Integrative Medicine, and a member of the Board of the Mind and Life Institute. His current projects include The Mind's Own Physician, edited by Jon Kabat-Zinn and Richard J. Davidson (Fall 2011), and serving as guest co-editor (with Mark Williams of Oxford University) of a special issue of the journal Contemporary Buddhism devoted to the subject of mindfulness from different classical and clinical perspectives (Volume 12, Issue 1, 2011). He and his wife, Myla Kabat-Zinn, support initiatives to further mindfulness in K-12 education and to promote mindful parenting.

Gelek Rimpoche

Born in Lhasa, Tibet in 1939, Kyabje Gelek Rimpoche was recognized as an incarnate lama at the age of four.
Among the last generation of lamas educated in Drepung Monastery before the Communist Chinese invasion of Tibet, Gelek Rimpoche was forced to flee to India in 1959. He later edited and printed over 170 volumes of rare Tibetan manuscripts that would have otherwise been lost to humanity. He was director of Tibet House in Delhi, India, and a radio host at All India Radio. He conducted over 1,000 interviews in compiling an oral history of the fall of Tibet. In the late 1970s, Rimpoche was directed to teach Western students by his teachers, the Senior and Junior Masters to His Holiness the Dalai Lama. Since that time he has taught Buddhist practitioners around the world. In 1988, Rimpoche founded Jewel Heart, a Tibetan Buddhist Center. His Collected Works now include over 32 transcripts of his teachings, numerous articles, as well as the national bestseller Good Life, Good Death (Riverhead Books, 2001) and The Tara Box: Rituals for Protection and Healing from the Female Buddha (New World Library, 2004).

Segments (01 hr 00 min 25 sec):
Rest in Awareness
Science Has Confirmed Buddhist Philosophy
Life of a 21st Century Lama
Possible for Science to Oppose Buddhist Philosophy
How Meditative Life Informs Scientific Life
Who Has the Experiences?
What Does Being Human Mean to You?

In the closing session, "Being Human: What Does It Mean To You?", Tami Simon asks what "Being Human" means to Jon Kabat-Zinn, Gelek Rimpoche, and Richie Davidson.
Chemicals giant Evonik has inaugurated the world's first plant for the production of a new source of methionine specifically for shrimp and other crustaceans. The product, sold under the name AQUAVI® Met-Met, is an aquaculture feed additive intended to make shrimp farming more efficient and sustainable. The plant's modular design allows production capacity to be increased to meet customer demand, Evonik said in a press release.

"With AQUAVI® Met-Met, we are launching another product for healthy and sustainable animal nutrition. Based on our scientific and technological expertise, we have developed a product innovation that we can now offer to our customers worldwide," said Dr. Reiner Beste, chairman of the board of management of Evonik Nutrition & Care GmbH, at the inauguration ceremony.

Since shrimp farming is concentrated in warmer seas close to the equator, the main markets for AQUAVI® Met-Met are located in Asia as well as in South and Central America. Evonik said it has begun to supply customers from these regions with the new product as the plant is ramped up to capacity.

"We are pleased that Evonik built the first production facility for AQUAVI® Met-Met in Antwerp," said Frank Daman, Evonik site manager in Antwerp. "The new plant affirms our site's key position in Evonik's global production network for methionine." The Antwerp site, with its harbor, is seen as an ideal hub for shipping the product to customers worldwide. AQUAVI® Met-Met is produced in conjunction with an existing methionine plant in a fully backward-integrated process. Evonik said the environmentally friendly production process is water-based and uses no organic solvents.

AQUAVI® Met-Met, a dipeptide made up of two DL-methionine molecules, achieves the same weight increase in shrimp and crustaceans as conventional methionine sources, but uses only half the active substance.
"This is mainly due to the fact that the dipeptide must be enzymatically broken down in the digestive system of the shrimp and is therefore available for protein synthesis at the right time. That in turn means that a higher share can be processed," Evonik said. In addition, AQUAVI® Met-Met is considerably less water-soluble than other methionine sources and therefore does not leach out of feed as quickly. This relieves the burden on the water.

Evonik has over 60 years of experience in the manufacture of essential amino acids and their derivatives.

Evonik's Official Website: corporate.evonik.com/en/
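The efficiency claim above, the same weight increase from half the active substance, amounts to a simple feed-formulation calculation. A minimal sketch follows; the conventional dose figure in the example is hypothetical, and only the one-half ratio comes from the article:

```python
def methionine_required(conventional_dose_g_per_kg: float,
                        relative_efficacy: float = 0.5) -> float:
    """Grams of the Met-Met dipeptide per kg of feed giving the same effect
    as a conventional DL-methionine dose.

    relative_efficacy=0.5 reflects the article's claim that half the active
    substance achieves the same weight increase; the dose values passed in
    are hypothetical examples, not Evonik formulation data.
    """
    return conventional_dose_g_per_kg * relative_efficacy

# Hypothetical example: a feed formulated with 5 g/kg DL-methionine
# would need 2.5 g/kg of the dipeptide for the same weight gain.
```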