AP Photo/Adrian Kraus An American chestnut tree’s trunk shows signs of blight at the State University of New York’s College of Environmental Science & Forestry Lafayette Road Experiment Station in Syracuse, N.Y., in this 2019 file photo. The blight decimated a towering tree species once dominant in forests from Maine to Georgia. It is an exciting time in the field of conservation and biotechnology. For the first time, it appears likely that a tree that has been developed with genetic engineering (GE) could be approved by U.S. regulatory agencies for use in restoring a threatened species to the wild. That tree is the American chestnut, a species decimated by disease in the last century. Proponents of American chestnut restoration have waited several decades for this day to come, and anticipation for its release builds with every passing month. Despite the excitement of many for the chance to once again grow the native chestnut tree, there are some who oppose the use of biotechnology for American chestnut restoration. A recent opinion piece in The Hill expressed concerns held by opponents of the release of GE blight-tolerant chestnut trees, so I will try to address those concerns and provide insight into the efforts to restore the American chestnut. The American chestnut tree was once a prominent member of eastern American forests, but was effectively lost because of a fungal blight pathogen that hitchhiked to the U.S. on resistant species of Asian chestnuts. Decades of research at SUNY College of Environmental Science and Forestry in Syracuse, N.Y., yielded American chestnut trees called ‘Darling’ that harbor an acid-detoxifying gene from wheat, which allows the trees to survive infections by the blight fungus. This feat of modern biology enables the conservation of the native chestnut’s genetics without hybridization with other chestnut species. Opponents of ‘Darling’ trees suggest that Chinese-American hybrid chestnuts would be a safer choice, citing concerns about unexpected changes that could come from the GE techniques used to create the blight-tolerant chestnuts. While American chestnuts bred with Asian chestnut species can have enhanced tolerance to blight infections, many of the important traits of the native chestnut tree are also lost in the process. This is due to the enormous genetic changes that occur when two species hybridize, in which the entire genetic structure of the species is reorganized. In contrast, ‘Darling’ American chestnuts maintain their genetic constitution with the small addition of the gene from wheat. Additionally, the genetic sequence (genome) of ‘Darling’ has been mapped and no unexpected genetic changes were caused by the addition of the gene from wheat. Another concern is the possibility that resistance to blight may not sufficiently persist over time and that the trees eventually may succumb to the infections they were developed to resist. While this is a valid concern, it is unlikely because of the way the wheat gene protects the tree without directly harming the fungus. Also, the end result of this scenario would be the death of GE trees, which is the same outcome currently experienced by wild trees. In other words, there is no added threat, just an extension of the status quo. While it is impossible to know every possible consequence of planting ‘Darling’ chestnut trees in the forest, it is possible to rule out many hypothesized risks.
For example, some of the most sensitive organisms in the chestnuts’ native ecosystem include pollinating insects and amphibians, which suffer from even small exposures to toxins. Experiments were conducted where bumble bees were fed pollen containing the wheat gene and wood frog tadpoles were fed ‘Darling’ chestnut leaves. In both cases, no differences were observed between chestnuts with or without the wheat gene.  These are just two examples of environmental studies performed with ‘Darling’ chestnut trees to test their safety. In all these published studies that looked at interactions with insects, plants, fungi, amphibians and more, no negative effects were observed from the additional gene from wheat. As a result, it is reasonable to conclude that these trees do not present any novel threats to the environment. This is not surprising, since the same gene used in ‘Darling’ chestnuts is naturally found in many species of plants and fungi that live in the eastern forests and other ecosystems. After assessing these studies in a years-long review, the U.S. Department of Agriculture (USDA) has also concluded that these trees do not represent novel risks to the environment. (The USDA’s public comment period on the trees ends Dec. 27.) The current approach of the American Chestnut Foundation is a multi-pronged strategy that uses biotechnology in conjunction with breeding and biocontrol techniques. ‘Darling’ trees fit into the biotechnology category but there is also a decades-long breeding program that has been incorporating resistance genes from Chinese chestnut into advanced generation hybrid American chestnut trees.  These two approaches aren’t seen as an either/or proposition but as complementary techniques that can be explored in parallel, as well as in combination. The third category is biocontrol, which includes several techniques that mitigate infection damage by treating infected trees with biological agents. Biocontrol helps to maintain existing trees in orchards but has potential for mitigating disease damage in wild ecosystems as well. The combination of these three approaches is known as 3BUR: Biotechnology, Breeding, and Biocontrol United for Restoration. A combined strategy has the best chance at success in American chestnut restoration.  In a time filled with seemingly endless stories about degradation of the environment, the potential restoration of the American chestnut using biotechnology presents a clear example of a positive impact humans can have on the environment. This tree may represent the first of many projects where people can help trees to stand their ground against invasive diseases. But for a start, it will be the return of one of America’s most iconic forest tree species.  Erik Carlson is a research project assistant, teaching assistant and doctoral student at SUNY College of Environmental Science and Forestry and a member of the New York chapter of the American Chestnut Foundation.
Environmental Science
New research led by Scripps Institution of Oceanography at UC San Diego has confirmed that coastal water pollution transfers to the atmosphere in sea spray aerosol, which can reach people beyond just beachgoers, surfers, and swimmers. Rainfall in the US-Mexico border region causes complications for wastewater treatment and results in untreated sewage being diverted into the Tijuana River and flowing into the ocean in south Imperial Beach. This input of contaminated water has caused chronic coastal water pollution in Imperial Beach for decades. New research shows that sewage-polluted coastal waters transfer to the atmosphere in sea spray aerosol formed by breaking waves and bursting bubbles. Sea spray aerosol contains bacteria, viruses, and chemical compounds from the seawater. The researchers report their findings March 2 in the journal Environmental Science & Technology. The study appears in the midst of a winter in which an estimated 13 billion gallons of sewage-polluted waters have entered the ocean via the Tijuana River since Dec. 28, 2022, according to lead researcher Kim Prather, a Distinguished Chair in Atmospheric Chemistry, and Distinguished Professor at Scripps Oceanography and UC San Diego’s Department of Chemistry and Biochemistry. She also serves as the founding director of the NSF Center for Aerosol Impacts on Chemistry of the Environment (CAICE). “We’ve shown that up to three-quarters of the bacteria that you breathe in at Imperial Beach are coming from aerosolization of raw sewage in the surf zone,” said Prather. “Coastal water pollution has been traditionally considered just a waterborne problem. People worry about swimming and surfing in it but not about breathing it in, even though the aerosols can travel long distances and expose many more people than those just at the beach or in the water.” The team sampled coastal aerosols at Imperial Beach and water from the Tijuana River between January and May 2019. Then they used DNA sequencing and mass spectrometry to link bacteria and chemical compounds in coastal aerosol back to the sewage-polluted Tijuana River flowing into coastal waters. Aerosols from the ocean were found to contain bacteria and chemicals originating from the Tijuana River. Now the team is conducting follow-up research attempting to detect viruses and other airborne pathogens. Prather and colleagues caution that the work does not mean people are getting sick from sewage in sea spray aerosol. Most bacteria and viruses are harmless and the presence of bacteria in sea spray aerosol does not automatically mean that microbes – pathogenic or otherwise – become airborne. Infectivity, exposure levels, and other factors that determine risk need further investigation, the authors said. This study involved a collaboration among three different research groups - led by Prather in collaboration with UC San Diego School of Medicine and Jacobs School of Engineering researcher Rob Knight, and Pieter Dorrestein of the UC San Diego Skaggs School of Pharmacy and Pharmaceutical Science, both affiliated with the Department of Pediatrics - to study the potential links between bacteria and chemicals in sea spray aerosol with sewage in the Tijuana River. “This research demonstrates that coastal communities are exposed to coastal water pollution even without entering polluted waters,” said lead author Matthew Pendergraft, a recent graduate from Scripps Oceanography who obtained his PhD under the guidance of Prather. 
“More research is necessary to determine the level of risk posed to the public by aerosolized coastal water pollution. These findings provide further justification for prioritizing cleaning up coastal waters.” Additional funding to further investigate the conditions that lead to aerosolization of pollutants and pathogens, how far they travel, and potential public health ramifications has been secured by Congressman Scott Peters (CA-50) in the Fiscal Year (FY) 2023 Omnibus spending bill. Besides Prather, Pendergraft, Knight and Dorrestein, the research team included Daniel Petras and Clare Morris from Scripps Oceanography; Pedro Beldá-Ferre, MacKenzie Bryant, Tara Schwartz, Gail Ackermann, and Greg Humphrey from the UC San Diego School of Medicine; Brock Mitts from UC San Diego’s Department of Chemistry and Biochemistry; Allegra Aron from the UC San Diego Skaggs School of Pharmacy and Pharmaceutical Science; and independent researcher Ethan Kaandorp. The study was funded by UC San Diego’s Understanding and Protecting the Planet (UPP) initiative and the German Research Foundation. About Scripps Oceanography Scripps Institution of Oceanography at the University of California San Diego is one of the world’s most important centers for global earth science research and education. In its second century of discovery, Scripps scientists work to understand and protect the planet, and investigate our oceans, Earth, and atmosphere to find solutions to our greatest environmental challenges. Scripps offers unparalleled education and training for the next generation of scientific and environmental leaders through its undergraduate, master’s and doctoral programs. The institution also operates a fleet of four oceanographic research vessels, and is home to Birch Aquarium at Scripps, the public exploration center that welcomes 500,000 visitors each year. About UC San Diego At the University of California San Diego, we embrace a culture of exploration and experimentation. Established in 1960, UC San Diego has been shaped by exceptional scholars who aren’t afraid to look deeper, challenge expectations and redefine conventional wisdom. As one of the top 15 research universities in the world, we are driving innovation and change to advance society, propel economic growth and make our world a better place. Learn more at ucsd.edu.
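For readers curious how an estimate like the "up to three-quarters of the bacteria" figure quoted above can be derived from sequencing data, below is a minimal, purely illustrative sketch of one common style of source apportionment: decomposing an observed community profile into contributions from candidate sources with non-negative least squares. This is not the study's actual pipeline, and every taxon profile and abundance value is a hypothetical placeholder.

```python
# Minimal sketch (not the study's pipeline): estimate what fraction of an
# aerosol bacterial community is attributable to candidate sources via
# non-negative least squares mixing. All numbers are made up for illustration.
import numpy as np
from scipy.optimize import nnls

# Columns: relative-abundance profiles of two candidate sources
# (sewage-polluted river water vs. clean seawater); rows: bacterial taxa.
sources = np.array([
    [0.40, 0.05],   # taxon A: enriched in the river profile
    [0.30, 0.10],   # taxon B
    [0.20, 0.25],   # taxon C
    [0.10, 0.60],   # taxon D: enriched in the seawater profile
])

# Observed relative abundances in one sea spray aerosol sample (hypothetical).
aerosol = np.array([0.30, 0.24, 0.21, 0.25])

weights, _residual = nnls(sources, aerosol)
fractions = weights / weights.sum()   # normalize to mixing proportions
print(f"estimated river contribution: {fractions[0]:.0%}")
print(f"estimated seawater contribution: {fractions[1]:.0%}")
```

With these placeholder profiles the sketch attributes roughly 70 percent of the aerosol community to the river source, which is the general shape of the reasoning, though the published work relied on DNA sequencing and mass spectrometry rather than this toy decomposition.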
Environmental Science
Kids and parent volunteers ride their bicycles to school on car-free streets as part of the city's "bicibus" (bike bus) in Barcelona, Spain November 23, 2022. REUTERS/Nacho Doce BARCELONA, Nov 28 (Reuters) - It's fun, it's green and it's becoming more popular by the day. Barcelona's bike bus, or "bicibus", as the scheme is known locally, allows hundreds of children to cycle safely to school in a convoy, taking over entire streets in Spain's second largest city. The citizen-led project, supported by Barcelona City Council, began in March 2021 with one route in the Sarria neighbourhood. It now has 15 routes and has inspired similar schemes in the Scottish city of Glasgow and in Portland in the United States. Eight-year-old Lena Xirinacs joins the Eixample route every Friday with her father, who is one of the volunteers ensuring that the children are safe on the road. "She wakes up with joy. I could use it as an excuse every day so that she jumps out of bed," Pablo Xirinacs said. The Eixample route starts at 8.30 am and covers 2.5 kilometres (1.55 miles) in 25 minutes, dropping children at three schools along the way from Monday to Friday. Not all routes operate on a daily basis. Road safety is ensured by a Barcelona police car escort and vigilant parents like Xirinacs, one of 80 parent volunteers in the city, who join the convoy. Organisers estimate that more than 700 people joined the different routes during the 2020-21 school year, which translates into about 15,000 commutes to different Barcelona schools over that time. The bike bus project also aims to encourage more long-term sustainable transportation habits among users, said Jordi Honey-Roses, an urban planner and senior researcher at the Institute for Environmental Science and Technology at Barcelona's Autonomous University. "We anticipate that children who participate in 'bicibus' will be more likely to ride a bike, have better cycling habits, more sustainable transportation habits, and we think they will change the travel patterns of their family as well," Honey-Roses said. Reporting by Horaci Garcia; writing by Catherine Macdonald; editing by Charlie Devereux and Gareth Jones
Environmental Science
Natural gas used for powering household stoves, furnaces and water heaters may contain levels of cancer-linked compounds that are toxic to residents when leaked, a new study has found. The research, published in Environmental Science & Technology on Tuesday, investigated the composition of greater Boston’s “unburned” household gas, or the gas that comes out of kitchen stovetops when switching on the appliance. While sampling natural gas supplies in more than 200 homes, the authors detected varying concentrations of volatile organic compounds (VOCs) — known not only to be carcinogenic, but also to generate secondary air pollutants such as particulate matter and ozone. Though most related research has focused on methane — the primary component of natural gas — and its impacts on climate change, the degree to which other air pollutants are present in natural gas at household “end use” remains largely unexplored, according to the study. “When we talk about natural gas, we just talk about methane,” lead author Drew Michanowicz, a visiting scientist at the Harvard T.H. Chan School of Public Health, told reporters in a call prior to the study’s release. Considering gas beyond its methane contents therefore requires “a paradigm shift” that is critical to understanding the potential health impacts of household exposure, according to Michanowicz, who is also a senior scientist at the PSE Healthy Energy research institute. “Natural gas is mostly methane like pizza sauce is mostly tomatoes,” Michanowicz explained. “There’s other trace ingredients in pizza sauce. You need salt, oregano, pepper.” From December 2019 through May 2021, Michanowicz and his colleagues collected 234 unburned natural gas samples from 69 kitchen stoves and building pipelines across the Boston region, according to the study. Within these samples, they detected 296 unique chemical compounds — 21 of which are designated by the federal government as hazardous air pollutants. “Historically, natural gas has been described as a clean or cleaner fossil fuel,” said co-author Zeyneb Magavi, co-executive director at the Boston-based Home Energy Efficiency Team. “Now that we know there are small quantities of VOCs present in the gas supply in the Greater Boston area, it is reasonable to conclude that our gas supply is not as clean as we thought it once was,” Magavi said. One VOC that the scientists found in 95 percent of the samples was benzene, which is classified by the National Toxicology Program as a known carcinogen. The wintertime concentration of benzene was nearly eight-fold greater than that of the summertime, according to the study. Several other VOCs that are considered “hazardous” by the Environmental Protection Agency also appeared in most samples. Among those compounds were hexane, found in 98 percent of samples; toluene, found in 94 percent; heptane, found in 94 percent; and cyclohexane, found in 89 percent. “Benzene is concerning because it’s a known human carcinogen that affects white and red blood cells and leads to anemia and decreased immune function,” Michanowicz said. “Because of that, it’s strongly regulated,” he added, acknowledging, however, that the benzene levels found in the samples were relatively low. Over the course of the study, Michanowicz said that the team uncovered five leaks — or about one in 20 homes — that were large enough to necessitate a follow-up with an expert.
Michanowicz reiterated that their study focused on hazard identification only and therefore did not assess human exposure or potential associated health effects. “It’s really the first step,” he said. “There’s more research that needs to be done.” Nonetheless, he stressed that any such effects would likely mirror the known impacts already linked to the combustion of natural gas, such as the formation of nitrogen dioxide, particulate matter, carbon monoxide and formaldehyde. “We think there probably is some risk, but that risk may be less than other really well-established environmental health hazards like tobacco smoke,” said co-author Curtis Nordgaard, an environmental health scientist at PSE Healthy Energy. Adding up low-level leaks across a large metropolitan area like Boston could end up being significant, Nordgaard suggested. Also worth considering are those individuals exposed to higher concentrations of gas due to their occupations, such as commercial kitchen or pipeline workers, he said. Even prior to determining the precise health impact of exposure, the authors stressed that there are proactive measures residents can take to minimize potential harm. Increasing filtration and ventilation in buildings is an effective step, as is finding and fixing indoor gas leaks, Magavi explained. “A fossil fuel pipeline literally ends where a kitchen begins,” Michanowicz said. “This is a direct conduit to a gas well, far away, deep underground.” “Cooking over a natural gas flame is probably the most intimate connection with climate change that we never think about,” Michanowicz added.
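The two kinds of summary statistics quoted in this article, detection frequency across samples and a seasonal concentration ratio, are straightforward to compute. The sketch below is purely illustrative: the sample values are made up, not the study's data.

```python
# Illustrative sketch: compute a VOC's detection frequency and the
# winter/summer ratio of mean benzene concentration. Values are hypothetical.
import statistics

samples = [
    # (season, benzene concentration, or None if not detected) - made-up data
    ("winter", 0.80), ("winter", 0.65), ("winter", None), ("winter", 0.72),
    ("summer", 0.09), ("summer", 0.11), ("summer", 0.08), ("summer", None),
]

detected = [c for _, c in samples if c is not None]
detection_rate = len(detected) / len(samples)
print(f"benzene detected in {detection_rate:.0%} of samples")

winter = [c for s, c in samples if s == "winter" and c is not None]
summer = [c for s, c in samples if s == "summer" and c is not None]
ratio = statistics.mean(winter) / statistics.mean(summer)
print(f"winter/summer mean benzene ratio: {ratio:.1f}x")
```

With these placeholder numbers the ratio comes out near eight-fold, echoing the seasonal difference the study reported, though the real analysis worked from 234 measured samples.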
Environmental Science
Human exceptionalism hinders environmental action, scientists find What is nature? When Northeastern University researchers asked a sample of undergraduate students this question last spring, many of their responses included "the outdoors," "flora and fauna that exist without human interference" or "natural environment." "That's a very typical response," says John Coley, professor of psychology at Northeastern. "It's like there's us—and then nature is all the stuff that's not us." Scientists call this widespread way of thinking about the human-nature relationship "human exceptionalism"—when people believe that they exist independently of the ecosystems they live in and draw a sharp line between themselves and what is considered nature. However, from the scientific point of view, humans are part of the living organisms within an ecosystem that interact with the nonliving environment, says Brian Helmuth, professor in the Department of Marine and Environmental Sciences and School of Public Policy and Urban Affairs at Northeastern. Coley and Helmuth are co-authors of new research that aims to decipher how human exceptionalism impacts people's understanding of environmental issues and, ultimately, pro-environmental behavior. This exploration was inspired by another co-author, Nicole Betz, who, while working on her doctoral dissertation in Coley's lab, had found that human exceptionalism appeared to play an important role in how people think about climate change. The conclusions they have arrived at thus far in a recently published paper explain why some proposed solutions never made a difference. The paper is published in the journal Topics in Cognitive Science. "We can come up with all sorts of intricate and cutting-edge science and engineering solutions to environmental problems, but unless those are accepted and taken up by people, and consistent with their worldviews, it's all for nothing," Helmuth says. There's a growing understanding among experts, he says, that science and engineering need to be interfaced with social and cognitive sciences in order to understand how people think about environmental and climate-related issues. People respond to representations of the world that their mind constructs rather than the actual world, Coley says. Understanding how people construct their comprehension of nature and perceive environmental issues can help experts create interventions and start finding common ground with the public, he says. After conducting several studies among Northeastern undergraduate students and outside respondents through Amazon Mechanical Turk, the scientists concluded that human exceptionalism is especially prevalent among Western, educated, industrialized, rich and democratic populations. "As the West influences the rest of the world, I think human exceptionalism is one of the lovely cultural gifts that we've brought [to other places]," Coley says. The researchers also argue that human exceptionalism has serious implications in terms of environmental decision-making, conservation, environmental science, nature management and climate change adaptation. Sometimes, it can invoke feelings of guilt and moral obligation to bear the responsibility for climate change, they say, but if it leads to out-of-context environmental decisions made without an accurate or holistic understanding of natural systems, it can cause further ecological damage. For example, wetlands are really good at absorbing storm surge, Helmuth says, removing pollutants from water and preventing flooding.
When a wetland gets destroyed, humans usually build a seawall in order to replace those services. But a seawall creates further erosion that then needs to be mitigated again. Exceptionalist culture, Helmuth says, may take it as a given that when humans destroy something like other organisms, they can just put a monetary value on the problem and try to mitigate it. However, people don't really pay attention to what nature does for humanity, what kinds of ecosystem services it provides. "Part of what we're advocating for is that we can't operate independently of nature and not expect to be part of that feedback loop, whether we recognize it or not," Helmuth says. "Once something is on a decline, bringing it back is so much harder than protecting it to begin with." This research also shows that higher levels of human exceptionalism discourage pro-environmental attitudes, values and behaviors like mitigating climate change or investing in environmental cleanups. "There is probably the idea that if I'm not a part of this system, then it's less important for me to be invested in preserving and protecting the system," Coley says. "Whereas if I'm intimately connected to the system, then you could even say it's in my own selfish, best interest to be environmental because I depend on the environment." Although people do have some cognitive patterns or biases that seem universal and are built into human cognitive architecture, Coley says, he thinks human exceptionalism is not one of them. "Some research suggests that human exceptionalism is a learned cultural phenomenon," says Joan Kim, a doctoral degree candidate and another co-author of the research. "So, maybe, given a huge cultural shift, we will see a significant decrease in human exceptionalism in the future." The researchers are suggesting that pointed interventions can help decrease human exceptionalism and change the ways in which people are currently thinking about nature and environmental problems. "You don't need to ship people off to Yosemite to get people to start to think about ways in which they're connected to the natural world," Coley says. Instead, interventions can emphasize what local effects pollution of urban waterways, for example, have on people and eventually help them realize that they're completely surrounded and embedded in nature. "By adding the tools and insights of social and cognitive science to those of environmental and biogeophysical science and engineering, we can address the complexity of these problems with a correspondingly complex, interdisciplinary and transcultural response," the researchers say. Helping people overcome an anthropocentric mindset and consider potential impacts on ecosystems is the ultimate challenge if we are to avoid snowballing unintended environmental consequences. "We need to have larger conversations," Kim says. "And I don't think that we can do that until people largely come to an understanding that we're not isolated from nature." More information: Joan J. H. Kim et al, Conceptualizing Human–Nature Relationships: Implications of Human Exceptionalist Thinking for Sustainability and Conservation, Topics in Cognitive Science (2023). DOI: 10.1111/tops.12653 Provided by Northeastern University
Environmental Science
Microbial necromass carbon causes dramatic carbon loss in permafrost thaw slump of Tibetan Plateau Permafrost in the Tibetan Plateau contains a large amount of soil organic carbon (SOC). Climate change leads to rapid permafrost degradation and thermal collapse, which can change the microgeomorphology and soil physical and chemical properties. Previous studies have shown that thermal collapse causes the loss of the soil carbon pool, but the composition and characteristics of lost organic carbon are not well understood. A joint research team led by Prof. Kang Shichang from the Northwest Institute of Eco-Environment and Resources of the Chinese Academy of Sciences (CAS) collected soil samples in the northeastern Tibetan Plateau to study variations of different organic carbon components (from microorganisms and plants) and their controlling factors during thaw slump. They used amino sugars and lignin phenols to represent the relative abundance of microbial necromass and plant lignin in soil. The study was published in Environmental Science & Technology on April 19. They found that the retrogressive thaw slump caused 61% loss of SOC, and the microbial necromass carbon accounted for 54% of the SOC loss in the permafrost thaw slump. In addition, changes in amino sugars were mainly related to the soil moisture content, pH and plant input, while changes in lignin phenols were mainly related to soil moisture and bulk density. This study reveals the differences in organic carbon loss from different sources caused by thaw slump and its controlling factors, which deepens our understanding of the process and mechanism of carbon loss caused by rapid permafrost degradation. More information: Wenting Zhou et al, Dramatic Carbon Loss in a Permafrost Thaw Slump in the Tibetan Plateau is Dominated by the Loss of Microbial Necromass Carbon, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c07274 Journal information: Environmental Science & Technology Provided by Chinese Academy of Sciences
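To make the two headline percentages concrete: if the pre-slump SOC stock is normalized to 1, a 61% loss of which necromass carbon makes up 54% implies that roughly a third of the original stock was lost as microbial necromass carbon. A quick illustrative check:

```python
# Worked arithmetic from the figures quoted above, per unit of pre-slump SOC.
soc_initial = 1.0                         # normalized pre-slump SOC stock
soc_loss = 0.61 * soc_initial             # 61% of SOC lost in the thaw slump
necromass_loss = 0.54 * soc_loss          # necromass C is 54% of that loss
print(f"necromass C lost: {necromass_loss:.2f} of initial SOC")   # ~0.33
print(f"other C lost:     {soc_loss - necromass_loss:.2f} of initial SOC")
```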
Environmental Science
Student Spotlight: Christine Ow Christine Ow is a current student in Columbia University’s MPA in Environmental Science and Policy program. Originally from Singapore, she graduated from the University of California, Los Angeles, in 2022 with a B.A. in political science and minors in global studies, Korean and environmental systems and society. After she graduates from the ESP program this May, Christine will be joining Bluefield Research in Boston as a consultant working at the intersection of water and technology. What made you choose ESP? I came straight out of undergrad and as an international student there are a lot of factors that come in when you reach senior year. Do I get a job or do I go to grad school? It was a mixture of me being a little bit afraid of entering the job market but also, more importantly, I knew I wanted to do environmental work. I wanted to explore the intersection of development and water. I didn’t feel that my undergrad gave me enough skills, experience or knowledge to break into this industry specifically. With that in mind I started looking for programs. Columbia, and SIPA [Columbia’s School of International and Public Affairs] specifically, has always been top of my list because I knew it was known for development, but I didn’t want to commit to a two-year program. I found the ESP program, a one-year program, and spoke with the assistant director at the time, Stephanie. She said that based on my interests, it sounded like a program I would really benefit from. I applied to Columbia and haven’t looked back, and it’s been a fun-filled year so far. What were some highlights of your experience with the program? This has been a year of highlights. Moving to New York City and experiencing everything the city has to offer has been so cool. The field trips that we did in the summer were genuinely really fun. I had not gone on a field trip since middle school so it was a really surreal experience. But also, it was very informative and I really liked how we were able to practically see what we were learning in the field, especially when we were focusing on urban ecology and how the city was adapting to climate change. When it comes to classes, water is my specific interest and very fortunately Columbia does have a Water Center as well as professors in water. What have been your favorite courses? My favorite class up to this point has been my water governance class. The reason why I liked it is because it was my first dedicated class just for water. This means going really in depth into the systems and, when it comes to managing water, what the difficulties are. The professor did a really good job organizing the big themes that we had to talk about as water is a very complex issue. Going to this class made me realize just how much more complex it is. It also reaffirmed my love for the issue and my desire to break into the industry. The other class I really liked was my data analytics and visualization class, and this is because it was challenging. I felt that when I came to grad school, I came with the mindset that I wanted to push my boundaries and to learn new things. I knew that for my personal growth it was really important for me to understand what coding is. It was a great introduction and it allowed me to be more aware of what the capabilities are and, truth be told, it also made me a bit curious to learn [more] in the future at a slower pace. What are your big picture interests? My interest is in water.
It’s a very niche field when it comes to the environment. I am the only one in our cohort of 52 who explicitly centers on water as my main focus. The reason is because one, I don’t think it gets enough attention, and two, I grew up in a water-scarce country. I’m from Singapore — it’s an island country, so we don’t have fresh water. We’re a very small country that doesn’t have our own natural freshwater source. So ever since I was a child, it’s always been hammered into my mind the idea of water scarcity and the potential crisis that might befall us as a result of it. Singapore is very developed and we are very water secure, but an appreciation for water is something that I’ve had since I was a kid. When I was exploring what was my niche in the development space, I found that people were not talking about how water underlies all forms of development. This is a critical conversation because billions of people in the world don’t have access to water. Many of these people are in the United States, which is shocking to many people. With Flint, Jackson, most recently East Palestine and Philadelphia, water is an important field that people don’t talk about enough. I’ve kind of made it my personal mission to break into it. In pursuit of that, I am currently working as a graduate research assistant with the Columbia Water Center where my research is specifically focused on Indigenous water access. I’m building a database to allow researchers to get a more comprehensive overview of which communities have access to water and which communities don’t, and more importantly, why they don’t have access. I’ve been working on this for a couple of months now and it’s been a challenge because of the lack of information and the fact that everything is very dispersed, but it’s also been very rewarding and with the Columbia Water Center I’m trying to see where this research can go in the future. Where do you see yourself going into the future and what are your immediate plans after the program? I am very excited to share that I recently signed a job offer with a research services and consulting firm in Boston called Bluefield Research. They do research and consulting for the water sector! I was not anticipating I would go the private sector route, to be honest. I was not anticipating becoming a consultant or a research analyst for the private sector. I always thought I was going to go nonprofit — my goals were EDF or WRI, but this opportunity came up as I was applying for jobs and I really like the company. I also think it’s important to be in tune with what the private sector is doing because water utilities especially are incredibly private and many solutions to the water problems we have can come from the private sector. That is what I’m looking forward to in the immediate term. I’ll be starting in the summer and be specifically working on digital water, which is like cyber security and the intersection of water and technology. This is another new frontier that I am excited to explore. Way into the future though, I know life will take its course and things will change and my ambitions might change, but right now my big goal is to eventually work for the United Nations on their development work centering on water, and maybe pursue a PhD somewhere along the road. But for now, I’m looking forward to wrapping up the program strong and getting set up in Boston! What do you want people to keep in mind concerning the climate change crisis?
Climate change is happening and the environment is getting a lot more attention and it’s great. However, it is really important for us to think about the environment as a fundamentally intersectional thing — it’s a challenge that we need to tackle through many means. It will be impossible for us to address our challenges today and into the future if we continue to work in silos. I represent a very niche corner of the environmental sector that is finally opening up and involving ourselves in greater conversations about the climate. My call to action to anyone who is a part of the Columbia community is: when you think about the environment, don’t just think about energy, sustainability, etc. — but also try to see where the intersections are. Create relationships with people working in these other fields, so that there is a collective movement and everyone isn’t trying to strive for the same goal using a different approach. If we can all come together and work as one collective machine, it will be way more powerful than our individual efforts. Saj McBurrows is an intern with the MPA in Environmental Science and Policy program. Students in the MPA in Environmental Science and Policy program enroll in a year-long, 54-credit program offered at Columbia University’s School of International and Public Affairs, in partnership with the Climate School. Since it began in 2002, the MPA in Environmental Science and Policy program has given students the hands-on experience, and the analytical and decision-making tools to implement effective environmental and sustainable management policies. The program’s 1,112 graduates have advanced to jobs in domestic and international environmental policy, working in government, private and non-profit sectors. Their work involves issues of sustainability, resource use and global change, in fields focused on air, water, climate, energy efficiency, food, agriculture, transportation and waste management. They work as consultants, advisers, project managers, program directors, policy analysts, teachers, researchers and environmental scientists and engineers.
Environmental Science
Removing carbon from Earth's atmosphere may not reverse devastating changes to weather patterns in vulnerable areas, a new study suggests. In the study, Korean researchers simulated how removing large quantities of the greenhouse gas carbon dioxide from the air might affect the progress of local climate changes related to global warming. The study, based on computer modeling, examined a hypothetical scenario in which carbon dioxide concentrations continued to rise from present-day levels for 140 years, then were gradually reduced back to the initial levels over another 140-year period. The researchers were particularly interested in how these changes would affect vulnerable subtropical regions, which are known to suffer from more intense and more frequent droughts as climate change progresses. The study results suggest that the local climate in these areas would not return to normal for more than 200 years after the carbon dioxide concentrations drop. The Mediterranean region, for example, plagued by ever more severe heatwaves, droughts and wildfires, would continue to suffer and could become even drier, the study found. In the study, the researchers modeled the changes to the air circulation pattern called the Hadley Cell, which transports moisture from the equatorial regions toward the tropics of Cancer and Capricorn, which lie at about 23.5 degrees north and south of the equator, respectively. Scientists have known for years that the Hadley Cell circulation responds to climate change by expanding toward the poles. The humid air that rises from around the equator gets dumped back to Earth at higher and higher latitudes, causing worsening droughts in subtropical regions. The modeling done by the Korean team found that when carbon dioxide is removed from Earth's atmosphere, the Hadley Cell doesn't recover its original shape and extent even after another 220 years. In the Northern Hemisphere, the area where moisture arrives from tropical regions moves closer to the equator, a shift that could make the Mediterranean region drier than it is today. In the Southern Hemisphere, on the other hand, the cell remains slightly expanded toward the South Pole, possibly altering precipitation patterns over Australia. Study lead author Seo-Yeon Kim told Space.com that the unpredictable recovery of the crucial atmospheric circulation pattern has to do with the response of the global ocean to the decrease in temperatures prompted by the carbon dioxide removal. "One of the main reasons for this asymmetrical response [of the Hadley Cell] is the different response of the northern and southern oceans," said Kim, a postdoctoral researcher at the Department of Earth and Environmental Science at Seoul National University in South Korea. "It's related to ocean circulation. The response of the ocean is always slower than the removal of the carbon dioxide, and how fast the ocean responds then determines the recovery of the Hadley Cell." In the study, the team used current carbon dioxide levels as a starting point and modeled a scenario in which concentrations increased by a factor of four before being brought back to the base level. They didn't model a return to the levels that were common in pre-industrial times, before humans began burning fossil fuels. According to the U.S.
National Oceanic and Atmospheric Administration (NOAA), concentrations of carbon dioxide rose to 421 parts per million in 2022, more than 50% above the preindustrial era concentrations. Current concentrations of carbon dioxide are even higher than those of the Pliocene Climatic Optimum, a warm period in Earth's history some 4.5 million years ago when sea levels were up to 82 feet (25 meters) higher than they are today, according to NOAA. Despite the warnings of climatologists and political pledges at international conferences, the world still lags behind the greenhouse gas emission reduction targets needed to curb the progress of global warming. These developments are prompting warnings that other climate interventions, including active carbon removal, will be necessary to prevent the planet from crossing dangerous warming thresholds. There are various ways to remove carbon dioxide from Earth's atmosphere, ranging from early-stage technologies that suck the warming gas from the air and sequester it in artificial stone to more natural interventions involving reforestation or fertilizing parts of the ocean to promote the growth of algae. Developments around the globe already suggest that climate change is getting out of hand, with Antarctica seeing an unprecedented low sea ice extent during its peak winter period this year, extreme heat waves plaguing parts of Europe and North America and unusually high temperatures in the Atlantic Ocean. Kim, however, cautions that the modeling results show that, while carbon removal might reduce temperatures, environmental changes caused by the warming may continue to affect millions of people in vulnerable regions even centuries later. "I think that the main message of our study is that we should reduce carbon dioxide emissions now, because afterwards it gets really difficult," she said. "We cannot control nature, we cannot reverse the consequences that easily; we cannot fix nature." The study was published in the journal Science Advances on Wednesday (July 26).
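The idealized scenario described above, a 140-year ramp up to four times present-day CO2 followed by a 140-year ramp back down, is simple to reproduce. The sketch below assumes a constant fractional growth rate and a mirror-image descent purely for illustration; the study's exact trajectory may differ.

```python
# Minimal sketch of the idealized CO2 pathway described in the article: rise
# from present-day levels to 4x over 140 years, then ramp back down over
# another 140 years. The exponential (fixed %/yr) shape is an assumption.
import numpy as np

c0 = 421.0                       # present-day CO2, ppm (NOAA, 2022)
years_up = 140

# Constant annual growth factor that reaches 4x c0 after 140 years (~1%/yr).
rate = 4.0 ** (1.0 / years_up)

ramp_up = c0 * rate ** np.arange(years_up + 1)
ramp_down = ramp_up[-2::-1]      # mirror the ascent back down to c0
pathway = np.concatenate([ramp_up, ramp_down])

print(f"peak CO2: {pathway.max():.0f} ppm after {years_up} years")
print(f"final CO2: {pathway[-1]:.0f} ppm after {len(pathway) - 1} years")
```

The point of the study is that even though this pathway ends exactly where it began (421 ppm), the Hadley Cell and regional rainfall patterns do not.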
Environmental Science
Southern resident killer whales are an endangered population of the famous black-and-white marine animals also known as orcas. A new study suggests certain chemical contaminants may be implicated in the orcas' decline. A team led by researchers at the University of British Columbia published its findings in the journal Environmental Science & Technology in December. The scientists analyzed tissue from six southern resident killer whales and six Bigg's killer whales stranded along the coast of British Columbia between 2006 and 2018. "They discovered that chemical pollutants are prevalent in killer whales, with a chemical often found in toilet paper one of the most prevalent in the samples studied, accounting for 46 percent of the total pollutants identified," the university said in a statement last week. The compound 4-nonylphenol (4NP) is associated with paper processing and is often used in toilet paper production. It's listed as a toxic substance in Canada and can impact the nervous system and cognitive function. "It can leak into the ocean via sewage treatment plants and industrial runoffs, where it is ingested by smaller organisms and moves up the food chain to reach top predators such as killer whales," the university said. The study is the first to find 4NP in killer whales. The researchers also found 4NP transfers from mother orcas to their fetuses, raising questions about how the chemical might impact fetal development. According to the US Environmental Protection Agency, the southern resident population -- found near British Columbia, Washington state and Oregon -- had only 74 individuals as of December 2020. They are listed as an endangered species in both the US and Canada. EPA points to vessel impacts, low availability of salmon and exposure to contaminants as threats to the orcas' survival. The chemical 4NP is a "contaminant of emerging concern," meaning it's neither well studied nor well regulated. The presence of the chemical in the stranded whales indicates it may have a wider impact on the marine environment and other animals. It could have implications for human health as well, since people eat the same salmon the whales do. The university said governments could help the endangered whales by halting production of the chemicals found in their bodies and by addressing sources of marine pollution. "This research is a wake-up call," said Juan José Alava, study co-author. "Southern residents are an endangered population and it could be that contaminants are contributing to their population decline. We can't wait to protect this species."
Environmental Science
Killer whales, also called orcas, are known for their intelligence and striking presence. They are also enduring a silent but persistent threat beneath the surface of our oceans. My research investigates killer whales and their diets in the North Atlantic. Previous studies have focused on killer whales in the Pacific Ocean. But until now, no data existed for our killer whales in the North Atlantic, including those in Eastern Canada and the Canadian Arctic. With other international researchers, I recently published a study in Environmental Science & Technology that reveals a troubling reality: these apex predators are carrying high levels of persistent organic pollutants (POPs) in their blubber. The accumulation of these synthetic contaminants is also creating health risks for the killer whales. Forever chemicals POPs are also known as “forever chemicals” due to their remarkable stability and long-lasting nature. This group includes well-known compounds like polychlorinated biphenyls (PCBs), chlorinated pesticides like dichlorodiphenyltrichloroethane (DDT) and brominated flame retardants. In the last century, these chemicals were mass produced and used in a wide range of applications, such as industrial processes or agriculture. But research conducted in Sweden in the late 1960s revealed that these chemicals accumulate in living organisms and persist in the environment. The chemicals bind to fats and increase in concentration as they move up the food web, impacting dolphins and whales the most. These animals, being top predators, accumulate the largest concentrations and struggle to eliminate these chemicals. This buildup of contaminants through their diets — known as biomagnification — is especially concerning for marine mammals, as they need ample fat for warmth and energy. A gradient of contamination Our study, focusing on 160 killer whales, reveals a concerning pattern of PCB contamination across the North Atlantic. The concentrations vary significantly, ranging from a staggering 100 mg/kg in the Western North Atlantic to around 50 mg/kg in the mid-North Atlantic. Intriguingly, killer whales in the Eastern North Atlantic carry lower PCB levels at roughly 10 mg/kg in Norway. For context, PCB-related immune effects start at 10 mg/kg, while reproductive failure was observed at 41 mg/kg in marine mammals. Killer whales in Eastern Canada and the Canadian Arctic have PCB levels exceeding twice the threshold linked to reproductive problems in marine mammals. You are what you eat Diet plays a pivotal role in this pattern of contamination. Killer whales that primarily feed on fish tend to have lower contaminant levels. On the other hand, those with diets focused on marine mammals, particularly seals and toothed whales, show higher levels of contaminants. Killer whales with mixed diets — containing both fish and marine mammals — tend to display elevated contaminant levels, particularly in Iceland. Our research investigates the potential impact of diet preferences on killer whale health. Risk assessments suggest that killer whales in the Western North Atlantic, and specific areas of the Eastern North Atlantic where they have mixed diets, face higher risks, directly linked to what they eat. Among the emerging contaminants, hexabromocyclododecane (HBCDD), a flame retardant, is of particular concern. Concentrations of HBCDD in North Atlantic killer whales are among the highest measured in any marine mammals, surpassing levels found in their North Pacific counterparts.
Disappearing sea ice This reveals the fascinating complexity of killer whale ecology and underscores how their dietary choices significantly impact their exposure to environmental pollutants. It also raises some concern for “Arctic-invading” killer whales that progressively move north due to climate change. Killer whales’ large dorsal fin has traditionally prevented them from navigating dense sea ice. But the melting of sea ice has allowed killer whales to access a new habitat with new prey species. There, researchers believe that they will hunt more and more marine mammals, like ringed seals, narwhals and belugas. These dietary shifts, influenced by our changing environment, may result in heightened health risks for apex predators. Maternal transfer means females are less contaminated The study also spotlights a sex difference in contaminant concentrations. Male killer whales appear to be more contaminated than their female counterparts, owing to the transfer of contaminants from adult females to their offspring during gestation and lactation. Killer whale mothers use their own energy to produce fatty milk for their calves, helping them grow quickly and stay healthy. This nutritious milk comes from the mother’s blubber, where contaminants are stored. As she feeds her young ones, she may pass on as much as 70 per cent of these stored contaminants. Urgent action In response to these findings, urgent action is needed to protect North Atlantic killer whales and their ecosystems. The 2001 United Nations Stockholm Convention’s objective to phase out and destroy PCBs by 2028 is slipping out of reach. Substantial quantities of PCB-contaminated waste are stored in deteriorating warehouses, risking contaminants ending up in the environment, and further affecting our ecosystems. To compound the issue, as one chemical gets banned, another often emerges, with enough variations to avoid previous regulations, perpetuating a harmful cycle. To effectively tackle the issue of contaminant accumulation in killer whales, the following actions are necessary:
- Urgent steps are needed for the proper disposal of PCB-contaminated waste, with an emphasis on international collaboration to support nations lacking the infrastructure for waste management.
- It is crucial to prevent the release of potentially more harmful contaminants into the environment by improving toxicity testing of chemicals before they enter the market.
- Collaboration among ecotoxicologists, conservation biologists, policymakers and other stakeholders is essential. Effective strategies to mitigate pollution’s adverse effects can only be developed through collective efforts.
- Targeted conservation efforts should be directed toward populations at higher risk, such as killer whales in the Eastern Canadian Arctic and Eastern Canada.
Chemical pollution has been identified as one of the nine global threats to wildlife, as well as human health in modern times. It is time to give our planet — and killer whales — the relief they need by reducing existing contaminants through concrete actions.
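To see how the regional PCB levels quoted above compare with the published effect thresholds, and how large a first-calf maternal offload can be, consider the following illustrative sketch. The figures are the approximate values cited in this article, not raw study data.

```python
# Illustrative comparison of blubber PCB burdens against the effect thresholds
# cited in the article, plus the up-to-70% maternal offload to a first calf.
IMMUNE_THRESHOLD = 10.0   # mg/kg lipid: onset of immune effects
REPRO_THRESHOLD = 41.0    # mg/kg lipid: reproductive failure observed

pcb_by_region = {          # approximate means quoted in the article
    "Western North Atlantic": 100.0,
    "Mid-North Atlantic": 50.0,
    "Eastern North Atlantic (Norway)": 10.0,
}

for region, level in pcb_by_region.items():
    flags = []
    if level >= IMMUNE_THRESHOLD:
        flags.append("immune effects")
    if level >= REPRO_THRESHOLD:
        flags.append("reproductive failure")
    print(f"{region}: {level:.0f} mg/kg -> {', '.join(flags) or 'below thresholds'}")

# Maternal transfer: a mother may pass up to 70% of her stored burden.
mother_before = 100.0     # mg/kg before first calf (hypothetical)
print(f"after one calf, mother retains ~{mother_before * (1 - 0.70):.0f} mg/kg")
```

The same arithmetic explains the sex difference the study reports: females repeatedly offload contaminants to calves, while males only accumulate.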
Environmental Science
Climate change isn’t the only threat facing California’s birds. Over the course of the 20th century, urban sprawl and agricultural development have dramatically changed the landscape of the state, forcing many native species to adapt to new and unfamiliar habitats. In a new study, biologists at the University of California, Berkeley, use current and historical bird surveys to reveal how land use change has amplified — and in some cases mitigated — the impacts of climate change on bird populations in Los Angeles and the Central Valley. The study found that urbanization and much hotter and drier conditions in L.A. have driven declines in more than one-third of bird species in the region over the past century. Meanwhile, agricultural development and a warmer and slightly wetter climate in the Central Valley have had more mixed impacts on biodiversity. “It’s pretty common in studies of the impact of climate change on biodiversity to only model the effects of climate and not consider the effects of land use change,” said study senior author Steven Beissinger, a professor of environmental science, policy and management at UC Berkeley and a researcher at the campus’s Museum of Vertebrate Zoology (MVZ). “But we’re finding that the individual responses of different bird species to these threats are likely to promote unpredictable changes that complicate forecasts of extinction risk.” The study, publishing today in the journal Science Advances, presents the latest results from UC Berkeley’s Grinnell Resurvey Project, an effort to revisit and document birds and small mammals at sites first surveyed a century ago by UC Berkeley professor Joseph Grinnell. In the current study, the researchers resurveyed birds at 71 sites in L.A. and the Central Valley. They then used their findings — along with current and historical data on land use, average temperature and rainfall — to analyze how shifts in the climate and landscape may have contributed to changes in bird populations. In L.A., they found that 40% of bird species were present at fewer sites today than they were 100 years ago, while only 10% were present at more sites. Meanwhile, in the Central Valley, the proportion of species that experienced a decline (23%) only slightly outnumbered the proportion that increased (16%). In many cases, opposing responses to climate and land use change by bird species, where one threat caused a species to increase while another caused the same species to decline, moderated the impacts of each threat alone. The decline in bird species in L.A. over the past century is similar to the shocking bird community collapse that the research team documented in national parks in the Mojave Desert over the past 100 years, and linked to heat stress from climate change. “The Central Valley had less change, in general — there were winners and losers,” Beissinger said. “Whereas in L.A., we saw mostly losers.” Windfalls and double whammies Grinnell was a teenager when he first started documenting birds in the late 1890s near his childhood home of Pasadena, California. He later perfected his detailed approach to surveying as a professor of zoology at UC Berkeley and the first director of the MVZ. “In those days, they didn’t have fancy binoculars. They didn’t have recordings of bird calls. So, they had to get in and learn the birds through the resources that were available. Oftentimes that was from specimens in museums. Sometimes that was through popular guides or handbooks,” Beissinger said.
“Grinnell was ahead of his time in the way that he was taking field notes, and he was really draconian in also making all his students take those notes.” Grinnell’s meticulous field notes have allowed Beissinger and his team to construct a historical baseline of California’s bird life at the turn of the 20th century. The notes are so detailed that the researchers are able to reconstruct the birds encountered each day and account for the ways new technologies, such as better binoculars and field guides, have made it easier for contemporary biologists to detect birds. This analysis has allowed the team to make direct comparisons between the current and historical bird surveys. To tease apart the disparate and sometimes opposing impacts of land use change and climate change, the researchers analyzed historical maps of urban development and agriculture to determine how the landscape at each study site had been modified during the 20th century. They also obtained historical average temperatures and rainfall at each site. In L.A., they found that species such as Anna’s hummingbird and the American crow were able to adapt to both hotter and drier conditions and to urban development, experiencing what the researchers call a population “windfall.” Other species, such as the western meadowlark and the lark sparrow, were negatively impacted by both changes, instead experiencing a “double whammy.” Species that experienced mixed impacts include the black phoebe, the great egret, the house wren and the blue-gray gnatcatcher. “Our findings really highlight the fact that we’ve got climate and land use change happening at the same time, creating happy conditions for some species, while other species are declining from the same changes,” Beissinger said. “Sometimes, species might also be pushed and pulled in different directions from the climate and land use changes.” Bird species in the Central Valley also experienced a combination of windfalls, double whammies and mixed impacts, but the proportion of species that experienced windfalls was much higher in the Central Valley than in L.A. and nearly offset the proportion that experienced double whammies. “There are some species that have been able to persist under the agricultural changes, and some that even colonized and increased because of those changes. But they tend to be species that are more common and widespread, and the more sensitive species are the ones that started disappearing when the natural grasslands were replaced by agriculture,” Beissinger said. “In the urban areas, there are just fewer species that are able to find what they need and avoid the city hazards.” Additional co-authors of the paper include Sarah A. MacLean of the University of La Verne and Kelly J. Iknayan and Perry de Valpine of UC Berkeley. This work was supported by grants from the National Science Foundation (DEB 1457742, DEB 1911334 and DEB 1601523), the National Geographic Society (9972-16), a UC Berkeley Chancellor’s Fellowship and a Research Professorship from the Miller Institute.
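The detectability correction mentioned above matters because raw site occupancy understates true occupancy whenever detection is imperfect, and the bias differs between eras with different field tools. Below is a minimal sketch of the idea, not the study's actual occupancy model, with hypothetical numbers throughout.

```python
# Minimal sketch: with per-visit detection probability p and k visits, an
# occupied site is detected with probability 1 - (1 - p)**k, so dividing the
# naive (raw) occupancy by that quantity gives a detectability-corrected
# estimate. Era-specific p values below are hypothetical.
def corrected_occupancy(naive_occupancy: float, p_detect: float, visits: int) -> float:
    """Divide raw occupancy by the probability of >=1 detection at an occupied site."""
    p_star = 1.0 - (1.0 - p_detect) ** visits
    return naive_occupancy / p_star

# Modern observers, with better binoculars and call recordings, detect more
# birds per visit than historical observers did (hypothetical values).
historical = corrected_occupancy(naive_occupancy=0.30, p_detect=0.4, visits=3)
modern = corrected_occupancy(naive_occupancy=0.30, p_detect=0.7, visits=3)

print(f"historical corrected occupancy: {historical:.2f}")  # ~0.38
print(f"modern corrected occupancy:     {modern:.2f}")      # ~0.31
# Identical raw rates imply true historical occupancy was higher, i.e. a real
# decline that an uncorrected then-vs-now comparison would miss.
```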
Environmental Science
Environmental toxin PCB found in deep sea trench

PCB has been banned in most countries since the 1970s, but that doesn't mean it no longer exists. Now, deep-sea researchers report that they have found PCB at the bottom of the Atacama Trench in the Pacific Ocean.

During their expedition to the deep-sea trench, the research team retrieved sediment cores and analyzed them for PCB occurrences at five different locations in the trench. All the samples of surface sediment analyzed contained PCB. The study, led by Professor Anna Sobek from the Department of Environmental Science at Stockholm University and Professor Ronnie N. Glud, director of the Danish Center for Hadal Research at the University of Southern Denmark, has been published in the journal Nature Communications.

PCB is short for polychlorinated biphenyls, a term that covers 209 different substances. They were introduced in the 1930s and have been used primarily in building materials and technical components, but are now banned in most countries and classified as a highly persistent environmental toxin. PCB can be carcinogenic and cause reproductive harm.

Although the world's production of PCBs dropped significantly in the 1970s, the substances still pose an environmental threat. In 2018, researchers reported, for example, that half of the world's killer whale populations were weakened by PCB. Another study has found that scavenging amphipods in the deep sea contained large amounts of PCBs.

"It is thought-provoking that we find traces of human activity at the bottom of a deep-sea trench; a place that most people probably perceive as distant and isolated from our society," says Professor Ronnie N. Glud, who has participated in more than 10 expeditions to deep-sea trenches around the world.

These expeditions have helped to dispel the myth that deep-sea trenches are unaffected by what happens on the surface and have provided insight into the surprisingly rich, active, and varied life at the greatest depths of the ocean. The studies have also shown that deep-sea trenches accumulate large amounts of organic material, contributing to the oceans' ability to absorb carbon released into the atmosphere through fossil fuel burning.

However, not only organic material accumulates in deep-sea trenches, which are also called hadal trenches. For example, the Danish Center for Hadal Research reported in 2021 that mercury also accumulates in the trenches' sediments, and in 2022, a similar announcement was made about black carbon, which consists of particles mainly formed by the combustion of fossil fuels.

The concentration of PCBs in samples from the Atacama Trench is not alarmingly high, according to Ronnie N. Glud. He points out that much higher concentrations have been found in places like the Baltic Sea, North Sea, and Tokyo Bay. Concentrations 300–1500 times higher have been measured in the Baltic Sea.

"These are places with a lot of human activity, so one would expect that. The Atacama samples do not show very high concentrations but considering that they were retrieved from the bottom of a deep-sea trench, they are relatively high. A priori no one would expect to find pollutants in such a place," says Ronnie N. Glud.

PCBs are hydrophobic, meaning they are not very soluble in water. Instead, they bind to organic material that sinks to the bottom. "The Atacama trench is located in an area with relatively high production of plankton in surface waters. When the plankton dies, it sinks to the bottom of the ocean," explains Anna Sobek.
In addition, large amounts of material are transported down the steep slopes and are deposited in the deepest areas. Some of the organic material that reaches the bottom of the Atacama trench is eventually decomposed by microorganisms, and as a result, PCBs accumulate in the sediment. PCBs are persistent compounds that are slowly redeposited over time, which is why increasing concentrations can be found in inaccessible areas such as the hadal trenches, even though they were largely banned worldwide in the 1970s.

"Unlike coastal areas where PCB concentrations are typically higher in deeper sediment layers deposited 50 years ago, PCB concentrations in hadal sediments are highest in the upper sediment layers, indicating that PCBs have only recently reached the deeper trenches and that concentrations have not yet peaked: We may see higher concentrations in a few years," says Ronnie N. Glud.

The deep-sea trenches are home to many different microorganisms and animals that have adapted to the extreme living conditions. Perhaps they are also home to organisms that can metabolize the pollutants that are deposited there. That is one of the focus points of the Danish Center for Hadal Research, and for this research the center holds a solid stock of frozen sediment samples collected from expeditions to different deep-sea trenches in 2021 and 2022.

"We are interested in finding out if PCBs are also present in other deep-sea trenches or if they are unique to the Atacama trench. We also want to investigate the bacteria that live down there and learn more about their function," says Ronnie N. Glud.

The deep-sea trenches are located in the hadal zone of the ocean, which lies at depths of 6-11 km. There are about 27 deep-sea trenches, also called hadal trenches, named after the Greek god Hades, who ruled the underworld.

More information: Anna Sobek et al, Organic matter degradation causes enrichment of organic pollutants in hadal sediments, Nature Communications (2023). DOI: 10.1038/s41467-023-37718-z

Journal information: Nature Communications

Provided by University of Southern Denmark
Environmental Science
Arctic ice algae heavily contaminated with microplastics, reports new research

The alga Melosira arctica, which grows under Arctic sea ice, contains ten times as many microplastic particles as the surrounding seawater. This concentration at the base of the food web poses a threat to creatures that feed on the algae at the sea surface. Clumps of dead algae also transport the plastic with its pollutants particularly quickly into the deep sea—and can thus explain the high microplastic concentrations in the sediment there. Researchers led by the Alfred Wegener Institute have now reported this in the journal Environmental Science and Technology.

It is a food lift for bottom-dwelling animals in the deep sea: The alga Melosira arctica grows at a rapid pace under the sea ice during spring and summer months, and forms meter-long cell chains there. When the cells die and the ice to whose underside they adhere melts, they stick together to form clumps that can sink several thousand meters to the bottom of the deep sea within a single day. There they form an important food source for bottom-dwelling animals and bacteria. In addition to food, however, these aggregates also transport a dubious cargo into the Arctic deep sea: microplastics.

A research team led by biologist Dr. Melanie Bergmann from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) has now published their findings. "We have finally found a plausible explanation for why we always measure the largest amounts of microplastics in the area of the ice edge, even in deep-sea sediment," Melanie Bergmann reports.

Until now, the researchers only knew from earlier measurements that microplastics concentrate in the ice during sea ice formation and are released into the surrounding water when it melts. "The speed at which the alga descends means that it falls almost in a straight line below the edge of the ice. Marine snow, on the other hand, is slower and gets pushed sideways by currents so sinks further away. With the Melosira taking microplastics directly to the bottom, it helps explain why we measure higher microplastic numbers under the ice edge," explains the AWI biologist.

On an expedition with the research vessel Polarstern in summer 2021, she and a research team collected samples of Melosira algae and the surrounding water from ice floes. The partners from Ocean Frontier Institute (OFI), Dalhousie University and the University of Canterbury then analyzed these in the laboratory for microplastic content. The surprising result: the clumps of algae contained an average of 31,000 ± 19,000 microplastic particles per cubic meter, about ten times the concentration of the surrounding water.

"The filamentous algae have a slimy, sticky texture, so it potentially collects microplastic from the atmospheric deposition on the sea, the sea water itself, from the surrounding ice and any other source that it passes. Once entrapped in the algal slime they travel as if in an elevator to the seafloor, or are eaten by marine animals," explains Deonie Allen of the University of Canterbury and Birmingham University, who is part of the research team.

Since the ice algae are an important food source for many deep-sea dwellers, the microplastic could thus enter the food web there. But it is also an important food source at the sea surface and could explain why microplastics were particularly widespread among ice-associated zooplankton organisms, as an earlier study with AWI participation shows.
In this way, it can also enter the food chain at the surface, when the zooplankton is eaten by fish such as polar cod, which are in turn eaten by seabirds and seals, and these in turn by polar bears.

The detailed analysis of plastic composition showed that a variety of different plastics are found in the Arctic, including polyethylene, polyester, polypropylene, nylon, acrylic and many more. In addition to various chemicals and dyes, this creates a mix of substances whose impact on the environment and living creatures is difficult to assess.

"People in the Arctic are particularly dependent on the marine food web for their protein supply, for example through hunting or fishing. This means that they are also exposed to the microplastics and chemicals contained in it. Microplastics have already been detected in human intestines, blood, veins, lungs, placenta and breast milk and can cause inflammatory reactions, but the overall consequences have hardly been researched so far," reports Melanie Bergmann.

"Micro and nano plastics have basically been detected in every place scientists have looked in the human body and within a plethora of other species. It is known to change behaviors, growth, fecundity and mortality rates in organisms and many plastic chemicals are known toxins to humans," says Steve Allen, OFI Dalhousie University, a research team member.

Moreover, the Arctic ecosystem is already threatened by the profound environmental upheavals caused by the climate crisis. If the organisms are now additionally exposed to microplastics and the chemicals they contain, it can weaken them further. "So, we have a combination of planetary crises that we urgently need to address effectively. Scientific calculations have shown that the most effective way to reduce plastic pollution is to reduce the production of new plastic," says Melanie Bergmann, and adds, "This should therefore definitely be prioritized in the global plastics agreement that is currently being negotiated." That is why Bergmann is also accompanying the next round of negotiations, which will begin in Paris at the end of May.

More information: High levels of microplastics in the Arctic ice alga Melosira arctica, a vector to ice-associated and benthic food webs, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c08010

Provided by Alfred Wegener Institute
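The tenfold figure reported above is a straightforward enrichment ratio. A quick back-of-the-envelope check in Python, where the surrounding-water concentration is an assumed value implied by the reported "about ten times":

```python
# Back-of-the-envelope check of the enrichment factor quoted above.
# algae values are the ones reported in the article (mean and sd per m^3);
# the water value is an assumption implied by the "about ten times" figure.
algae_mean = 31_000      # microplastic particles per m^3 in algal clumps
algae_sd = 19_000
water_mean = 3_100       # assumed surrounding-water concentration

enrichment = algae_mean / water_mean
print(f"Enrichment factor: {enrichment:.1f}x")   # ~10x

# The wide sd means the per-sample factor could plausibly vary a lot:
low, high = (algae_mean - algae_sd) / water_mean, (algae_mean + algae_sd) / water_mean
print(f"Rough range: {low:.1f}x to {high:.1f}x")
```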
Environmental Science
The branching coral, Acropora hyacinthus, is a prevalent coral species on the reefs of Mo’orea, French Polynesia, and was used as a focal species in this study.

Across the globe and in the Caribbean, coral reef ecosystems—home to an estimated 25 percent of all marine life—face crisis after crisis. Anthropogenic climate change leaves these delicate ecosystems increasingly vulnerable to stressors like mass bleaching, ultimately causing widespread declines in health. In addition to these threats, stony coral tissue loss disease (SCTLD) has emerged and is decimating coral reef communities across the Caribbean at an alarming pace.

Scientists are working to determine the cause of SCTLD, which was first observed off the Florida coast in 2014 and has since been reported in 28 Caribbean nations, to identify possible treatment methods and ways to better prevent its spread. Bacteria and viruses have emerged as possible suspects for its pathogen, the disease-causing agent. Filamentous viruses in particular have gained recent attention as a possible contributor to the disease.

However, researchers from the University of California, Berkeley and Rice University have discovered evidence that filamentous viruses may be globally distributed in corals, and not a component unique to this devastating disease. Their new study, published today in The ISME Journal, probes the relationship between SCTLD and viral infections of dinoflagellates, a group of symbiotic algae crucial to the health of coral reefs.

“There are a lot of different hypotheses out there that try to identify the cause of this disease [SCTLD], but our work has clarified there might not be a ‘one pathogen-one disease’ model in play,” said Lauren Howe-Kerr, who led the research while a postdoctoral researcher at Rice University. “We will need more lines of evidence to determine what is causing this.”

Environmental Science, Policy, and Management professor Adrienne Correa, who joined Rausser College this fall from Rice University, served as the study’s senior author.

An emerging challenge

Coral colonies infected with SCTLD tend to exhibit rapidly expanding lesions, which often spread quickly across the colony's surface, consuming the coral’s living tissue until none remains. The disease can kill small corals within weeks, while larger colonies—like Florida’s “Big Momma,” which was reportedly over 300 years old and roughly the size of a small car—can die in months.

“There has been a strong desire by states, scientists, and ocean managers to understand what this disease is and what is causing it,” said Correa. “While we know corals contain dinoflagellates and bacteria that are important for their health, they also contain viruses that we know less about.”

Lauren Howe-Kerr collects a tissue biopsy of the branching coral, Acropora hyacinthus, off the coast of Mo’orea, French Polynesia, for genomic preservation and microscopy imaging.

In 2021, a team led by researchers from the National Wildlife Health Center Field Station in Honolulu published a series of images captured using a transmission electron microscope that showed filamentous virus-like particles (VLPs) in dinoflagellates sampled from corals afflicted by SCTLD. The finding led many to ask whether filamentous viruses caused the disease. “People really began to look for different lines of evidence to see whether or not this was true,” Correa said.
Howe-Kerr, who also conducted her PhD research in Correa’s lab, formerly at Rice, had previously worked to compile a separate database of more than 700 dinoflagellate cell samples from healthy and bleached coral colonies collected between March 2018 and August 2019. Those samples were taken from reefs off the coast of Mo’orea, French Polynesia, then imaged using a transmission electron microscope.

According to Howe-Kerr, the 2021 publication, which included images from corals in Florida, helped members of Correa’s lab correctly identify filamentous VLPs in their images. “It was wild to identify the VLPs in our images,” she said. “At the time, we thought these structures weren’t part of the normal cell machinery, but they were so common and abundant that we weren’t quite sure they were viruses.”

Navigating uncharted waters

Researchers in Correa’s lab identified filamentous VLPs in images of healthy and bleached coral samples collected in the South Pacific island of Mo’orea even though SCTLD has not been reported in the Pacific Ocean basin. The VLPs were most abundant in dinoflagellates of bleached corals sampled during a marine heatwave, suggesting that high temperatures may worsen these viral infections. They also documented potential filamentous viruses in dinoflagellate cells expelled from coral colonies. The movement of these expelled cells in the water on the reef may be one way these microorganisms are transmitted from one coral colony to another.

“All this suggests that these viruses might actually be widespread in coral dinoflagellates around the world,” explained Howe-Kerr. “This tells us that they are not solely associated with SCTLD and that we need to take a much closer look at what these viruses are doing.”

A large mounding coral, Porites cf. lobata, in the back reef waters of Mo’orea, French Polynesia.

Correa, who has studied coral reef virology for the past 14 years, said more research is needed to understand better how viruses and microorganisms affect coral colonies' overall health and function. To advance that goal, her lab is now collaborating with scientists at Oregon State University and the Smithsonian at the University of California’s Gump Research Station to deep sequence corals and other reef organisms—including their resident viruses—on Mo’orea.

“This work will create an unprecedented dataset with a lot of information,” Correa said. “We will likely identify additional viruses that we currently don’t know are found on reefs, and doing that will inherently generate more questions about their function.”

Additional co-authors include Anna Knochel, Matthew Meyer, Carly Karrick, and Alex Veglia of Rice University; Oregon State University professors Andrew Thurber and Rebecca Vega Thurber; George Mason University PhD student Jordan Sims; and Carsten Grupstra, a postdoctoral researcher at Boston University. Funding was provided by the National Science Foundation and the Gordon and Betty Moore Foundation.
Environmental Science
A study of school uniforms in the US and Canada reveals high levels of so-called "forever chemicals".

The chemicals, known as PFAS, are used to make clothing resistant to stains or water, but they have been linked to asthma, obesity and fertility issues. Researchers found that uniforms made with 100% cotton showed higher levels than synthetic materials. Exposing children to these chemicals may increase the long-term health risk, the scientists believe. The issue is less of a concern in the UK because almost all retailers' own-brand uniforms are PFAS free, say campaigners.

From fire-fighting foams to food packaging and textiles, PFAS chemicals are widely used because of their non-stick and water-resistant properties. But researchers have long been concerned about these chemicals, known as per- and polyfluoroalkyl substances, because they don't break down under normal environmental conditions. These "forever chemicals", which number in the thousands, persist in soil and water. However, they can also accumulate in the human body when ingested.

While direct evidence linking them to health problems is mixed, scientists are concerned about exposure, particularly in young people, whose lower body weight and sensitive development may result in a greater lifetime threat.

This latest study focuses on the contact with the chemicals that may occur as a result of wearing school uniforms. It's estimated that around 20% of children in the US and Canada wear uniforms, in public and private schools.

"All of these clothes that we targeted are polo shirts and khaki pants, the usual uniforms, but they were specifically marketed as stain resistant," said Dr Marta Venier from Indiana University, who led the study. "So we were selective in picking clothes that were labelled as stain resistant. And what we found was that PFAS were present in all of these items."

The researchers looked for total fluorine levels in the products to indicate the presence of PFAS. They found that the school uniform samples had higher levels than weather-resistant outdoor wear. Products made from 100% cotton contained more than synthetic products. The scientists believe that this is because synthetic items have a higher water and stain resistance.

The researchers acknowledge that their study is small, involving 72 product samples labelled as water or stain resistant. The researchers also don't know how or if the PFAS in the clothing get into children's bodies.

"We're chemists, not toxicologists. So we try not to go into realms that are not necessarily ours," said Dr Venier. "But we know that PFAS have health concerns, so the idea that children wearing uniforms, that can continuously release these substances, is a concern," she said.

Growing worries about PFAS have seen some of the chemicals banned in the UK and EU. In the UK, environmental charity Fidra has led a campaign to inform retailers about the presence of PFAS in school uniforms.

"We found that people that bought uniforms, with these coatings, actually washed them more often, and replaced them just as much as people that didn't buy the stain-resistant ones," said Dr Clare Cavers, a senior project manager with Fidra. "The stain-resistant coatings came off uniforms after 10 to 20 washes. So if they were buying the uniform for that function, it wasn't making any difference."

As a result of their efforts, all major retailers in the UK now sell their own-brand school uniforms PFAS free, said Dr Cavers. The move to eliminate PFAS from clothing and textiles is gathering pace, with California ending such sales from 2025. The EU is also looking at a ban, and the UK is examining the idea as part of a post-Brexit chemical strategy.

The study has been published in Environmental Science and Technology Letters.
Environmental Science
Air pollution is a major public health problem: The World Health Organization has estimated that it leads to over 4 million premature deaths worldwide annually. Still, it is not always extensively measured. But now an MIT research team is rolling out an open-source version of a low-cost, mobile pollution detector that could enable people to track air quality more widely.

The detector, called Flatburn, can be made by 3D printing or by ordering inexpensive parts. The researchers have now tested and calibrated it in relation to existing state-of-the-art machines, and are publicly releasing all the information about it — how to build it, use it, and interpret the data.

“The goal is for community groups or individual citizens anywhere to be able to measure local air pollution, identify its sources, and, ideally, create feedback loops with officials and stakeholders to create cleaner conditions,” says Carlo Ratti, director of MIT’s Senseable City Lab.

“We’ve been doing several pilots around the world, and we have refined a set of prototypes, with hardware, software, and protocols, to make sure the data we collect are robust from an environmental science point of view,” says Simone Mora, a research scientist at Senseable City Lab and co-author of a newly published paper detailing the scanner’s testing process.

The Flatburn device is part of a larger project, known as City Scanner, using mobile devices to better understand urban life. “Hopefully with the release of the open-source Flatburn we can get grassroots groups, as well as communities in less developed countries, to follow our approach and build and share knowledge,” says An Wang, a researcher at Senseable City Lab and another of the paper’s co-authors.

The paper, “Leveraging Machine Learning Algorithms to Advance Low-Cost Air Sensor Calibration in Stationary and Mobile Settings,” appears in the journal Atmospheric Environment. In addition to Wang, Mora, and Ratti, the study’s authors are: Yuki Machida, a former research fellow at Senseable City Lab; Priyanka deSouza, an assistant professor of urban and regional planning at the University of Colorado at Denver; Tiffany Duhl, a researcher with the Massachusetts Department of Environmental Protection and a Tufts University research associate at the time of the project; Neelakshi Hudda, a research assistant professor at Tufts University; John L. Durant, a professor of civil and environmental engineering at Tufts University; and Fabio Duarte, principal research scientist at Senseable City Lab.

The Flatburn concept at Senseable City Lab dates back to about 2017, when MIT researchers began prototyping a mobile pollution detector, originally to be deployed on garbage trucks in Cambridge, Massachusetts. The detectors are battery-powered and rechargeable, either from power sources or a solar panel, with data stored on a card in the device that can be accessed remotely.

The current extension of that project involved testing the devices in New York City and the Boston area by seeing how they performed in comparison to already-working pollution detection systems. In New York, the researchers used five detectors to collect 1.6 million data points over four weeks in 2021, working with state officials to compare the results. In Boston, the team used mobile sensors, evaluating the Flatburn devices against a state-of-the-art system deployed by Tufts University along with a state agency.
In both cases, the detectors were set up to measure concentrations of fine particulate matter as well as nitrogen dioxide, over an area of about 10 meters. Fine particulate matter refers to tiny particles often associated with burning matter, from power plants, internal combustion engines in autos and fires, and more.

The research team found that the mobile detectors estimated somewhat lower concentrations of fine particulate matter than the devices already in use, but with a strong enough correlation so that, with adjustments for weather conditions and other factors, the Flatburn devices can produce reliable results.

“After following their deployment for a few months we can confidently say our low-cost monitors should behave the same way [as standard detectors],” Wang says.

“We have a big vision, but we still have to make sure the data we collect is valid and can be used for regulatory and policy purposes,” Duarte adds: “If you follow these procedures with low-cost sensors you can still acquire good enough data to go back to [environmental] agencies with it, and say, ‘Let’s talk.’”

The researchers did find that using the units in a mobile setting — on top of automobiles — means they will currently have an operating life of six months. They also identified a series of potential issues that people will have to deal with when using the Flatburn detectors generally. These include what the research team calls “drift,” the gradual changing of the detector’s readings over time, as well as “aging,” the more fundamental deterioration in a unit’s physical condition.

Still, the researchers believe the units will function well, and they are providing complete instructions in their release of Flatburn as an open-source tool. That even includes guidance for working with officials, communities, and stakeholders to process the results and attempt to shape action.

“It’s very important to engage with communities, to allow them to reflect on sources of pollution,” says Mora.

“The original idea of the project was to democratize environmental data, and that’s still the goal,” Duarte adds. “We want people to have the skills to analyze the data and engage with communities and officials.”
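The calibration workflow described above, correlating low-cost readings against reference monitors and adjusting for weather, can be sketched as a simple regression. This is an illustrative reconstruction on synthetic data, not the paper's actual model (the paper used machine-learning calibration); the variable names and coefficients here are invented.

```python
# Illustrative sketch of low-cost sensor calibration against a reference
# monitor, with weather covariates. Synthetic data; the paper used more
# sophisticated machine-learning calibration models.
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_pm25 = rng.gamma(shape=4.0, scale=3.0, size=n)   # reference PM2.5, ug/m^3
temp = rng.normal(15, 8, n)                           # deg C
rh = rng.uniform(20, 90, n)                           # relative humidity, %

# Assume the low-cost sensor reads low and is biased by humidity
# (a commonly reported failure mode for cheap particulate sensors).
sensor = 0.7 * true_pm25 + 0.05 * rh + rng.normal(0, 1.0, n)

# Fit a linear correction: reference ~ sensor + temp + rh
X = np.column_stack([np.ones(n), sensor, temp, rh])
coef, *_ = np.linalg.lstsq(X, true_pm25, rcond=None)
corrected = X @ coef

r = np.corrcoef(corrected, true_pm25)[0, 1]
print(f"Correlation after calibration: {r:.3f}")
```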
Environmental Science
A portable instrument to measure indoor air pollution

Most people know air pollution exists outside from cars, trucks and industry, but many are unaware their indoor air quality could be worse than that of a big city. Until now, there has been no easy way to measure indoor air quality given the size and complexity of the equipment—it would likely fill a single car garage and need several scientists to operate it—but researchers at York University have designed an instrument that could assess pollution levels inside homes and businesses.

The total reactive nitrogen (tNr) instrument, developed by York University Assistant Professor Trevor VandenBoer of the Faculty of Science along with former York postdoctoral researcher Leigh Crilley, uses an oven to measure a variety of chemicals that make up indoor air pollution and is the size of a small bookcase on wheels. In the future, it could be loaded onto a truck and navigated through the doorways of homes and businesses to measure reactive nitrogen species in the air in a kitchen, bedroom or basement by professionals with training similar to that of energy auditors.

"The purpose of this instrument is to target emissions we know come from cooking and cleaning and have a huge impact on our indoor air quality. Managing indoor air quality involves more than just using the range hood over your gas stove, especially when your space is sealed tight for winter," says VandenBoer. "Some of the pollutants, or reactive nitrogen species, can come from stoves, furnaces, fireplaces or even burning candles, but it can also come from food you cook, such as a steak or a piece of fish. Cooking can have a large impact on the level of indoor pollutants, such as ammonia and amines."

Using a gas stove, compared to an electric hot plate, would also emit much higher levels of chemicals that would become airborne, such as gas-phase nitrous acid, as well as nitrogen oxide and nitrogen dioxide. Cleaning, building materials and even human breath and skin emissions can be other significant sources of indoor air pollution. Even using cleaning staples, such as hydrogen peroxide and bleach, can create high emissions and lead to significantly worse air quality, say the researchers.

"There is a need for this kind of tool to measure indoor air quality, especially given the detriments to health associated with high levels of reactive nitrogen oxides," says Crilley. "Typically, there has been no good way to measure the average home's indoor air quality and this instrument could provide an unobtrusive way to do that."

The researchers tested the tNr instrument in a commercial kitchen, known for its complex indoor environment with rapidly changing levels of pollutants. Compared to the more passive method for testing indoor air pollution, this method was able to detect about 82% of reactive nitrogen species. Although not immediately available for use, the idea is that the instrument could provide people and businesses with a good understanding of their levels of indoor pollution so they can take steps to address it.

The idea for designing the open-source components of the instrument came from an earlier study where the researchers measured the indoor air quality of a home in New York and found unexpectedly high pollution levels. "We realized we really needed new instruments to study the pollution in these spaces," says VandenBoer. "There are still a lot of outstanding questions. For example, is the air in your indoor space safe for you to breathe? What makes indoor air good or bad?
Could there be simple things we could do? These are questions that atmospheric chemists are just beginning to turn their attention to." The paper, "An instrument to measure and speciate the total reactive nitrogen budget indoors: description and field measurements," was published in Environmental Science: Processes & Impacts. More information: Leigh R. Crilley et al, Emerging investigator series: an instrument to measure and speciate the total reactive nitrogen budget indoors: description and field measurements, Environmental Science: Processes & Impacts (2023). DOI: 10.1039/D2EM00446A
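The 82% figure above is, in flavor, a budget-closure calculation: individually speciated reactive nitrogen measurements set against a total. A toy illustration of that bookkeeping in Python; the species list and concentrations are hypothetical, chosen only to make the arithmetic concrete.

```python
# Toy illustration of reactive-nitrogen budget bookkeeping. Species and
# concentrations are hypothetical, for arithmetic only (units: ppb).
speciated = {
    "NO": 12.0,
    "NO2": 18.5,
    "HONO": 2.4,       # gas-phase nitrous acid, noted in the article
    "NH3": 30.2,       # ammonia, e.g., from cooking emissions
    "amines": 1.1,
}
total_nr_measured = 78.3   # hypothetical total reactive nitrogen reading

closure = sum(speciated.values()) / total_nr_measured
print(f"Speciated fraction of total reactive nitrogen: {closure:.0%}")
# A fraction well below 100% points to unmeasured or unidentified Nr species.
```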
Environmental Science
Photosynthesis plays a crucial role in shaping and sustaining life on Earth, yet many aspects of the process remain a mystery. One such mystery is how Photosystem II, a protein complex in plants, algae and cyanobacteria, harvests energy from sunlight and uses it to split water, producing the oxygen we breathe. Now researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Lawrence Berkeley National Laboratory, together with collaborators from Uppsala University, Humboldt University, and other institutions, have succeeded in cracking a key secret of Photosystem II.

Using SLAC’s Linac Coherent Light Source (LCLS) and the SPring-8 Angstrom Compact free electron LAser (SACLA) in Japan, they captured for the first time in atomic detail what happens in the final moments leading up to the release of breathable oxygen. The data reveal an intermediate reaction step that had not been observed before. The results, published today in Nature, shed light on how nature has optimized photosynthesis and are helping scientists develop artificial photosynthetic systems that mimic photosynthesis to harvest natural sunlight to convert carbon dioxide into hydrogen and carbon-based fuels.

“The more we learn about how nature does it, the closer we get to using those same principles in human-made processes, including ideas for artificial photosynthesis as a clean and sustainable energy source,” said co-author Jan Kern, a scientist at Berkeley Lab.

Co-author Junko Yano, also at Berkeley Lab, said, “Photosystem II is giving us the blueprint for how to optimize our clean energy sources and avoid dead ends and dangerous side products that damage the system. What we once thought was just fundamental science could become a promising avenue to improving our energy technologies.”

Bases loaded

During photosynthesis, Photosystem II’s oxygen-evolving center – a cluster of four manganese atoms and one calcium atom connected by oxygen atoms – facilitates a series of challenging chemical reactions that act to split apart a water molecule to release molecular oxygen. The center cycles through four stable oxidation states, known as S0 through S3, when exposed to sunlight. On a baseball field, S0 would be the start of the game when a player on home base is ready to go to bat. S1-S3 would be players on first, second, and third. Every time a batter connects with a ball, or the complex absorbs a photon of sunlight, the player on the field advances one base. When the fourth ball is hit, the player slides into home, scoring a run or, in the case of Photosystem II, releasing one molecule of breathable oxygen.

In their experiments, the researchers probed this center by exciting samples from cyanobacteria with optical light and then probing them with ultrafast X-ray pulses from LCLS and SACLA. The data revealed the atomic structure of the cluster and the chemical process around it.
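The baseball analogy above describes what is known as the Kok cycle: each absorbed photon advances the oxygen-evolving center by one S-state, and the fourth step passes through the transient S4 state that releases O2. Below is a minimal sketch of that clock in Python; real centers also suffer misses and double hits, which this toy model ignores, and capturing S4 itself is the point of the study.

```python
# Minimal sketch of the Kok cycle driving oxygen evolution in Photosystem II.
# Each photon advances the S-state by one; the S3 -> (S4) -> S0 transition
# releases one O2 molecule.

def run_kok_cycle(n_photons: int, start_state: int = 1) -> int:
    """Count O2 molecules released after n_photons flashes.

    Dark-adapted Photosystem II mostly sits in S1, which is why, in classic
    flash experiments, the first burst of O2 appears on the third flash.
    """
    state = start_state
    o2_released = 0
    for _ in range(n_photons):
        state += 1
        if state == 4:          # transient S4: O-O bond forms, O2 leaves
            o2_released += 1
            state = 0           # back to S0, bases cleared
    return o2_released

print(run_kok_cycle(3))   # 1 -- first O2 on the third flash from S1
print(run_kok_cycle(12))  # 3 -- then one O2 per four flashes
```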
A home run

Using this technique, the scientists for the first time imaged the mad dash for home – the transient state, or S4, where two atoms of oxygen bond together and an oxygen molecule is released. The data showed that there are additional steps in this reaction that had never been seen before.

“Other experts argued that this is something that could never be captured,” said co-author Uwe Bergmann, a scientist and professor at the University of Wisconsin-Madison. “It’s really going to change the way we think about Photosystem II. Although we can't say we have a unique mechanism based on the data yet, we can exclude some models and ideas people have proposed over the last few decades. It’s the closest anyone has ever come to capturing this final step and showing how this process works with actual structural data.”

The new study is the latest in a series undertaken by the team over the past decade. Earlier work focused on observing various steps of the photosynthetic cycle at the temperature at which it occurs in nature.

“Most of the process that produces breathable oxygen happens in this last step,” said co-author Vittal Yachandra, a scientist at Berkeley Lab. “But there are several things happening at different parts of Photosystem II and they all have to come together in the end for the reaction to succeed. Just like how in baseball, factors like the location of the ball and the position of the basemen and fielders affect the moves a player takes to get to home base, the protein environment around the catalytic center influences how this reaction plays out.”

Brighter X-rays for a brighter future

Based on these results, the researchers plan to conduct experiments designed to capture many more snapshots of the process. “There are still things happening in between that we could not catch yet,” Kern said. “There are more snapshots we really want to take which would bridge the remaining gaps and tell the whole story.”

To do so, they need to push the quality of their data even further. In the past, these types of measurements proved challenging because the X-ray signals from the samples are faint and the rates at which existing X-ray lasers like LCLS and SACLA produce X-ray pulses are too slow. “It took quite some effort to optimize the setup, so we couldn't collect all the data we needed for this one publication in a single experiment,” said co-author and SLAC scientist Roberto Alonso-Mori. “These results actually include data taken over six years.”

When an LCLS upgrade called LCLS-II comes online later this year, the repetition rate will skyrocket from 120 pulses per second to up to a million per second. “With these upgrades, we will be able to collect several days’ worth of data in just a few hours,” Bergmann said. “We will also be able to use soft X-rays to further understand the chemical changes happening in the system. These new capabilities will continue to drive this research forward and shed new light on photosynthesis.”

Key components of this work were carried out at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), Berkeley Lab’s Advanced Light Source (ALS) and Argonne National Laboratory’s Advanced Photon Source (APS). LCLS, SSRL, APS, and ALS are DOE Office of Science user facilities. This work was supported by the DOE Office of Science and the National Institutes of Health, among other funding agencies.
Citation: A. Bhowmick et al., Nature, 3 May 2023 (10.1038/s41586-023-06038-z)
Environmental Science
Across the U.S., children and adults are increasingly exposed to harmful chemicals from a source few people are even aware of. It begins on a street outside a home or school, where a worker in a manhole is repairing a sewer pipe. The contractor inserts a resin-soaked sleeve into the buried pipe, then heats it, transforming the resin into a hard plastic pipe. This is one of the cheapest, most common pipe repair methods, but it comes with a serious risk: Heating the resin generates harmful fumes that can travel through the sewer lines and into surrounding buildings, sometimes several blocks away.

These chemicals have made hundreds of people ill, forced building evacuations and even led to hospitalizations. Playgrounds, day care centers and schools in several states have been affected, including in Colorado, Connecticut, Massachusetts, Michigan, Pennsylvania, Washington and Wisconsin.

With this sewer pipe repair method, the chemical waste is blown into the air and can enter buildings through buried sewer pipes, plumbing, foundation cracks, windows, doors and HVAC units.

With the 2022 Bipartisan Infrastructure Law now sending hundreds of millions of dollars into communities across the U.S. to fix broken pipes, the number of children and adults at risk of exposure will likely increase. For more than a decade, my colleagues and I have worked to understand and reduce the risks of this innovative pipe repair technique. In two new studies, in the Journal of Environmental Health and Environmental Science and Technology Letters, we show that workers, and even bystanders, including children, lack adequate protection. Our research also shows the technology can be used safely if companies take appropriate action.

Fixing aging pipes with harmful chemicals

As U.S. water infrastructure ages, communities nationwide are grappling with thousands of broken sewer pipes in their 1.3 million-mile inventory. The new law provides US$11 billion for sewer fixes, about one-fifth of the EPA’s estimate of the need.

The least expensive repair method is called cured-in-place pipe, or CIPP. It avoids the need to dig up and replace pipes. Instead, contractors insert a resin-saturated sleeve in the manhole and through the buried pipe. The resin is then “cooked,” typically with steam or hot water, and transformed into a hard plastic.

One challenge is that the resin safety data sheets do not disclose all of the chemicals, and some entirely new ones are created during heating. Chemical plumes rising from nearby manholes and contractor exhaust pipes are also not just “steam.” These plumes contain highly concentrated chemical mixtures, uncooked resin, particulates and nanoplastics that can harm human health. When we examined the heating process in the lab, we found that as much as 9% of the resin was emitted into the air.

CIPP production is known to discharge about 40 chemicals. Some cause nausea, headaches and eye and nasal irritation. They can also lead to vomiting, breathing difficulties and other effects.
Styrene, the most frequently documented chemical, is acutely toxic and “reasonably anticipated” to cause cancer, according to the National Research Council. Chemicals other than styrene can be responsible for plume toxicity.

CIPP-associated illnesses in nearby buildings

So far, chemical exposures have been reported in at least 32 states and seven countries. In addition to schools, this process has contaminated homes, restaurants, medical facilities and other businesses. Companies have been cited for exposing their workers to unsafe levels of styrene.

The earliest U.S. incident we know about was in 1993 at an animal shelter in Austin, Texas. Seven people were overcome by fumes and transported to a hospital. In 2001, fumes entered a hospital in Tampa, Florida, causing employee breathing problems. Since then, hundreds more people are known to have been exposed, and the numbers are likely much higher.

In our experience, exposures are rarely made public. Municipalities have encouraged people affected by the fumes to only contact the CIPP contractor and pipe owner. In some cases, people were told the exposures were always harmless.

Chemicals can enter buildings through sinks, toilets, foundation cracks, doors, windows and HVAC systems. The chemicals can even enter buildings that have water-filled plumbing traps. Anticipating this risk, bystanders have been told to cover their toilets and close all windows and doors. Wind can help dilute outdoor chemical levels. However, concentrated plumes can rush through buried pipes into nearby buildings. Bathroom vent fans may sometimes increase the indoor chemical levels. Levels that should prompt firefighters to wear respirators have been found in the buried pipes.

The highest levels have been found during and after the heating process. Hand-held air testing devices commonly used by some firefighters and contractors do not accurately identify specific chemical levels. An earlier study showed the styrene levels were sometimes wrong by a thousandfold.

How to protect public health

With the wave of infrastructure projects coming, it’s clear that controls are needed to lower the risk that people will be harmed. Our research points to several actions that residents, companies and health officials can take to keep communities safe.

We advise residents to:

- Close all windows and doors, fill plumbing traps with water and leave the building during pipe-curing operations, especially when children are in the building.
- Report unusual odors or illnesses to health officials or call 911.
- Seek medical advice from health officials, not the contractors or pipe owners.
- Evacuate buildings when fumes enter.

Companies can minimize risks too. They can:

- Stop the cooking process when fumes leave the worksite to lessen the spread of contamination and exposures.
- Use resins that release less air pollution than standard resins.
- Ask federal agencies to evaluate hand-held air testing device use.
- Capture and treat air pollution from the process. While this has not yet been done at scale, it is straightforward and would be a fraction of the overall project cost. This waste will be hazardous because of its toxicity.

Public health and environmental agencies should also get engaged. Federal agencies know that the practice poses health risks and can be fatal to workers.
California and Florida recognize in safety documentation that bystanders could be harmed. But, so far, few steps have been taken to protect workers’ and bystanders’ health.
Environmental Science
International research team analyzes February 2023 Ohio train derailment

On February 3, 2023, a train derailed in the United States near East Palestine, Ohio, leading to the combustion of vinyl chloride. Following that accident, an international team of researchers undertook an in-depth analysis of its environmental consequences. Their analysis is published in the journal Frontiers of Environmental Science & Engineering.

In their analysis, the team examined a series of questions related to the environmental risk and management of the chemical accident. "We emphasized that it is unscientific to overestimate or underestimate the environmental risk of this event, and that an accurate environmental risk assessment requires comprehensive environmental monitoring data, which is currently far from complete. However, based on the chemical amount, nature, and combustion conditions, the potential environmental risks of the leaked chemicals and their combustion by-products deserve attention," said Bin Wang, an associate professor at Tsinghua University.

The first question the team explored was whether or not vinyl chloride should be burned on the spot after the train derailed. The team noted that moving the derailed chemical tank cars can be very difficult and dangerous, so it is better to deal with the chemical at the scene of the accident. There was a risk of explosion, which is more dangerous and uncontrollable than ignition.

The second question the team examined was whether or not the vinyl chloride was burned under control in this accident. They noted that emergency responders carried out a "controlled combustion" after the train derailed. However, even with controlled combustion, hazardous by-products can be generated. Because of the hazardous effects of vinyl chloride and its combustion byproducts, it was necessary to evacuate the local residents.

The third question the team raised was what environmental risk was caused by the combustion of vinyl chloride in this accident. They noted that publicly there were conflicting views about the environmental risk. Some believed that the risk was very high, fearing that the accident would pollute Lake Erie and the Ohio River, which provides drinking water for millions of people. Others held the opposing view, feeling that the environmental risk was very small. Based on the view that the risk was small, the mandatory evacuation order was revoked on February 8, when monitoring showed that the water and air were safe. The team suggests that both of these opposing views might be wrong. They note that the accident has surely caused environmental and health risks to some extent. While the pollution in the air would quickly decrease, chemicals in the soil and groundwater will continue to pollute the environment over a long period of time unless they are cleaned up.

The fourth question the team examined was what follow-up work should be done. At this point there is not enough information to conduct a comprehensive risk assessment. Follow-up work that includes monitoring and evaluation, hazardous waste disposal, site remediation, and resident relocation is needed. This comprehensive risk assessment and remediation will take time. While this assessment is happening, local residents should be relocated to avoid potential exposure to the toxic chemicals. If the assessment confirms that the environment is free of risks, then the residents can return to their homes.
"In this emergency case, there were actually no good solutions to simultaneously avoid explosion and environmental risks after the train derailment. Hence it is better to solve the chemical risks from the source. We also need to bridge the gap between chemical safety management and environmental risk management," said Wang. The team notes that while modern industry and agriculture require the use of more and more chemicals, these chemicals can pollute the environment. They suggest ways to reduce the risks by substituting highly hazardous chemicals for more environment-friendly ones, through safer handling of chemicals, and with stricter transportation management. Since long before the February 2023 train derailment, the research team has been engaged in research on environmental risk assessment and control of chemicals, including the chemicals themselves and their by-products. "With regard to the emerging contaminant control action being carried out in China, we expect to be able to more closely link environmental pollutants and their source chemicals to coordinate chemical safety management and environmental risk management. We hope that our research and proposals can contribute to a healthy environment free of toxic chemicals," said Wang. More information: Bin Wang et al, Insight of chemical environmental risk and its management from the vinyl chloride accident, Frontiers of Environmental Science & Engineering (2023). DOI: 10.1007/s11783-023-1652-x Provided by Higher Education Press
Environmental Science
Victoria Fortiz, right, then a graduate student at Penn State, and Jean Self-Trail, a research geologist at the U.S. Geological Survey, work on a core sample from the Howards Tract site in Maryland.

UNIVERSITY PARK, Pa. — Changes in Earth’s orbit that favored hotter conditions may have helped trigger a rapid global warming event 56 million years ago that is considered an analogue for modern climate change, according to an international team of scientists.

“The Paleocene-Eocene Thermal Maximum is the closest thing we have in the geologic record to anything like what we’re experiencing now and may experience in the future with climate change,” said Lee Kump, professor of geosciences at Penn State. “There has been a lot of interest in better resolving that history, and our work addresses important questions about what triggered the event and the rate of carbon emissions.”

The scientists analyzed core samples from a well-preserved record of the PETM near the Maryland coast using astrochronology, a technique for dating sediments against orbital patterns that occur over tens to hundreds of thousands of years, known as Milankovitch cycles. They found the shape of Earth’s orbit, or eccentricity, and the wobble in its rotation, or precession, favored hotter conditions at the onset of the PETM and that these orbital configurations together may have played a role in triggering the event.

“An orbital trigger may have led to the carbon release that caused several degrees of global warming during the PETM as opposed to what’s a more popular interpretation at the moment that massive volcanism released the carbon and triggered the event,” said Kump, the John Leone Dean in the College of Earth and Mineral Sciences.

The findings, published in the journal Nature Communications, also indicated the onset of the PETM lasted about 6,000 years. Previous estimates have ranged from several years to tens of thousands of years. The timing is important to understand the rate at which carbon was released into the atmosphere, the scientists said.

“This study allows us to refine our carbon cycle models to better understand how the planet reacts to an injection of carbon over these timescales and to narrow down the possibilities for the source of the carbon that drove the PETM,” said Mingsong Li, assistant professor in the School of Earth and Space Sciences at Peking University and a former assistant research professor of geosciences at Penn State, who is lead author on the study.

A 6,000-year onset, coupled with estimates that 10,000 gigatons of carbon were injected into the atmosphere as the greenhouse gases carbon dioxide or methane, indicates that about one and a half gigatons of carbon were released per year.

“Those rates are close to an order of magnitude slower than the rate of carbon emissions today, so that is cause for some concern,” Kump said. “We are now emitting carbon at a rate that’s five to 10 times higher than our estimates of emissions during this geological event that left an indelible imprint on the planet 56 million years ago.”

The scientists conducted a time series analysis of calcium content and magnetic susceptibility found in the cores, which are proxies for changes in orbital cycles, and used that information to estimate the pacing of the PETM.
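The release-rate figure above is simple division, and the comparison to today follows from it. A quick check of the quoted numbers; the modern emission rate used here is an assumed round value for illustration, roughly consistent with the quote's "five to 10 times higher":

```python
# Quick check of the PETM carbon-release arithmetic quoted above.
petm_carbon_gt = 10_000      # gigatons of carbon injected (article's estimate)
onset_years = 6_000          # refined onset duration from this study

petm_rate = petm_carbon_gt / onset_years
print(f"PETM release rate: ~{petm_rate:.1f} Gt C per year")   # ~1.7

# Assumed modern fossil-carbon emission rate for comparison (illustrative;
# commonly cited values are around 10 Gt C per year).
modern_rate = 10.0
print(f"Modern rate is ~{modern_rate / petm_rate:.0f}x the PETM rate")
```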
Earth’s orbit varies in predictable, calculable ways due to gravitational interactions with the sun and other planets in the solar system. These changes impact how much sunlight reaches Earth and its geographic distribution and therefore influence the climate.

“The reason there’s an expression in the geologic record of these orbital changes is because they affect climate,” Kump said. “And that affects how productive marine and terrestrial organisms are, how much rainfall there is, how much erosion there is on the continents and therefore how much sediment is carried into the ocean environment.”

Erosion from the paleo Potomac and Susquehanna rivers, which at the onset of the PETM may have rivaled the discharge of the Amazon River, carried sediments to the ocean where they were deposited on the continental shelf. This formation, called the Marlboro Clay, is now inland and offers one of the best-preserved examples of the PETM.

“We can develop histories by coring down through the layers of sediment and extracting specific cycles that are creating this story, just like you could extract each note from a song,” Kump said. “Of course, some of the records are distorted and there are gaps — but we can use the same types of statistical methods that are used in apps that can determine what song you are trying to sing. You can sing a song and if you forget half the words and skip a chorus, it will still be able to determine the song, and we can use that same approach to reconstruct these records.”

Timothy Bralower, professor of geosciences at Penn State, also contributed to this research. Other contributors were James Zachos, distinguished professor at the University of California Santa Cruz; William Rush, a postdoctoral associate at Yale University and the Cooperative Institute for Research in Environmental Science at the University of Colorado Boulder; and Jean Self-Trail and Marci Robinson, research geologists at the Florence Bascom Geoscience Center, United States Geological Survey. The National Key R&D Program of China and the Heising-Simons Foundation provided funding for this work.
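The song-identification analogy above corresponds to standard spectral analysis: orbital cycles leave periodic signatures in proxy records that a periodogram can pick out even through noise. A minimal sketch with a synthetic proxy series; the periods, amplitudes, and noise level are invented, and the study applied far more careful time-series methods to real core data.

```python
# Minimal astrochronology sketch: recover orbital periods from a noisy
# synthetic proxy series via a periodogram. All numbers are invented.
import numpy as np

dt_kyr = 1.0                            # one sample per thousand years
t = np.arange(0, 2000, dt_kyr)          # 2-million-year synthetic record

# Synthetic "calcium content" proxy: eccentricity (~100 kyr) and
# precession (~21 kyr) cycles plus noise.
rng = np.random.default_rng(1)
proxy = (np.sin(2 * np.pi * t / 100.0)
         + 0.5 * np.sin(2 * np.pi * t / 21.0)
         + rng.normal(0, 0.5, t.size))

# Periodogram: power versus frequency.
power = np.abs(np.fft.rfft(proxy - proxy.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt_kyr)

# Report the two strongest periodicities (skipping the zero frequency).
top = np.argsort(power[1:])[::-1][:2] + 1
for i in top:
    print(f"Strong cycle near {1.0 / freqs[i]:.0f} kyr")
```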
Environmental Science
Letting those leaves pile up? New research shows leaf litter contains persistent free radicals

Research led by environmental health sciences (ENHS) associate professor Eric Vejerano has found that leaves are a source of biogenic persistent free radicals (BPFRs). Vejerano and Ph.D. in ENHS alumna Jeonghyeon Ahn published their findings in Environmental Science & Technology Letters. "We found that both coniferous and broadleaf plants contained substantial levels of persistent free radicals," he says. "This suggests that the vast amount and perpetual supply of leaf litter is an unaccounted source of persistent free radicals that, if toxic, may have negative health impacts when inhaled or ingested." As an atmospheric/air quality scientist, Vejerano's work focuses on environmental pollutants, particularly those with the potential to be airborne. He specializes in studying and tracking environmentally persistent free radicals (EPFRs)—a class of pollutants that can remain in the environment for hours or even months, sometimes traveling long distances and capable of causing adverse impacts to human and environmental health. In his lab at the South Carolina SmartState Center for Environmental Nanoscience and Risk, Vejerano and his team look at the EPFRs that are created by human activities, such as manufacturing and driving gasoline-powered vehicles. They also study naturally-occurring, or biogenic, persistent free radicals (BPFRs), including those generated by forest fires. With this latest study, leaf litter can be added to the list of BPFR sources. Though EPFRs and their environmental/health risks have been studied extensively in the decades since they were discovered in 1954, most research has focused on those resulting from combustion and thermal processes. With this study, Vejerano and his team turned their attention to naturally-occurring materials, looking to see whether BPFRs can develop and stabilize in leaves. In addition to looking at different types of plants, they also assessed the presence of BPFRs in live and decaying leaves as well as their persistence through multiple wet and dry cycles. The BPFR levels not only persisted but increased throughout the wet/dry cycles. "With 82 percent of the Earth's land biomass comprised of plants, the presence of BPFRs in leaf litter has significant implications," Vejerano says. "When contained in leaves, BPFRs pose no health threats. However, when leaf litter eventually disintegrates, BPFRs can be released into the environment and then dispersed, where they can create potential hazards for human and environmental health." More information: Eric P. Vejerano et al, Leaves are a Source of Biogenic Persistent Free Radicals, Environmental Science & Technology Letters (2023). DOI: 10.1021/acs.estlett.3c00277 Journal information: Environmental Science & Technology Letters Provided by Arnold School of Public Health
Environmental Science
Backwashing affects the removal of micropollutants and the dynamic changes in the microbial community in sand filters

Sand filters are commonly applied in drinking water treatment and can efficiently remove suspended solids, organic matter, and microorganisms from source water. During the process, particulate matter and microbes can attach to filter sands and develop a thick biofilm in both rapid and slow sand filters. To prevent clogging and restore pollutant removal efficiency, backwashing using air, water, or a combination of both is usually required for sand filters. However, backwashing can induce a loss in biomass, thus decreasing the pollutant removal efficiency of sand filters. Slow sand filters, whose empty bed contact times (EBCTs) are typically longer than one hour, have a higher removal efficiency of micropollutants than rapid sand filters; however, their clogging issues are more severe. Slow sand filters can remove a fraction of the dissolved organic carbon, ammonium, and manganese through microbial transformation and degradation. Additionally, recent evidence suggests that some organic micropollutants can be removed by sand filters, which is at least partly attributed to the biofilm formed in the filter sands. Owing to the limited micropollutant adsorption onto sand material, biodegradation of micropollutants by indigenous microbial populations is the most crucial removal mechanism of micropollutants in slow sand filters. During backwashing, the water and air used can expand the volume of the filter media, and thus strip and remove the biofilm from the filter sands. Therefore, biofilm formation occurs periodically in the slow sand filter, and microbial community dynamics are crucial for pollutant removal in the sand filter after backwashing. However, few studies have focused on microbial biofilm community dynamics after backwashing. To address this gap, Dr. Yaohui Bai from the Chinese Academy of Sciences and his team worked together to reveal the temporal dynamics of both micropollutant concentrations and the microbial community after backwashing, and to identify the optimal intervals for backwashing slow sand filters for micropollutant removal. They conducted a laboratory column experiment to track the dynamics and resilience of the microbial biofilm community and the corresponding micropollutant removal after backwashing with slow sand filters. Two types of filter materials, manganese and quartz sand filters, were used to fully compare the influence of backwashing on the microbial community in slow sand filters under two different EBCTs. The research team's study, titled "Impacts of backwashing on micropollutant removal and associated microbial assembly processes in sand filters," is published online in Frontiers of Environmental Science & Engineering. In this study, the temporal dynamics of micropollutant removal and microbial community composition after backwashing were tracked. The removal efficiencies of caffeine, sulfamethoxazole, sulfadiazine, trimethoprim, and atrazine gradually recovered within two days in both sand filters at two-hour EBCT, whereas declining trends in sulfadiazine and trimethoprim degradation were found at four-hour EBCT. After backwashing, the removal efficiency of atenolol in the manganese sand filter increased rapidly but remained at a high level (almost 100%) in the quartz sand filter for both EBCTs. Correspondingly, the active biomass recovered within two days under all conditions.
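For readers unfamiliar with the metric, removal efficiency in a column experiment like this one is simply (C_in − C_out) / C_in. The sketch below shows how post-backwashing recovery would show up in that number; all concentrations here are invented for illustration, not values from the study:

```python
# Removal efficiency as typically computed for a filter column experiment:
# efficiency = (C_in - C_out) / C_in. All concentrations are hypothetical.
inflow_ug_L = {"caffeine": 2.0, "sulfamethoxazole": 1.5}
outflow_ug_L_by_day = {
    0: {"caffeine": 1.6, "sulfamethoxazole": 1.3},  # right after backwashing
    2: {"caffeine": 0.2, "sulfamethoxazole": 0.4},  # after two days of recovery
}
for day, outflow in outflow_ug_L_by_day.items():
    for compound, c_out in outflow.items():
        efficiency = (inflow_ug_L[compound] - c_out) / inflow_ug_L[compound]
        print(f"day {day}, {compound}: removal = {efficiency:.0%}")
```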
Microbial community composition gradually recovered to the pre-backwashing level at two-hour EBCT; the recovered microbes accounted for 82.76% ± 0.43% and 46.82% ± 4.34% of the community in the manganese and quartz sand filters, respectively. In contrast, at four-hour EBCT, the community composition in the sand filters did not recover to the pre-backwashing level (R < 0.25), and depleted microbes were the major group in both types of sand filters. This study is the first to explore variations in micropollutant degradation and the temporal dynamics of the microbial community after backwashing. The results indicate that a two-day backwashing interval, which allows micropollutant removal to recover, should not affect the operation of slow sand filters with a short EBCT. The work deepens our understanding of how the microbial biofilm community affects pollutant removal in sand filters and provides a new perspective on pollutant removal in drinking water treatment. More information: Donglin Wang et al, Impacts of backwashing on micropollutant removal and associated microbial assembly processes in sand filters, Frontiers of Environmental Science & Engineering (2022). DOI: 10.1007/s11783-023-1634-z Provided by Higher Education Press
Environmental Science
EPA's new PFAS rules don't account for major source of drinking water contamination

Earlier this year, the US Environmental Protection Agency proposed maximum allowable levels in drinking water for six PFAS (per- and polyfluoroalkyl substances)—so-called forever chemicals. But the draft standards do not account for half of the PFAS at contaminated sites across the country. The findings are from a team led by the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and are published in the journal Environmental Science & Technology. PFAS are present in fire retardant foams among other products and have been building up in the environment since they were first invented by DuPont in the 1930s and manufactured widely by 3M beginning in the 1950s. Exposures to some PFAS are linked to a range of health risks including cancer, immune suppression, diabetes, and low infant birth weight. PFAS compounds come in two forms: a precursor form and a terminal form. Most of the monitored PFAS compounds are terminal compounds. The EPA's draft drinking water rules are for six terminal compounds that do not degrade under normal environmental conditions. Precursor compounds can be transformed through biological or environmental processes into terminal forms. There are many precursor compounds, most of which are not routinely monitored, and none are currently regulated. The U.S. military is the largest global user of fire-retardant foams containing PFAS known as AFFF (aqueous film forming foam). For decades, hundreds of military bases across the U.S. and around the world used AFFF containing high levels of PFAS for fire training drills and fighting fires. AFFF use is one of the largest sources of PFAS contamination in drinking water. "Many PFAS precursors present in AFFF are difficult to measure. This work shows that they are slowly transforming into PFAS of health concern at AFFF-contaminated sites and contributing to downstream contamination," said Elsie Sunderland, Fred Kavli Professor of Environmental Chemistry and Professor of Earth and Planetary Sciences at SEAS and senior author on the new paper. Much of the PFAS at military sites consists of precursors that are omitted from standard analytical methods. Using a method previously developed in the Sunderland lab that captures all precursors in AFFF, the Harvard team modeled the expected duration and contribution of those precursors to groundwater contamination. The study finds that contamination by two of the newly regulated PFAS chemicals (perfluorohexane sulfonate, or PFHxS, and perfluorobutane sulfonate, or PFBS) at one military base on Cape Cod, Massachusetts, is sustained by microbial precursor biotransformation in the soil. These precursors are retained in the soil, where they leach into groundwater in terminal form at concentrations thousands of times greater than the safe levels established by the EPA. Using a computer model and field data, the researchers projected that, without remediation, widespread PFAS contamination of drinking water supplies near military facilities is likely to persist for centuries. Despite contamination of nearby aquifers that may already pose a risk to human health, the majority of PFAS are still sitting in the soils surrounding these contaminated sites, emphasizing the urgent need for advances in remediation technology that are effective at cleaning up both terminal and precursor compounds.
Since regulations focus only on terminal compounds, the effectiveness of current remediation technologies at cleaning up precursors is not known. The researchers concluded that elevated PFAS exposures downstream of more than 300 U.S. military facilities that used the fire-fighting foams could similarly persist for centuries. "The role of PFAS precursors in sustaining hazardous levels of contamination at Joint Base Cape Cod raises concern about whether exposure risks are underestimated near hundreds of other sites where they are not measured," said Bridger Ruyle, the first author of the study and a former doctoral student in Sunderland's lab. The public comment period for EPA's draft PFAS drinking water regulation closes on May 30. While the draft regulation is a step in the right direction, there are thousands of PFAS chemical structures, several hundred of which have already been detected in the environment, Sunderland notes. In related work also published in Environmental Science & Technology today, Sunderland's group has shown that the number of military fire training areas within a watershed is a good predictor of PFAS contamination in a community's drinking water supply. But some groups are at higher risk than others; a forthcoming publication by the Sunderland lab documents marked sociodemographic disparities in exposures to PFAS and proximity to PFAS sources across the country. Additional authors include Colin Thackray and Chad Vecitis of Harvard; Craig Butt of AB Sciex LLC; and Denis LeBlanc and Andrea Tokranov of the U.S. Geological Survey. More information: Centurial Persistence of Forever Chemicals at Military Fire Training Sites, Environmental Science & Technology (2023). Journal information: Environmental Science & Technology
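The centuries-long persistence argument can be illustrated with a deliberately oversimplified first-order decay model of the soil precursor reservoir. The half-life below is a hypothetical placeholder, not a value from the study, whose actual model couples transport, biotransformation kinetics, and field data:

```python
# Deliberately oversimplified: treat the soil precursor reservoir as declining
# with first-order kinetics. The half-life is a hypothetical placeholder.
import math

half_life_years = 50                  # assumed effective half-life of the reservoir
k = math.log(2) / half_life_years     # first-order rate constant

for t in (50, 100, 200, 300):
    remaining = math.exp(-k * t)
    print(f"after {t:>3} yr: {remaining:.0%} of the precursor reservoir remains")
```

Even with this generous half-life, a quarter of the reservoir is still leaching after a century, which is the qualitative point behind the team's "persist for centuries" projection.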
Environmental Science
Storage tanks for contaminated water at the Fukushima Daiichi nuclear power plant are near capacity. PHILIP FONG/AFP via Getty Images

Workers in Japan have started releasing treated radioactive water from the Fukushima Daiichi nuclear power plant into the Pacific Ocean. The plant was destroyed in a 2011 earthquake and massive tsunami, and water has been accumulating ever since. On Thursday, the Chinese government announced it was immediately suspending aquatic imports, such as seafood, from Japan. A review by the UN's nuclear watchdog says that the discharge will have a negligible radiological impact on people and the environment, but some nations remain concerned. Here's what the Japanese government is doing, and why.

Why is there water at the Fukushima plant? After the 2011 Tohoku earthquake and tsunami, several reactors melted down at the Fukushima Daiichi nuclear power plant. To avert further disaster, workers flooded the reactors with water, and that water quickly became highly contaminated. The plant is now offline and the reactors are defunct, but they still need to be cooled, which is why waste water continues to accumulate. In the years since the accident, groundwater has also filtered into the site, and some of it has become contaminated as well. Dealing with all this radioactive water has been a huge technical challenge for the Japanese government. Currently, some 350 million gallons are being stored in more than one thousand tanks on site, according to Japanese authorities. The tanks are nearing capacity and the site can't fit any more, so some of the water needs to be released, according to the government.

Japan has built an elaborate system to filter out radioactive contamination from the water. But some forms of radiation cannot be filtered. PHILIP FONG/AFP via Getty Images

Can't they just filter the radioactive particles out of the water? The government has been working on a complex filtration system that removes most of the radioactive isotopes from the water. Known as the Advanced Liquid Processing System (or ALPS, for short), it can remove several different radioactive contaminants from the water. The authorities have used ALPS and other systems to remove some of the most hazardous isotopes, like Cesium-137 and Strontium-90. But there's a radioactive isotope that they cannot filter out: tritium. Tritium is an isotope of hydrogen, and hydrogen is part of the water itself (H2O). So it is impossible to create a filter that could remove the tritium.

So how does the Japanese government plan to release this water safely? There are a couple of parts to the plan. First, they are going to dilute the water with seawater, so that there's a lot less tritium in every drop. The government says they will bring tritium levels well below all safety limits, and below the level released by some operating nuclear plants. Second, they're taking that diluted water and passing it through a tunnel under the seafloor to a point off the coast of Fukushima in the Pacific Ocean. That will dilute it further. Finally, they are going to do this slowly. It will take decades to empty all these tanks.
South Korea's main opposition Democratic Party members hold electric candles and a sign reading "No Fukushima nuclear contaminated water!" during a rally against Japan's plan on Wednesday. Other Pacific nations are also worried by the release. JUNG YEON-JE/AFP via Getty Images

Do others think this process is safe? The Japanese government maintains that, especially when compared to some of the other radioactive material at the site, tritium isn't all that bad. Its radioactive decay is relatively weak, and because it's part of water, it actually moves through biological organisms rather quickly. And its half-life is twelve years, so unlike elements such as uranium-235, which has a half-life of 700 million years, it won't be in the environment all that long. Given all that, the government believes that this is the safest option available. The International Atomic Energy Agency has peer-reviewed this plan and believes it is consistent with international safety standards. The IAEA also plans to conduct independent monitoring to make sure that the discharge is done safely. "The risk is really, really, really low. And I would call it not a risk at all," says Jim Smith, a professor of environmental science at the University of Portsmouth. He's spent the past few decades studying radioactivity in waterways after nuclear accidents, including at Chernobyl. "We've got to put radiation in perspective, and the plant release — if it's done properly — then the doses that people get and the doses that the ecosystem get just won't be significant, in my opinion," Smith says. Edwin Lyman is the director of nuclear power safety at the Union of Concerned Scientists in Washington, D.C. He says that out of the limited options Japan has for this waste water, none of them are good, but: "In my view, I think that their current plan, unfortunately, is probably the least bad of a bunch of bad options," he says. "The idea of deliberately discharging hazardous substances into the environment, into the ocean is repugnant," Lyman says. "But unfortunately, if you do look at it from the technical perspective, it's hard to argue that the impacts of this discharge would be worse than those that are occurring at nuclear power plants that are operating worldwide." But not everyone agrees that discharging the water is the best option. Ken Buesseler, a senior scientist at the Woods Hole Oceanographic Institution, thinks it would have been better to keep the contaminated water on land "where it's much easier to monitor." Options could have included mixing it into concrete to immobilize it. Buesseler doesn't think the water will pose a risk across the Pacific. "We don't expect to see widespread direct health effects, either on humans or on marine life," he says. But he does think that non-tritium contaminants missed by the ALPS system could build up over time near the shore. "Nearshore in Japan could be affected in the long term because of accumulation of non-tritium forms of radioactivity," he says. That could ultimately hurt fishermen in the area. Moreover, Buesseler worries about the message sent to other nations, who may be eager to dispose of nuclear waste at sea.
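The half-life argument is easy to quantify. Using the roughly 12-year half-life quoted above, the fraction of tritium remaining after t years is 0.5 raised to the power t/12:

```python
# Fraction of tritium remaining after t years, given a ~12-year half-life.
HALF_LIFE_YEARS = 12
for t in (12, 24, 60, 120):
    fraction = 0.5 ** (t / HALF_LIFE_YEARS)
    print(f"after {t:>3} years: {fraction:.1%} of the original tritium remains")
```

After 120 years, less than 0.1 percent of the released tritium is left, which is the contrast the government draws against isotopes with half-lives measured in millions of years.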
How are other nations responding to Japan's decision? Other nations have expressed concern over Japan's plan. South Korea has seen mounting public protests over the decision. Buesseler consults for the Pacific Islands Forum, a coalition of nations including the Marshall Islands and Tahiti that are also apprehensive about Japan's decision. He notes that many of these countries were subjected to high levels of radioactive fallout as a result of atmospheric nuclear tests during the Cold War. "There are islands they can't return to...because of legacy contamination," Buesseler says. Moreover, "they're suffering in many ways from climate change and sea level rise more than the rest of the world," he says. From their perspective, Japan's release into the Pacific "is just one insult, environmentally, among others."
Environmental Science
Researchers have known for decades that orcas across the North Pacific have harmful pollutants in their system. Now, a new study reveals orcas in the western North Atlantic, including those in the Arctic, are significantly more contaminated than animals in the east—a finding that “shocked” study leader Anaïs Remili, a postdoctoral researcher at McGill University in Montreal. The research strongly points to their diet playing a major role in the level of pollutants, rather than their location. The study looked at the presence of persistent organic pollutants, or toxic chemicals that degrade slowly and accumulate in the body, in the blubber of orcas across the North Atlantic. These pollutants, relics of industrial and agricultural processes, “have a nasty tendency to bind to fat,” says Remili, whose study was published in October in the journal Environmental Science & Technology. These chemicals weaken orcas’ immune systems, disrupt their endocrine function, impede growth and brain development, and even interfere with reproduction. Contaminants amplify as they move up through the food chain, and the orcas that consume top predators—for example, those that primarily eat other marine mammals rather than fish—are most polluted. Thanks to their high body fat and position as apex predators, orcas are some of “the most contaminated animals on the planet,” Remili says. Her earlier research showed that eastern North Atlantic orcas primarily feed on herring; mid-North Atlantic orcas feed on seals and mackerel; and western North Atlantic orcas feed on baleen whales, porpoises, belugas, narwhals, and seals. It makes sense that western North Atlantic orcas would have higher pollutant levels, due to their diet, but “you would expect less contaminants overall in the Arctic compared to industrialized areas,” such as off the east coast of North America, Remili says. She was also surprised to discover that some orcas had a pollutant concentration of more than 90 parts per million, which is more than double the maximum threshold determined to cause reproductive failure in marine mammals. “We’ve really come to learn that you are what you eat,” says Peter Ross, senior scientist and healthy waters program director at the Raincoast Conservation Foundation in British Columbia, who wasn’t involved in the study. “The top of the food chain, as exemplified by these long-lived killer whales, is extremely vulnerable.”

A ‘nail in the coffin’

For the study, researchers collected blubber samples from 162 orcas of varying ages and sexes across the North Atlantic, including the Canadian Arctic, Greenland, Iceland, Norway, and the Faroe Islands. The samples were obtained between 2008 and 2022 through various methods—some were taken from boats via harmless biopsy-collecting darts, and others were collected from harvested or stranded individuals. The team then analyzed some of the samples’ contaminant levels in the lab. Ross, whose research in 2000 first established that orcas carried pollutants, wonders to what extent age might have influenced the apparent higher pollutant levels in the western North Atlantic orcas. Older orcas have more pollutants in their system after “accumulating a lifetime of contaminants,” over lifespans possibly as long as 90 years. Orca calves are also particularly susceptible since they nurse; a calf “is actually one step higher in the food chain than its mother,” Ross says.
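Ross's "one step higher in the food chain" remark is the essence of biomagnification, which a toy calculation makes vivid. The baseline concentration and per-level magnification factor below are hypothetical, chosen only to show how quickly concentrations compound up a food chain:

```python
# Toy biomagnification ladder: each trophic step multiplies tissue
# concentration by a fixed factor. Baseline and factor are hypothetical.
baseline_ppm = 0.05
factor_per_level = 5
for level, organism in enumerate(["plankton", "herring", "seal", "orca"]):
    concentration = baseline_ppm * factor_per_level ** level
    print(f"{organism:8s} ~{concentration:6.2f} ppm")
```

Three trophic steps at a fivefold factor already yield a 125-fold enrichment, which is why mammal-eating orcas carry far heavier contaminant loads than fish-eaters.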
Overall, he says this study is another “nail in the coffin” showing the effects of these persistent chemicals in the environment. “The oceans are not limitless, nor is the planet.” While scientists lack evidence to determine whether these pollutants are affecting reproductive rates now, this will likely affect them in years to come. The contaminants’ impacts are compounded by other pressures, including noise disturbance and the low availability of quality prey, and some populations “are not going to be able to recover and/or grow over time,” says Tanya Brown, a research scientist at Canada’s Fisheries and Oceans department studying the effects of contaminants on orca health. With only 73 animals left in the fish-eating southern resident killer whale population, which lives off the Pacific Northwest, “those cumulative pressures are ultimately leading to [its] demise, potentially.” ‘Majestic animals’ In the 1940s, an industrial boom that relied on pesticides, coolants, and flame retardants spread many of these pollutants throughout the environment, according to the U.S. Environmental Protection Agency. In 2001, the United States and more than 90 other countries signed a United Nations treaty agreeing to stop using certain chemicals and to destroy stockpiles, but many of these remain, likely leaking pollutants. Even though many of these chemicals were banned more than 50 years ago, they’re still “wreaking havoc” on orcas’ health, Brown says. As climate change worsens, so might orca pollution. For instance, warmer Arctic waters may draw more orcas north, where they’ll feed on high-fat marine mammals, Remili speculates. “It tells us that we need to start acting now,” Remili says. She calls on countries to destroy toxic waste sitting in warehouses around the world and prevent new contaminants from being released. “Killer whales are majestic animals,” she says. “If we don't have killer whales anymore, our ecosystems are going to be completely out of balance.”
Environmental Science
Even as the world slowly begins to decarbonize industrial processes, achieving lower concentrations of atmospheric carbon requires technologies that remove existing carbon dioxide from the atmosphere — rather than just prevent the creation of it. Typical carbon capture catches CO2 directly from the source of a carbon-intensive process. Ambient carbon capture, or "direct air capture" (DAC), on the other hand, can take carbon out of typical environmental conditions and serves as one weapon in the battle against climate change, particularly as reliance on fossil fuels begins to decrease and with it, the need for point-of-source carbon capture. New research from Northwestern University shows a novel approach to capture carbon from ambient environmental conditions that looks at the relationship between water and carbon dioxide in systems to inform the "moisture-swing" technique, which captures CO2 at low humidities and releases it at high humidities. The approach incorporates innovative kinetic methodologies and a diversity of ions, enabling carbon removal from virtually anywhere. The study was published today in the journal Environmental Science & Technology. "We are not only expanding and optimizing the choice of ions for carbon capture, but also helping unravel the fundamental underpinnings of complex fluid-surface interactions," said Northwestern's Vinayak P. Dravid, a senior author on the study. "This work advances our collective understanding of DAC, and our data and analyses provide a strong impetus to the community, for theorists and experimentalists alike, to further improve carbon capture under practical conditions." Dravid is the Abraham Harris Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering and director of global initiatives at the International Institute for Nanotechnology. Ph.D. students John Hegarty and Benjamin Shindel were the paper's co-first authors. Shindel said the idea behind the paper came from a desire to use ambient environmental conditions to facilitate the reaction. "We liked moisture-swing carbon capture because it doesn't have a defined energy cost," Shindel said. "Even though there's some amount of energy required to humidify a volume of air, ideally you could get humidity 'for free,' energetically, by relying on an environment that has natural dry and wet reservoirs of air close together." The group also expanded the number of ions used to make the reaction possible. "Not only have we doubled the number of ions that exhibit the desired humidity-dependent carbon capture, we have also discovered the highest-performing systems yet," Hegarty said. In recent years, moisture-swing capture has taken off. Traditional carbon capture methods use sorbents to capture CO2 at point-of-source locations, and then use heat or generated vacuums to release CO2 from the sorbent. It comes with a high-energy cost. "Traditional carbon capture holds onto CO2 tightly, which means it takes significant energy to release it and reuse it," Hegarty said. It also doesn't work everywhere, Shindel said. Agriculture, concrete and steel manufacturers, for example, are major contributors to emissions but take up large footprints that make it impossible to capture carbon at a single source. Shindel added that wealthier countries should be attempting to get below zero emissions as developing countries, which rely more on the carbon economy, ramp down CO2 production.
Another senior author, chemistry professor Omar Farha, has experience exploring the role of metal–organic framework (MOF) structures for diverse applications, including CO2 capture and sequestration. "DAC is a complex and multifaceted problem that requires an interdisciplinary approach," Farha said. "What I appreciate about this work is the detailed and careful measurements of complex parameters. Any proposed mechanism must explain these intricate observations." Researchers in the past have zeroed in on carbonate and phosphate ions to facilitate moisture-swing capture and have specific hypotheses relating to why these specific ions are effective. But Dravid's team wanted to test a wider breadth of ions to see which were the most effective. Overall, they found that ions with the highest valency — mostly phosphates — were the most effective. Working down a list of polyvalent ions, they ruled out some and identified new ions that work for this application, including silicate and borate. The team believes that future experiments, coupled with computational modeling, will help better explain why certain ions are more effective than others. There are already companies working to commercialize direct air carbon capture, using carbon credits to incentivize companies to offset their emissions. Many are capturing carbon that would already have been captured through activities such as modified agricultural practices, whereas this approach unambiguously sequesters CO2 directly from the atmosphere, where it could then be concentrated and ultimately stored or reused. Dravid's team plans to integrate such CO2-capturing materials with their earlier porous sponge platform, which has been developed to remove environmental toxins including oil, phosphates and microplastics.
Environmental Science
A framework for screening pharmaceuticals and personal care products in landfill leachates

The prevalence of pharmaceuticals and personal care products (PPCPs) in the environment has generated increasing concern due to the potential threats they pose to the ecosystem and human health. Landfill leachate is an important source of PPCPs in water; however, it has rarely been involved in source apportionment due to the lack of indicator PPCPs (i-PPCPs) for landfill leachates. A team of researchers from East China University of Science and Technology provides the first systematic framework for identifying i-PPCPs for landfill leachates based on wide-scope target monitoring of PPCPs. The number of target PPCPs increased from fewer than 20 in previous studies to 68. Their analysis was published in the journal Frontiers of Environmental Science & Engineering on September 20, 2023. PPCPs consist of different classes of compounds, including antibiotics, adrenergic agents, anthelmintics, anticoagulants, antidepressants, hypoglycemic agents, and lipid regulators. They are continuously discharged from various emission sources. Because of this, PPCP source identification is indispensable for effective control of PPCP discharge, to reduce the risks to aquatic environments. Numerous approaches have been developed to track PPCPs, and indicator-based methods have been applied widely for source identification. To date, there have been a few studies on screening indicators, the majority of which have been based on several criteria: concentration, detection frequency, and detection ratio (defined as measured concentration divided by the limit of quantification). In most cases, compounds chosen as indicators due to their higher concentrations and higher detection frequency were not investigated to verify their source-specificity. Recently, investigations on PPCP occurrence and characteristics in municipal solid waste (MSW) landfills have indicated that landfill leachates are an underrecognized source of PPCPs. Landfill leachate that unintentionally reaches surrounding aquatic environments could pose high environmental risks. Notably, in most cases, landfill leachate is not the only source of PPCPs in the adjacent region. Rural areas, where MSW landfills are usually located, can also be contaminated by domestic wastewater, livestock wastewater and other potential PPCP sources. Source apportionment, therefore, is an effective approach to identify where the PPCPs in surface water and groundwater originated. Unfortunately, while there are many studies on indicators for other emission sources that can be used for source apportionment, indicators for landfill leachate are limited. The work of Professor Xia Yu's team fills this gap. In this study, the research team developed a systematic framework for the identification of indicator PPCPs (i-PPCPs) in raw landfill leachate samples. A total of 68 PPCPs were simultaneously analyzed in leachate samples collected from an MSW landfill in Shanghai, China. Principal component analysis (PCA) was conducted to identify PPCPs of high concern according to occurrence, exposure potential, and ecological effect, to ensure the practicality of using the proposed indicators in the aquatic environment. Finally, the source-specificity and representativeness of the i-PPCPs were verified by comparison to other emission sources.
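As a rough illustration of the PCA step described above, the sketch below runs a principal component analysis on a hypothetical samples-by-compounds concentration matrix. The data are random placeholders, not measurements from the study; with real leachate measurements, the compounds loading strongly on the leading components would be the candidates for follow-up as potential indicators:

```python
# Illustrative PCA on a hypothetical leachate dataset
# (30 samples x 68 PPCPs); data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
concentrations = rng.lognormal(size=(30, 68))    # 30 leachate samples, 68 PPCPs

# Log-transform and standardize, a common preprocessing step for
# concentration data spanning orders of magnitude.
scaled = StandardScaler().fit_transform(np.log10(concentrations))
pca = PCA(n_components=2).fit(scaled)

# Compounds that load strongly on the leading components would be
# candidates for follow-up as potential indicator PPCPs.
loadings = pca.components_[0]
top_compounds = np.argsort(np.abs(loadings))[-5:]
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
print("highest-loading compound indices:", top_compounds)
```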
By applying the screening framework with statistical analysis, researchers can use the results to implement source apportionment in the vicinity of landfills. More information: Xiping Kan et al, Screening of indicator pharmaceuticals and personal care products in landfill leachates: a case study in Shanghai, China, Frontiers of Environmental Science & Engineering (2023). DOI: 10.1007/s11783-023-1716-y Provided by Frontiers Journals
Environmental Science
Ohio train derailment, clean-up resulted in high levels of some gases, study shows A freight train carrying industrial chemicals derailed near East Palestine, Ohio, in February 2023, and to avoid explosions, authorities conducted a controlled release and burned the cars' contents. Residents were worried about their health and the environment, so researchers have been assessing the local air quality with stationary and mobile sampling methods. Now, in ACS' Environmental Science & Technology Letters, they report that some gases, including acrolein, reached levels that could be hazardous. After the derailment, disaster response teams emptied and burned the cargo. Because the tanker cars carried many volatile compounds, such as vinyl chloride and butyl acrylate, localized air-quality-related evacuation orders were issued. However, after returning to their homes, some residents reported symptoms similar to those that typically result from exposure to hazardous levels of airborne compounds. So, Albert Presto and colleagues wanted to monitor air quality and identify the potential health risks in and around East Palestine. The researchers downloaded air-quality monitoring data from two U.S. Environmental Protection Agency (EPA) stations at fixed locations. And to map patterns of airborne compounds, they drove a cargo van around the area for two days in late February. Inside the van was a mass spectrometer, which was used to identify a wide array of gases, upwind and downwind of the accident site. Then the team calculated the health risks for the gases that were above average or background levels. From the EPA data, the team determined that the levels of nine of the 50 gases initially rose above their normal baselines, especially acrolein, a respiratory irritant. If these nine compounds remained at those levels, the ambient air could pose health risks, say the researchers. Yet, through February, the amounts of many pollutants decreased significantly. In fact, vinyl chloride declined to concentrations below long-term limits of health concern. Mobile monitoring detected changes over time and space that the stations could not. For instance, during the day, acrolein and butyl acrylate were up to six times higher near the accident site than background levels, but at night they dropped to the background amount. These results indicate the importance of complementary stationary and mobile air-quality assessment techniques, the researchers say, and both should continue as cleanup activities proceed. More information: Oladayo Oladeji et al, Air Pollutant Patterns and Human Health Risk following the East Palestine, Ohio, Train Derailment, Environmental Science & Technology Letters (2023). DOI: 10.1021/acs.estlett.3c00324 Journal information: Environmental Science & Technology Letters Provided by American Chemical Society
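Screening-level health-risk calculations of the kind the team performed often boil down to a hazard quotient: a measured concentration divided by a reference concentration, with values above 1 flagging potential concern. The sketch below is illustrative only; the reference values and readings are hypothetical stand-ins, not the study's or the EPA's numbers:

```python
# Screening-level hazard quotient: measured concentration divided by a
# reference concentration; HQ > 1 flags potential concern. All numbers
# below are hypothetical, not values from the study or the EPA.
reference_ug_m3 = {"acrolein": 0.4, "vinyl chloride": 100.0}
measured_ug_m3 = {"acrolein": 1.2, "vinyl chloride": 20.0}

for gas, measured in measured_ug_m3.items():
    hq = measured / reference_ug_m3[gas]
    status = "potential concern" if hq > 1 else "below screening level"
    print(f"{gas}: HQ = {hq:.2f} ({status})")
```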
Environmental Science
A rapid spike in cases of a potentially deadly, drug-resistant fungus has concerned public health officials across the nation. But a team of Southern Nevada researchers hope their new study applying wastewater surveillance can help health officials get a step ahead of this emerging global public health threat. The Pathogen Problem Candida auris is a fungus that can cause serious infections, particularly in patients who are immunocompromised, have pre-existing health conditions, are in long-term healthcare settings, or are undergoing treatment with invasive medical devices such as a catheter. Infection prevention and control is challenging because the fungus can grow on both dry and moist surfaces such as furniture, door handles, clothing, and medical equipment in healthcare facilities. It's also shown resistance to many commonly used surface disinfectants and all three types of antifungal medicines. More than 1 in 3 patients with invasive C. auris infections -- which can affect the blood, heart, or brain -- dies. What's more, Nevada -- one of six states with recently high burdens of C. auris -- last year experienced outbreaks across multiple healthcare facilities and logged the most U.S. cases of the fungal infection. The Silver State experienced a 16-fold increase from just 24 cases in 2021 to 384 cases in 2022, according to the Centers for Disease Control and Prevention (CDC). Cases have also been reported in dozens of other countries. What They Found A research team led by Casey Barber, a UNLV School of Public Health doctoral student and Southern Nevada Water Authority (SNWA) graduate intern, recently published a study in the journal Environmental Science & Technology that analyzed 10 weeks' worth of wastewater samples from seven Southern Nevada sewersheds. The scientists detected the genetic material of C. auris in at least one untreated sewage sample from each Southern Nevada wastewater treatment facility and nearly 80% of all untreated sewage samples in the study. The sewersheds serving healthcare facilities involved in the outbreak also showed higher detection frequencies for the fungus. Researchers noted that no fungus was detected in untreated sewage samples from a wastewater treatment facility in Utah, an area with no known C. auris cases at the time. The fungus was not detected in the Las Vegas Wash, which contains treated wastewater effluent, nor in Lake Mead, indicating that there is no sign that C. auris poses a risk to drinking water. "These results show that wastewater surveillance may help monitor the spread of C. auris and could serve as an early warning system for public health action," Barber said. Other Takeaways The first human case of C. auris was reported in 2009, but it's become more prevalent in recent years. The fungus is often spread via contaminated surfaces or skin-to-skin contact with infected individuals, including with those who are asymptomatic. Scientists called the Southern Nevada fungus flare-up -- which erupted in August 2021 and has now affected over 30 healthcare facilities -- one of the largest recent outbreaks of healthcare-associated C. auris in the U.S. The research team formally launched C. auris-specific monitoring and data collection in late June 2022, as part of a larger ongoing UNLV wastewater surveillance collaboration with SNWA. In addition to implications for large-scale C. 
auris detection and prevention, researchers said the study is groundbreaking in its progress towards helping establish new procedures for sewage sample processing, preparation, and analysis to look for C. auris. Wastewater surveillance, they said, may provide a more accurate estimate of C. auris prevalence than traditional public health surveillance methods, in part because traditional methods may not accurately identify C. auris, leading to delays in targeted intervention measures. The team also anticipates that their previously established approach to monitoring COVID-19 levels in wastewater could be applied to watching for mutations and new strains of C. auris. "Detection of Candida auris through wastewater surveillance has already prompted expanded screenings in Southern Nevada healthcare facilities in an effort to prevent larger outbreaks," said SNWA principal research microbiologist Daniel Gerrity. "This demonstrates how wastewater surveillance can be applied to emerging public health threats beyond COVID-19."
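The core surveillance statistic the study reports, detection frequency per sewershed, is straightforward to compute. A minimal sketch with hypothetical detection results (the sewershed names and sample outcomes are invented):

```python
# Detection frequency per sewershed, with hypothetical qPCR results
# (True = C. auris genetic material detected in an untreated sample).
samples = {
    "sewershed_A": [True, True, False, True],
    "sewershed_B": [False, False, True, False],
}
for sewershed, detections in samples.items():
    freq = sum(detections) / len(detections)
    print(f"{sewershed}: detected in {sum(detections)}/{len(detections)} "
          f"samples ({freq:.0%})")
```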
Environmental Science
This story originally appeared on Grist and is part of the Climate Desk collaboration.

Appalachian states like Kentucky have a long, turbulent history with coal and mountaintop removal—an extractive mining process that uses explosives to clear forests and scrape soil in order to access underlying coal seams. For years, researchers have warned that land warped by mountaintop removal may be more prone to flooding, due to the resulting lack of vegetation to prevent runoff. Without trees to buffer the rain and soil to soak it up, water pools together and heads for the least resistant path—downhill.

In 2019 a pair of Duke University scientists conducted an analysis of flood-prone communities in the region for Inside Climate News, identifying the most “mining-damaged areas.” These included many of the same Eastern Kentucky communities that saw river levels rise by 25 feet in just 24 hours this past week. “The findings suggest that long after the coal mining stops, its legacy … could continue to exact a price on residents who live downstream from the hundreds of mountains that have been leveled in Appalachia to produce electricity,” wrote Inside Climate News’ James Bruggers at the time.

Now, in 2022, those findings feel tragically prescient. From July 25 to 30, Eastern Kentucky saw a mixture of flash floods and thunderstorms bringing upwards of 4 inches of rain per hour, swelling local rivers to historic levels. To date, the flooding has claimed at least 37 lives.

Nicolas Zégre, director of West Virginia University’s Mountain Hydrology Laboratory, studies the hydrological impacts of mountaintop removal and how water moves through the environment. While it’s too early to know how much the area’s history of mining contributed to this year’s flooding, he said he thinks of Appalachia as “climate zero,” a region built on the coal industry, which contributed to rising global temperatures and increased carbon in the atmosphere.

“Whether it was the 2016 flood in West Virginia or the recent floods in Kentucky, there’s more intense rainfall due to warmer temperatures,” Zégre said, “and then that rainfall was falling on landscapes that have had their forests removed.”

To some regional scientists, strip mining isn’t the only factor behind increased flooding. A 2017 Environmental Science and Technology study looked at how mountaintop-removal mining might actually help store precipitation. When a mountaintop is rocked by explosions, leftover material is packed into areas known as valley fills. According to the authors, “mined watersheds with valley fills appear to store precipitation for considerable periods of time.”

The study did note that the material in valley fills often contains toxic chemicals and heavy metals resulting from the mining process. These compounds are subsequently washed into streams during heavy rain, a process known as alkaline mine drainage. According to a 2012 study, also from Environmental Science and Technology, alkaline mine drainage has polluted as much as 22 percent of all streams in central Appalachia.
Environmental Science
At least two massive underwater blasts caused this week's damage to the Baltic Sea gas pipelines and the resulting methane leaks.

The ruptures on the Nord Stream natural gas pipeline system under the Baltic Sea have led to what is likely the biggest single release of climate-damaging methane ever recorded, the United Nations Environment Programme (UNEP) has said. A huge plume of highly concentrated methane, a greenhouse gas far more potent but shorter-lived than carbon dioxide, was detected in an analysis this week of satellite imagery by researchers associated with the UNEP's International Methane Emissions Observatory, or IMEO, the organisation said on Friday. "This is really bad, most likely the largest emission event ever detected," Manfredi Caltagirone, head of the IMEO for UNEP, told Reuters. "This is not helpful in a moment when we absolutely need to reduce emissions." Researchers at GHGSat, which uses satellites to monitor methane emissions, estimated the leak rate from one of four rupture points was 22,920kg (around 50,000 lbs) per hour. That is equivalent to burning about 630,000 pounds (around 286,000kg) of coal every hour, GHGSat said in a statement. "This rate is very high, especially considering it's four days following the initial breach," the company said. The IMEO tweeted on Saturday that new data appear to indicate the leakage of methane is diminishing. "New analysis of data provided by the satellite Sentinel2 today indicates a significant reduction in the estimated diameter of the methane plume – from 520m to 290m. A similar reduction is also observed in the estimated concentration of methane leaked in the pipeline rupture," the IMEO said. At least two underwater blasts, likely packing a force "corresponding to an explosive load of several hundred kilos" of explosives, caused this week's leaks in Baltic Sea gas pipelines, the governments of Denmark and Sweden said. The blasts measured 2.3 and 2.1 on the Richter scale, resulting in four leaks, venting gas into the sea. Two of the leaks are in Danish territory; the other two are in Swedish territory. In a statement on Friday to the UN Security Council, the two countries noted that the gas plumes being vented were disrupting air and sea vessels and could be dangerous to marine life. Additionally, greenhouse gas is being released into the environment. The leaks could continue through at least Sunday. "All available information indicates that those explosives are the results of a deliberate act. Such acts are unacceptable, endanger international security and give cause for our deep concern," the statement read. Sweden's coastguard also reported on Friday that the amount of gas leaking from the breach in its exclusion zone had diminished, after observing the situation from the air. The coastguard also pointed out that ships in the area should now keep a safe distance of 7 nautical miles (just under 13km), rather than 5 nautical miles, as previously requested.
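GHGSat's coal equivalence can be roughly reproduced from two assumed conversion factors: a CO2-equivalence (warming potential) factor of about 30 for methane, and about 2.4 kg of CO2 released per kg of coal burned. Both factors are illustrative assumptions, not numbers from the article or GHGSat's actual method:

```python
# Rough consistency check of the quoted coal equivalence. The methane
# warming-potential factor and the CO2 yield of coal are assumptions.
ch4_rate_kg_hr = 22_920          # leak rate quoted in the article
gwp_ch4 = 30                     # assumed CO2-equivalence factor for methane
co2_per_kg_coal = 2.4            # assumed kg of CO2 per kg of coal burned

co2e_kg_hr = ch4_rate_kg_hr * gwp_ch4
coal_kg_hr = co2e_kg_hr / co2_per_kg_coal
print(f"~{coal_kg_hr:,.0f} kg of coal per hour (~{coal_kg_hr * 2.205:,.0f} lb)")
```

Under these assumptions the result lands near 286,000 kg (about 630,000 lb) of coal per hour, matching the figures GHGSat reported.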
‘Most wasteful’ The total amount of methane leaking from the Gazprom-led pipeline system may be higher than from a major leak that occurred in December from offshore oil and gas fields in Mexican waters of the Gulf of Mexico, which spilled about 100 tonnes of methane per hour, Caltagirone said. The Gulf of Mexico leak, also viewable from space, ultimately released about 40,000 tonnes of methane over 17 days, according to a study conducted by the Polytechnic University of Valencia and published in the journal Environmental Science & Technology Letters. Improved satellite technology has rapidly enhanced the ability of scientists to find and analyse greenhouse gas emissions in recent years, something some governments hope will help companies detect and prevent methane emissions. The major leaks that suddenly erupted in the Nord Stream gas pipelines that run from Russia to Europe have generated plenty of theories but few clear answers about who or what caused the damage. Both Russia and the European Union have suggested the ruptures were caused by saboteurs. Europe and the United States have heaped sanctions on Moscow in retaliation for its invasion of Ukraine, raising worries the Kremlin will seek to deprive Europe of crucial energy supplies leading into the winter. Caltagirone said, whatever the cause, the damage to the pipeline posed a problem beyond energy security. “This is the most wasteful way to generate emissions,” he said.
Environmental Science
Vinyl chloride entered the spotlight after the Feb. 3 derailment of a freight train in East Palestine, Ohio. But the hazardous substance has been around for decades and is everywhere – from buildings and vehicle upholstery to children's toys and kitchen supplies – and factories have been emitting the EPA-designated toxic chemical into the air for years. The train that derailed had the manmade and volatile compound on board, prompting temporary evacuations amid concerns it could quickly impact people in the area. Then when officials decided to burn it, there were also concerns it could release phosgene, a gas that can be highly lethal and was used as a chemical weapon in WWI. But the derailment isn't the first time vinyl chloride has alarmed experts. They've been concerned about its potential impacts for decades. On Jan. 2, the U.S. Department of Health and Human Services published a draft toxicological profile for the substance. In it, experts say that the volatile compound, "used almost exclusively by the plastics industry," has "leached into groundwater from spills, landfills, and industrial sources," and that people who live around plastic manufacturing facilities "may be exposed to vinyl chloride by inhalation of contaminated air." "This disaster is really a wakeup call," Jimena Díaz Leiva, the science director for the nonprofit Center for Environmental Health, told CBS News. "...There needs to be a lot more regulatory oversight and action to address not just the safety and the actual transport around these chemicals, but also just stemming our production of all these chemicals." Díaz Leiva also said its risk has been underestimated – both in terms of its potential toxins and the greenhouse gas emissions involved in its production. And in the U.S., there are dozens of places where such exposure is possible.

The base for "poison plastic"

Vinyl chloride is the "essential building block of PVC plastic," Díaz Leiva said. "It's an incredibly dirty process that emits a lot of chemicals and uses a lot of chemicals in the manufacturing process, resulting in a lot of worker exposures and also exposures of people in frontline and fenceline communities," Díaz Leiva, who got her Ph.D. in environmental science, policy and management, said. "...PVC is called the poison plastic." The CEH published a report on polyvinyl chloride (PVC), a type of plastic used in pipes, buildings, packaging film, flooring and more, in 2018, saying, "the bottom line is there is no way to safely manufacture, use, or dispose of PVC products." The problem begins at vinyl chloride's origins. It's generated from ethane, which is obtained through fracking natural gas, a process that's grown significantly since 2013 and, when done, emits the greenhouse gas methane – a major driver of climate change. PVC, according to a 2020 study, has a "higher potential in global warming than other plastics" due to its high energy consumption and CO2 emissions. The U.S. Energy Information Administration said ethane production hit a monthly record last year of more than 2.4 million barrels per day. The agency expects production to hit 2.7 million barrels per day this year, as the global PVC market is expected to become a $56.1 billion industry within the next 3 years. A 2022 study also found that U.S. PVC production emitted roughly 18 million metric tons of CO2 in 2020. According to the EPA's Toxics Release Inventory (TRI), which "tracks the management of certain toxic chemicals that may pose a threat to human health and the environment," there are 38 TRI facilities in 15 states – mostly around the Gulf of Mexico and the eastern U.S.
– that use vinyl chloride, emitting about half a million pounds of the substance every year. In 2021, there were 428,523 pounds of the substance released, according to the EPA. As of 2021, vinyl chloride ranks as one of the most released chemicals in the U.S.: out of 531 chemicals reported to the agency, the substance ranks 117th, where a rank of one indicates the largest releases. Essentially all of those emissions came from the chemical industry in 2021 and were released into the air, and just five facilities made up more than half of those releases. The top emitter, Formosa Plastics Corp. Texas, sits along a bay leading into the Gulf of Mexico. It released more than 68,000 pounds of vinyl chloride into the air that year. These numbers, however, may be lower than what's true because not all facilities using the chemical compound are required to report to the EPA.

"An underestimated risk"

The emissions are known to have contributed to health issues in nearby communities. Mossville, Louisiana, a tiny town just west of Lake Charles that was founded by people who were formerly enslaved, has been historically plagued by manufacturing pollution. The area is surrounded by more than a dozen industrial facilities, including at least one working with vinyl chloride that has a history of violations and scores far above national and industry levels for factors contributing to health issues. In 2021, the site was fined more than $447,000 for violations including failure to ensure performance, management safety, mechanical integrity and record keeping, among other things. The area is part of what's known as "Cancer Alley." "It's a predominantly Black and Brown community. And a lot of the plastics manufacturing companies that are around there, these are the ones that are producing the same precursors that are getting us to PVC plastic and other types of plastics," Díaz Leiva said. Dr. Juliane Beier, assistant professor of medicine at the Pittsburgh Liver Research Center and an expert who contributed to the DHHS report, told CBS News those most at risk are occupational workers. But those in areas near PVC-producing factories could also face exposure. How much vinyl chloride people can be subjected to before suffering health effects is still being researched, and different agencies have set different limits and recommendations. The Occupational Safety and Health Administration, for example, says that workers should not be exposed to more than 1 ppm of vinyl chloride over an 8-hour period, or more than 5 ppm averaged over any period less than 15 minutes. The Agency for Toxic Substances and Disease Registry, however, sets its minimal risk levels – the estimate of how much someone can ingest without a noticeable health impact – much lower. Those who have been exposed for 14 days or less have an MRL of 0.5 ppm for inhalation, while those who have been exposed for 15 to 364 days have an MRL of 0.02 ppm. Once in the air outside, vinyl chloride dissipates within a few days, so emissions from PVC production don't necessarily pose a long-term or widespread impact. However, the ATSDR says areas near vinyl chloride manufacturing and processing plants, as well as waste sites and landfills, have seen a wide range of vinyl chloride concentrations. It's usually a range from "trace amounts to over 1 ppm," the agency says, but levels have gotten as high as 44 ppm around landfills.
Beier is currently researching exposure limits and the impact on livers, and told CBS News that at 0.08 ppm – less than the maximum threshold considered "safe" under OSHA standards – vinyl chloride could still impact health. The concentration at which it can impact health is also far lower than the concentration at which it is immediately detectable. The substance's odor threshold – the concentration at which most people can smell it – is 3,000 ppm in the air, according to the ATSDR. "We have shown experimentally – this is not in humans – that these lower concentrations will enhance liver disease that is either pre-existing or caused by other factors," she said. "And so that is one of my concerns...are there residents that have underlying liver disease?" When asked if there should be more concern about the hazards of vinyl chloride, Beier issued a swift response: "Yes." "We need to raise awareness that low levels of vinyl chloride that are currently considered safe may enhance underlying disease, this may be liver disease, but maybe also other disease," she said. "...But this is, I think, an underestimated risk." "The whole vinyl chloride story is absolutely, absolutely under-studied and definitely needs more investigation," Beier said.
Environmental Science
California is once again being deluged by atmospheric rivers. What are they, and will climate change make them worse? California is once again being deluged by atmospheric rivers that have unleashed major flooding across the state, with a 12th river scheduled to dump more precipitation the week of March 19. Experts from Northeastern explained in a story that originally ran in January what causes these flowing streams of vapor that are causing the record rains and deadly floods that have put millions of people under an evacuation order. They talked with Northeastern Global News about the science behind the weather events—and whether climate change and global warming will increase the intensity of rainfall associated with these rivers in the sky. Atmospheric rivers are responsible for ferrying fresh water from the warm tropics eastward to the Western United States, where the associated vapors condense into rain and sometimes snow. The 'Pineapple Express' "There are these ribbons of very moisture laden air that extend out of the tropics," says Samuel Munoz, assistant professor of marine and environmental science at Northeastern's Marine Science Center and Coastal Sustainability Institute. "When they collide with a coast, that's the dominant mechanism by which California and much of the Western U.S. and Pacific Northwest in Canada get rain," he says. Because the atmospheric rivers originate in Pacific tropical areas such as Hawaii, meteorologists dub them "the Pineapple Express," says Lindsay Lawrence, a Northeastern Ph.D. student who has bachelor's and master's degrees in meteorology. "Generally speaking, atmospheric rivers are most extreme during winter months, December, January and February," Lawrence says. Even in a normal season, atmospheric rivers can wreak havoc. Capable of being 300 miles wide and thousands of miles long, the moisture-laden streams can carry many times the volume of water of the Mississippi River, says Auroop Ganguly, director of the Sustainability & Data Sciences Laboratory at Northeastern. "When these massive waterfronts—the largest transporters of freshwater on the planet Earth—hit land, they usually result in massive rain and snow events leading often to devastating floods." Not all atmospheric river events are catastrophic, but events such as the Pineapple Express have the ability to dump an immense amount of rain and snow in a short amount of time, according to the National Oceanic and Atmospheric Administration. "We've had six storms in the last two weeks. This is the kind of weather you would get in a year and we compressed it just into two weeks," California Lt. Gov. Eleni Kounalakis said Wednesday, according to CNN, which reported that four more atmospheric rivers are supposed to hit the state in the next week or so. The new events are on top of the five that have already occurred and been associated with the deaths of 19 people, including a 5-year-old boy swept away in flood waters. "They have come one after another," Munoz says. "That's causing really huge amounts of water to be delivered all at once." The impact of climate change Atmospheric rivers may deliver even more rain in the future, scientists say. "Ongoing research suggests that they may get more intense in climate change," Ganguly says. As the atmosphere warms, it holds more moisture, Munoz says. As with hurricanes, some research links the increased moisture to heavier rainfall.
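The physics behind "warmer air holds more moisture" is the Clausius-Clapeyron relation, under which the atmosphere's moisture-holding capacity rises roughly 7% per degree Celsius of warming. That figure is a standard textbook approximation rather than one from the Northeastern researchers; a quick sketch:

```python
import math

def moisture_capacity_increase(delta_t_c: float, rate_per_c: float = 0.07) -> float:
    """Fractional rise in saturation vapor pressure after delta_t_c degrees C
    of warming, using the ~7% per degree Clausius-Clapeyron approximation."""
    return math.exp(rate_per_c * delta_t_c) - 1.0  # compounds rather than adds

for warming in (1.0, 2.0, 3.0):
    gain = moisture_capacity_increase(warming)
    print(f"+{warming:.0f} C of warming -> about {gain:.0%} more moisture capacity")
# +1 C -> ~7%, +2 C -> ~15%, +3 C -> ~23%
```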
Other scientists are looking into whether climate change is associated with changes in where atmospheric rivers make landfall, possibly moving southward, Munoz says. Atmospheric rivers have cold fronts and warm fronts located off a low pressure center, Lawrence says. As the rivers stream toward the coast, the cold front tries to catch up to the warm front, forming a narrow band of water vapor, she says. If the location of the low pressure shifts, the entire system will move around, Lawrence says. "To know where they are going to move is unfortunately hard to say right now." Munoz says it appears that catastrophic atmospheric river events occur every 100 to 200 years, with the phenomenon contributing to the Great Flood of 1862. The flood is believed to have killed 4,000 people in California, according to USA Today and other sources. Wildfires and drought are making floods worse The current flooding in California is being aggravated by historic wildfires and drought that are tied to climate change themselves. "When you burn a forest, the burning of organic compounds produces compounds that are hydrophobic, that repel water," in the form of ash, Munoz says. When rain falls on the ash it is less likely to seep into the soil and recharge groundwater, and more likely to run off toward streams and rivers and lakes, he says. The runoff contributes to flooding, Munoz says. Drought also poses a flooding risk because the soil can get so dry it loses its ability to absorb water, sort of like a too-dry sponge, he says. "When they're super dry, their ability to take up moisture is reduced." Planning for the future "California is sort of this world of extremes," Munoz says. "They do have prolonged periods of drought, and that's getting more severe. But they also have this history of really severe atmospheric river events that cause flooding. It's possible that in the future these kinds of events will become more severe." With access to water being such an issue in California, Munoz says he believes officials need to find ways to capture and store water during deluges as part of planning for the future. Right now, state and local California officials are concerned about saving lives and infrastructure. But after the storms are over, Munoz says officials will face important and continuing questions about water supply. "Are there ways we can store some of this water that right now there's an abundance of but will very quickly flow back out to sea?" he asked. "Is there some way to capture that?" Provided by Northeastern University
Environmental Science
Human DNA is everywhere. That's a boon for science, and an ethical quagmire On the beach. In the ocean. Traveling along riverways. In muggy Florida and chilly Ireland. Even floating through the air. We cough, spit, shed and flush our DNA into all of these places and countless more. Signs of human life can be found nearly everywhere, short of isolated islands and remote mountaintops, according to a new University of Florida study. That ubiquity is both a scientific boon and an ethical dilemma, say the UF researchers who sequenced this widespread DNA. The DNA was of such high quality that the scientists could identify mutations associated with disease and determine the genetic ancestry of nearby populations. They could even match genetic information to individual participants who had volunteered to have their errant DNA recovered. David Duffy, the UF professor of wildlife disease genomics who led the project, says that ethically handled environmental DNA samples could benefit fields from medicine and environmental science to archaeology and criminal forensics. For example, researchers could track cancer mutations from wastewater or spot undiscovered archaeological sites by checking for hidden human DNA. Or detectives could identify suspects from the DNA floating in the air of a crime scene. But this level of personal information must be handled extremely carefully. Now, scientists and regulators must grapple with the ethical dilemmas inherent in accidentally—or intentionally—sweeping up human genetic information, not from blood samples but from a scoop of sand, a vial of water or a person's breath. Published May 15 in Nature Ecology and Evolution, the paper by Duffy's group outlines the relative ease of collecting human DNA nearly everywhere they looked. "We've been consistently surprised throughout this project at how much human DNA we find and the quality of that DNA," Duffy said. "In most cases the quality is almost equivalent to if you took a sample from a person." Because of the ability to potentially identify individuals, the researchers say that ethical guardrails are necessary for this kind of research. The study was conducted with approval from the institutional review board of UF, which ensures that ethical guidelines are adhered to during research studies. "It's standard in science to make these sequences publicly available. But that also means if you don't screen out human information, anyone can come along and harvest this information," Duffy said. "That raises issues around consent. Do you need to get consent to take those samples? Or institute some controls to remove human information?" Duffy's team at UF's Whitney Laboratory for Marine Bioscience and Sea Turtle Hospital has successfully used environmental DNA, or eDNA, to study endangered sea turtles and the viral cancers they are susceptible to. They've plucked useful DNA out of turtle tracks in the sand, greatly accelerating their research program. The scientists knew that human eDNA would end up in their turtle samples and probably many other places they looked. With modern genetic sequencing technology, it's now straightforward to sequence the DNA of every organism in an environmental sample. The questions were how much human DNA there would be and whether it was intact enough to harbor useful information. The team found quality human DNA in the ocean and rivers surrounding the Whitney Lab, both near town and far from human settlement, as well as in sand from isolated beaches. 
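Duffy's point about screening human information out of shared sequence data can be made concrete with a toy sketch. This is not the team's pipeline; real workflows align reads against the human reference genome with dedicated tools, and every sequence and name below is invented purely to show the filtering step.

```python
# Toy illustration of "screening out" human reads from an eDNA sample before
# the data are shared. Real pipelines align reads to the human reference
# genome with dedicated aligners; this stand-in uses exact k-mer matching
# against a made-up "human" fragment purely to show the filtering step.

K = 8  # k-mer length (toy value)

def kmers(seq: str, k: int = K) -> set:
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

HUMAN_REFERENCE = "ATCGGCTAAGGCTTACCGGTTAACCGTA"  # invented stand-in
HUMAN_KMERS = kmers(HUMAN_REFERENCE)

def screen_reads(reads):
    """Keep only reads that share no k-mer with the 'human' reference."""
    return [r for r in reads if not (kmers(r) & HUMAN_KMERS)]

sample = [
    "ATCGGCTAAGGCTTAC",  # overlaps the invented human fragment -> removed
    "TTTTGGGGCCCCAAAA",  # no overlap -> retained and publishable
]
print(screen_reads(sample))  # ['TTTTGGGGCCCCAAAA']
```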
In a test facilitated by the National Park Service, the researchers traveled to part of a remote island never visited by people. It was free of human DNA, as expected. But they were able to retrieve DNA from voluntary participants' footprints in the sand and could sequence parts of their genomes, with permission from the anonymous participants. Duffy also tested the technique in his native Ireland. Tracing along a river that winds through town on its way to the ocean, Duffy found human DNA everywhere but the remote mountain stream where the river starts, far from civilization. The scientists also collected room air samples from a veterinary hospital. They recovered DNA matching the staff, the animal patient and common animal viruses. Now that it's clear human eDNA can be readily sampled, Duffy says it's time for policymakers and scientific communities to take issues around consent and privacy seriously and balance them against the possible benefits of studying this errant DNA. "Any time we make a technological advance, there are beneficial things that the technology can be used for and concerning things that the technology can be used for. It's no different here," Duffy said. "These are issues we are trying to raise early so policy makers and society have time to develop regulations." More information: David Duffy, Inadvertent human genomic bycatch and intentional capture raise beneficial applications and ethical concerns with environmental DNA, Nature Ecology & Evolution (2023). DOI: 10.1038/s41559-023-02056-2. www.nature.com/articles/s41559-023-02056-2 Journal information: Nature Ecology & Evolution Provided by University of Florida
Environmental Science
Eat almost anything. Sleep almost anywhere. These, it seems, are the secrets to surviving in the city as a wild animal. Among the species that dominate urban spaces—pigeons, cockroaches, rats, foxes—these are the most obvious characteristics successful city dwellers have. But they aren’t the only tactics for urban survival. A new study has uncovered four very different sets of traits that animals use to prosper in the city. “There isn’t one-size-fits-all for how different species or different taxa respond to urbanization,” says Amy Hahs of the Green Infrastructure Research Group at the University of Melbourne, who led the research. Understanding how different types of animals adapt to the city in different ways, and what drives these changes, could help us improve urban biodiversity, and with it the overall health of our urban environment. Biodiversity studies in cities tend to focus on which species dominate, not how they manage to do so. So the study’s research team set out to change this. Specifically, their ambition was to answer two questions: Is eating anything and sleeping anywhere the only way to succeed as an animal urbanite? And how does this vary across the globe? The researchers looked at four animal characteristics—diet, body size, mobility, and reproductive strategy—that can vary according to what a city has to offer and how flexible a species can be. By reaching out to experts who had previously published research on the traits of urban animals, and drawing together these researchers’ data sets, the team then built a bespoke mega-database to compare these four characteristics across more than 5,000 species found in nearly 400 cities around the world. The team was able to gather data for six groups of animals: amphibians, bats, bees, birds, carabid beetles, and reptiles. Unsurprisingly, they found flexibility is useful—the ability to move throughout large areas, eating a broad diet and keeping an open mind about nesting and resting places. They labeled animals in this group “mobile generalists,” with urban bats and carabid beetles tending to profit from adopting these traits. But it wasn’t the only strategy for success they found. In contrast, urban birds and bees often succeed by becoming “central place foragers.” These creatures have a fixed place to nest and rest, but they compensate for this site fidelity by broadening their diets. The next time you see a pigeon pecking at a scrap of food waste on a downtown street, you’ll be witnessing this in action. Reptiles and amphibians adopt a different strategy again: Faced with scarcer food, higher vulnerability to predators, road accidents, and pollution, they respond to urbanization by specializing their diets, moving around smaller areas, and reducing the size of their clutches. It makes sense: If the shelves are stacked with fewer but constant varieties of food, eating only one of them reduces competition with other species, while having fewer offspring means enough food for them all to grow well and be fitter. Known as “site specialists,” these species run the risk of ending up trapped. Because they don’t move around, if their food or habitat disappears, so do they. The team also hypothesized that there could be a fourth category: “mobile specialists”—animals that eat a very specific diet, and are able to easily travel to wherever they need to get it.
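The four strategies fall out of two trait axes: how far a species ranges and how broad its diet is. A toy classifier (our construction; the study itself drew on trait databases covering thousands of species) makes the quadrants explicit:

```python
# Toy quadrant classification of the four urban survival strategies described
# in the study. The thresholds and example trait scores are invented for
# illustration; the actual analysis used trait databases for 5,302 species.

def classify(mobility: float, diet_breadth: float) -> str:
    """Map two hypothetical 0-1 trait scores onto the four strategies."""
    mobile = mobility >= 0.5
    generalist = diet_breadth >= 0.5
    if mobile and generalist:
        return "mobile generalist"      # e.g., many urban bats, carabid beetles
    if not mobile and generalist:
        return "central place forager"  # e.g., many urban birds and bees
    if not mobile and not generalist:
        return "site specialist"        # e.g., many urban reptiles, amphibians
    return "mobile specialist"          # hypothesized, not found in the cities

examples = {"bat": (0.9, 0.8), "pigeon": (0.3, 0.9), "gecko": (0.2, 0.2)}
for species, (mobility, diet) in examples.items():
    print(species, "->", classify(mobility, diet))
```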
The researchers had seen such mobile specialists in other locations, for instance waterbirds living on wetlands, but didn’t encounter any in their urban study. Overall, the research looked at data from 72,086 plots in 379 cities in 48 countries, covering 5,302 species. Working at this global scale was important, for two reasons. First, studies about animals and urbanization usually only look at the evolution of one particular species, mostly plants or birds, in one specific location, and this doesn’t allow for comparisons across multiple groups of animals in multiple locations. Yet, Hahs explains, “biodiversity is diverse, and what has been observed in one context may not necessarily translate to another.” To make reliable assessments of how animals behave, the team needed to include multiple groups of animals that might adapt to metropolitan life in different ways. This required working with experts on many species. Secondly, research on urban biodiversity has traditionally focused on cities from the global north and Australia. Yet cities in the global south are also critical biodiversity hot spots, and they are expected to expand significantly in the coming decades. As much as 90 percent of the increase in urban populations between now and 2050 will take place in Asia and Africa, amounting to billions of additional people living in urban areas in these regions. Such a large amount of urban expansion could mean key habitat and species losses; a better understanding of urban biodiversity in these places will be needed if losses like these are to be stopped. Knowledge from papers like this could help. “Organisms live or die based on environmental conditions like habitat availability, food, lethal threats,” says Loren Byrne, a professor of biology and environmental science at Roger Williams University in Rhode Island, who was not involved in the research. “This paper provides some fascinating new perspectives about how to think about this filtering process.” If you look at the traits animals are adopting to survive in urban environments, you can see how cities could be modified to become more habitable to a wider variety of species. For example, to encourage a wider variety of birds and bees, you could increase the number of potential nesting sites. And to help reptiles and amphibians avoid ecological traps, city planners could introduce more connections between waterways to allow them to move around wider areas. But more research is needed to see what specific changes certain species would need to thrive. “This research does not provide the specific information about species that is actually needed for implementing good conservation plans,” says Byrne. “So there’s more work to be done in that regard.” Would this work be worth it? Does having a richer, more diverse array of wildlife in cities, as opposed to fewer, more dominant species, really make a difference? The answer, according to ecologists WIRED spoke to who weren’t involved in the research, is a resounding yes. “Wildlife can help mitigate against the impact of climate change in cities,” says Nathalie Pettorelli of the Zoological Society of London. Greater biodiversity provides knock-on benefits, what are known as “ecosystem services.” “Macroinvertebrates that live in the soil keep the soil alive and well,” says Pedro Pinho from the Centre for Ecology, Evolution, and Environmental Changes at the University of Lisbon. And healthy soil is really important in cities, Pinho adds, because it can absorb a lot of water.
This can help to avoid flash flooding during heavy rains and protect against drought. A more vibrant urban ecosystem also helps plant life thrive and suck more CO2 down from the air. “We can get more carbon stored in soils when the insects and their predators are present than when they’re absent,” says Oswald Schmitz, a professor of population and community ecology at the Yale School of the Environment. Having more animal life in cities can also protect human health. One effect of climate change is that it can expand the areas where disease-carrying insects, such as mosquitoes, can thrive, raising their populations in cities. A more diverse set of predators can keep these insects in check. “Those can be animal species, like birds or bats,” says Pinho. “We can’t forget that a lot of organisms in the city are fun to watch, like birds and butterflies,” says Byrne. “People derive educational value, psychological and spiritual value from living alongside other organisms.” An important fact, given that more than two-thirds of the world’s population is projected to live in cities by 2050. Falling biodiversity is a global problem, and cities are already responding to the UN’s call to “be part of the solution” by investing in green infrastructure—parks, green belts, urban forests. London has invested almost £30 million ($37 million) since 2016, and New York a huge $3.5 billion on its waterways since 2012. In 2021, 31 mayors from cities around the world pledged to cover up to 40 percent of their urban areas with green or blue infrastructure. Armed with knowledge from research like this, these sorts of investments can become better and better at improving urban biodiversity in the future—and make city wildlife about much more than pigeons, rats, and foxes. “Fundamentally, biodiversity underpins our world, and the sustainability and resilience of our systems,” says Hahs. “If we want to have sustainable and resilient urban areas, we need biodiversity.”
Environmental Science
When it comes to the United States phasing out PFAS, the “forever chemicals” are true to their nickname in more ways than one. It’s not going to be straightforward or swift to eliminate these substances from countless industries, even though they have been potentially linked to myriad health issues. Found in products like food packaging, clothes and firefighting foam, PFAS have contaminated drinking water sources nationwide since becoming commercially available in the middle of the last century, building up in the environment where they won’t break down for a very long time. A recent study concluded that rainwater, surface water and ground soil across the globe are extensively contaminated with these chemicals to a point that cannot be reversed without expensive, advanced technological intervention. “This stuff is toxic at incredibly low levels and it’s persistent — it stays there for hundreds of years in the groundwater, thousands of years,” said Graham Peaslee, a Notre Dame professor and researcher who’s tested many products for PFAS in his lab. “And that means the next generations will be drinking it, and that’s not the kind of legacy we want to leave our kids.” It’s a familiar story that has played out before, from DDT to PCBs. A hazardous chemical is widely used, its adverse health and environmental effects are revealed far after the fact, scientists and other concerned parties ring the alarm, and the substance in question finally garners federal attention, sometimes in the form of improved regulation or, more rarely, a full-stop ban. We’re well within the third act of that script when it comes to PFAS, with many researchers and consumers calling on industries and institutions to phase these chemicals out of their products, manufacturing processes and general use, and instead pursue safer alternatives that serve similar functions. The Environmental Protection Agency recently issued two updated interim drinking water health advisories for PFOA and PFOS — two legacy, or “long chain,” and well-studied PFAS that have been phased out of manufacturing in the U.S. but are still used in other parts of the world, meaning products or materials that contain them can still be imported. The agency also issued advisories for two newer, “short chain” PFAS known as PFBS and “GenX chemicals” that were developed to replace the legacy substances yet are still problematic from a health and environmental standpoint. Those EPA advisories don’t carry the force of law; PFAS are largely unregulated, and nothing is stopping manufacturers from using the chemicals in their supply chains, which are often murky to begin with. Companies face limited pressure — at least at the federal level — to get them out of their supply chains. Multiple states, though, have taken their own legislative steps toward phasing PFAS out or outright banning them in certain products. Beyond the regulatory world, researchers are leading the way with a vision of what it means to address PFAS contamination at its source. Some companies are also voluntarily taking steps to help make that happen. It’s realistically going to take several more decades, Peaslee said, before we can truly get a handle on PFAS. But that doesn’t mean that efforts to stop further contamination by getting it out of existing manufacturing practices and products will be fruitless. What are PFAS, and why are they considered hazardous? The term “PFAS” stands for per- and polyfluoroalkyl substances.
It refers to a family of thousands of different chemicals that have a wide range of commercial and industrial uses. These substances are particularly good at repelling things — their dual hydrophobic and hydrophilic properties help them resist water, plus oils and stains. These qualities help make products waterproof, stain-proof or non-stick, in addition to their use as industrial lubricants. PFAS have been detected in goods ranging from cosmetics to period underwear to anti-fogging cloths and sprays for glasses, among many others. A 2020 study identified them across 200 different use categories. Only a handful of those thousands of chemicals have been well-studied to determine their impacts on human health. Many experts argue for approaching PFAS as a class of chemicals — as in assuming that less studied members of the chemical family may have health and environmental impacts akin to those that have been better researched, and making decisions around their use accordingly. Existing evidence suggests that high levels of exposure to PFAS – among those that have been better studied – may lead to increased cholesterol levels, decreased vaccine responses in children, higher risk of preeclampsia in pregnant people and increased risk of kidney and testicular cancer, and other outcomes, according to the Agency for Toxic Substances and Disease Registry. In other words, limited research so far suggests that these chemicals can affect multiple systems in the body, said Courtney Carignan, an environmental epidemiologist and assistant professor at Michigan State University. “It seems that the property that makes them useful — that they’re very persistent and they have this one part of them that really likes water and the other part that does not — also seems to be what makes them problematic in the body,” Carignan said. Legacy PFAS like PFOA and PFOS were known to take years to leave the body, whereas the shorter chain ones more often in use today are shown to be expelled on a timescale of months. For consumers, labels can be confusing or misleading — a product may boast its “PFOA-free” status, for example, but that’s just one chemical within the PFAS family. Both legacy and shorter chain types persist in the environment and can have human health impacts regardless of how long they take for your body to eliminate them, which is why many experts maintain that there’s no world in which continuing their use is justified. “I’ve never met the good PFAS, and there are no such things,” Peaslee said. “They are all long-lived, they all bioaccumulate, a good number of them are shown to be toxic and the rest we just haven’t measured yet.” How do PFAS get into our bodies? Humans can be exposed to PFAS via ingestion, such as by drinking contaminated water or eating fish in which these chemicals have bioaccumulated. Inhalation is another route, and it can happen via indoor air — for example, if the furniture or carpeting in your home or office has been treated with PFAS to prevent stains — or outdoor air, particularly if you live close to a factory that emits PFAS through its stacks. When it comes to major sources of PFAS contamination in the U.S., “the biggest culprit to date” has been firefighting foam, also known as AFFF, Peaslee said. As of 2021, the Department of Defense was investigating nearly 700 military installations where this foam was used extensively, often during training operations, where it had ample opportunity to permeate the environment.
Multiple institutions have made the switch to PFAS-free firefighting foam in recent years, or are at least in the process of doing so. Congress has ordered the Department of Defense, for example, to switch to PFAS-free firefighting foam by October 2024. But Peaslee noted that the transition isn’t quite that simple — for one thing, countless gallons of the older, fluorinated foam are still on the shelves at fire stations nationwide, and each container could contaminate hundreds of millions of gallons of water (the arithmetic behind that figure is sketched at the end of this article). Safely disposing of it is a massive task. The turnout gear that firefighters wear when they respond to fires is also often treated with PFAS in order to help it resist moisture and heat, and many are concerned that wearing and handling it could put them at additional risk. An independent committee facilitated by the National Fire Protection Association has recently drafted new proposed safety standards for that gear, which are open to public comment. Though exposure through consumer products is a reasonable concern, there are two even larger facets of the problem, said Shari Franjevic, who leads the GreenScreen For Safer Chemicals program at the nonprofit Clean Production Action. One is how that product came to exist in the first place – people who might work at a plant where PFAS are produced or heavily used are typically among the most exposed to the hazardous chemicals. The other is where it will end up once it’s discarded, which is a problem for those who live nearby and are exposed through contaminated drinking water. Once a product that contains PFAS is thrown away, it can contaminate the environment in the form of leachate that eventually passes through our wastewater treatment systems, which were not designed to remove those chemicals, Carignan said. “I can wrap my hotdog or hamburger in this packaging, and the grease will never come through it,” Peaslee said, explaining the cycle. “That’s good, except that when we throw that wrapper away, 100% of that PFAS will come off in a landfill in 60 days, and then we’re all drinking it.” Getting PFAS out of products Plenty of products contain PFAS on purpose in order to perform a specific function. But to Franjevic and the GreenScreen program, there’s a distinction between intentionally added PFAS and those that most likely resulted from cross-contamination during the manufacturing process. She argues that “turning off the tap on PFAS” means prioritizing getting the chemicals out of products into which they’ve historically been added on purpose. GreenScreen helps companies by examining whether chemicals in their products have the potential to harm human health, like PFAS, and works on how to either swap them out with safer alternatives or reduce exposure if their use is absolutely essential. This comprehensive, hazard-first approach helps prevent manufacturers from going down the well-trod path of using substitutes that still come with a slew of their own health and environmental concerns. In the PFAS world, many researchers point to those shorter chain chemicals currently still in use that were considered solid replacements for legacy PFAS as an example of that phenomenon. Meanwhile, a plastic part that’s used in a broader product might not contain PFAS by design, but could still have detectable amounts of the forever chemicals when tested. That could be because the manufacturer uses a PFAS-containing release agent that helps each part pop out of its mold faster to speed up the production process, Franjevic said.
Supply chains are often long, and there’s plenty of room for cross-contamination. In her view, it’s a first-things-first type of situation: Give companies a realistic pathway toward getting intentionally added PFAS out of their products, and then address impurities. “To notch down impurities now to really, really low thresholds puts almost an unfair burden [on manufacturers], and it’s not prioritizing where the biggest impact is,” she said. “And so we’re trying to be pragmatic about, ‘How do we really create the change we need to see in the world?’” Several states have passed legislation aimed at getting toxic chemicals out of consumer products, including Washington. After establishing what’s hazardous and what’s a viable alternative, the state can take steps to restrict the use of a chemical of concern or mandate that consumers be notified if a product contains it, explained Rae Eaton, a chemist in the Hazardous Waste and Toxics Reduction Program at the Washington State Department of Ecology. Eaton works on a program that evaluates short-term food packaging — think takeout clamshell containers, bowls that hold hot soup or paper sandwich wrappers. PFAS are used in some of those materials to keep food from sticking to or soaking through its container before that packaging is discarded. “We’re using chemicals that can last for hundreds of years, sometimes for products that get used for 45 minutes, and then they go in the trash or they go in your compost,” Eaton said. Eaton noted that some compostable or recyclable food packaging contains PFAS, which is not good news for the industrial compost sites they’re designed for. She and her colleagues have released two reports on takeout-style packaging that analyzed a range of existing products and the purposes they serve, then detailed which alternative materials could be feasibly used in place of PFAS. It’s not a complete analysis of every alternative on the market, she said, but it does include a range of accessible options that are already in use. Some of those alternatives may be wax or clay-coated materials, ones that use polylactic acid (PLA), a biodegradable polymer that can break down under commercial composting conditions, or even switching to reusable packaging. Companies can use her team’s analysis as a resource on how to feasibly move away from PFAS-containing products and toward safer, more sustainable options. PFAS will be banned in nine types of food packaging in Washington by September 2024. Eaton said her team is now researching alternatives for longer-term food packaging, including microwaveable popcorn bags, baking paper and pet food bags, and actively soliciting input from businesses that make them, particularly if they already don’t use PFAS. What can governments and individuals do? In 1987, the Montreal Protocol aimed to phase out hazardous substances — including CFCs, or chlorofluorocarbons — that were known at the time to be depleting the ozone layer in Earth’s atmosphere. Today, that international agreement is largely considered a success — as of 2019, nations had phased out 98 percent of ozone-depleting substances, and the hole in the ozone layer that prompted international cooperation was getting smaller, according to the UN Environment Program. But there’s no comparable international agreement or imperative on PFAS. Some environmentally minded companies and governments have led the charge on working to ban or phase out some of these chemicals.
But it’s less clear how long it will take others to catch up – and change will depend on decision-makers committing to the effort. “There’s a combination of challenges that we have to overcome, [including] technical challenges to try and find replacements that work, but also the vested economic interests that we have to tackle,” said Ian Cousins, a professor in the department of environmental science at Stockholm University. He’s a leading proponent of a framework that depends on defining when and where the use of PFAS is actually essential. Plenty of companies are already interested in and working toward making a proactive pivot away from PFAS. But the U.S. regulatory system largely lacks teeth on this issue, and it’s not clear that federal officials will mandate that American companies stop using PFAS in their products and supply chains anytime soon. For now, when it comes to companies that aren’t taking initiative, a little consumer pressure can go a long way, Franjevic said. She encouraged concerned consumers to contact companies they care about and ask if their products contain PFAS or any other harmful chemicals, like phthalates. Corporations tend to track those types of requests, and when they get to a certain number, she added, they may take action. “If they get enough people asking, they will do the work,” Franjevic said. “It’ll get on their radar. So ask.”
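As promised earlier, here is the dilution arithmetic behind the claim that a single container of fluorinated foam could contaminate hundreds of millions of gallons of water. Every input below is hypothetical, since the PFAS content of AFFF concentrates varies by product and era and advisory levels differ by compound and have changed over time; the point is the size of the ratio.

```python
# Back-of-the-envelope dilution arithmetic behind "one container can
# contaminate hundreds of millions of gallons." All inputs are hypothetical:
# actual PFAS content of AFFF concentrate varies by product and era, and
# advisory levels differ by compound and have changed over time.

container_gallons = 5.0             # hypothetical drum of AFFF concentrate
pfas_in_concentrate_ng_per_l = 1e9  # hypothetical: ~1 gram of PFAS per liter
advisory_ng_per_l = 10.0            # hypothetical drinking-water advisory (10 ppt)

# Volume of water the container's contents could push up to the advisory level:
dilution_factor = pfas_in_concentrate_ng_per_l / advisory_ng_per_l
contaminated_gallons = container_gallons * dilution_factor
print(f"{contaminated_gallons:,.0f} gallons")  # 500,000,000 gallons
```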
Environmental Science
Climate change could be gradually making the world's tropical rainforests too hot for photosynthesis to occur, and it may eventually trigger their collapse, a new study has warned. Using data collected from the International Space Station (ISS), scientists found that a small yet growing percentage of tree leaves in tropical forests are approaching the maximum temperature threshold for leaves to photosynthesize. The average critical temperature beyond which photosynthetic machinery in tropical trees begins to fail is 116 degrees Fahrenheit (46.7 degrees Celsius). Currently, only 0.01% of all leaves surpass this critical temperature every year. But scientists warn that air temperature rises of 7.2 F (4 C) could push trees in tropical forests beyond a tipping point and into mass death. If this were to happen, it would spell disaster for Earth's climate systems and biodiversity, researchers report in a study published Wednesday (Aug. 23) in the journal Nature. "It's concerning from our perspective that you see nonlinear trends. So you heat the air by, let's say, 2, 3 degrees Celsius [3.6 to 5.4 F], and the actual upper temperature of these leaves goes up by 8 degrees [Celsius; 14.4 F]," Christopher Doughty, an associate professor of ecoinformatics at Northern Arizona University, said during a press conference on Monday (Aug. 21). "Even though a small percentage of leaves are currently doing this, our best guess is that a 4 degrees Celsius increase in temperature could cause some serious issues for certain tropical forests." How to take a rainforest's temperature Tropical rainforests are vital regions for our planet. They encompass 3 billion acres (1.2 billion hectares), or around 6%, of Earth's surface area, and are home to half of the world's animal and plant species. They are also vital stores of the world's fresh water — with the Amazon Basin alone storing one-fifth. Photosynthesis in rainforests produces 32% of the planet's oxygen and helps stabilize global climates by sucking billions of tons of carbon dioxide from the atmosphere each year. To build up a picture of the temperatures in the world's tropical forests, the researchers turned to the Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) sensor on the ISS. The scientists combined ECOSTRESS temperature readings from 2018 to 2020 with thousands of ground measurements made from infrared-sensing pyrgeometers in rainforests across South America, Central Africa and Southeast Asia. Aggregating these results revealed that canopy temperatures peaked at around 93.2 F (34 C) on average, and a small proportion exceeded 104 F (40 C). Moreover, every season, 0.01% of leaves exceeded a critical temperature beyond which their photosynthesis is likely to shut down, resulting in their deaths. This number may sound inconsequential, but the researchers noted it could increase rapidly. "While the number is small it has large implications — it's not going to go 0.01 to 0.02. It's going to jump nonlinearly, it's going to increase potentially much faster," Joshua B. Fisher, an associate professor of environmental science at Chapman University in California, said at the press conference. By performing laboratory leaf experiments at 3.6, 5.4 and 7.2 F (2, 3 and 4 C) of warming, the researchers found that temperatures around some of the leaves peaked much higher than the air temperature — by up to 14.4 F (8 C).
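The nonlinearity Doughty describes, where a small shift in the mean produces a disproportionate jump in the fraction of leaves past the threshold, is a generic property of distribution tails. A minimal sketch with an invented leaf-temperature distribution (the study itself used ECOSTRESS and ground measurements, not this model):

```python
import math

# Sketch of why a modest rise in mean leaf temperature sharply increases the
# fraction of leaves above the 46.7 C photosynthesis threshold reported in
# the study. The normal distribution and its parameters are invented for
# illustration only.

CRITICAL_C = 46.7

def fraction_above(mean_c: float, sd_c: float = 3.5) -> float:
    """Fraction of a normal leaf-temperature distribution above the threshold."""
    z = (CRITICAL_C - mean_c) / sd_c
    return 0.5 * math.erfc(z / math.sqrt(2))

for leaf_warming in (0.0, 2.0, 4.0, 8.0):  # the study found leaves can warm
    mean = 34.0 + leaf_warming             # ~8 C when air warms a few degrees
    print(f"mean {mean:.1f} C -> {fraction_above(mean):.4%} of leaves above {CRITICAL_C} C")
```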
Plugging these peak temperatures into a mathematical model, the scientists found that an average 7 F (3.9 C) increase in the air temperature surrounding the leaves caused the trees to close off the water-carrying stomata of the leaves most exposed to the heat, leading to those leaves' deaths. This triggered a cascade effect, increasing the temperature around the remaining leaves and potentially killing them, their branches and the trees in turn. "If you have 10% of the leaves dying, the whole branch is going to be warmer because a critical part of that branch can no longer cool the broader branch. Likewise you can make that assumption across the whole forest when a tree dies," Doughty said. Yet in spite of their findings, the scientists are optimistic that humanity has enough time to curb emissions and avoid potential tipping points in tropical forests. "This is a glimpse into a potential tipping point. It is not saying that the tropical forests are now going to be savannas tomorrow," Fisher said. "If you think about human health, you want to know if you're sick or have cancer so you can deal with it before it takes over." Ben Turner is a U.K.-based staff writer at Live Science. He covers physics and astronomy, among other topics like tech and climate change. He graduated from University College London with a degree in particle physics before training as a journalist. When he's not writing, Ben enjoys reading literature, playing the guitar and embarrassing himself with chess.
Environmental Science
Thirty years ago, the world’s nations agreed to prevent dangerous human interference with the climate system. But what is “dangerous climate change”? Just turn on the television, read the headlines of the morning paper or view your social media feeds. For we are watching it play out in real time this summer, more profoundly than ever before, in the form of unprecedented floods, heatwaves and wildfires. Now we know what dangerous climate change looks like. As has been said of obscenity, we know it when we see it. We’re seeing it – and it is obscene. Scorching temperatures persist across Europe, North America and Asia, as wildfires rage from Canada to Greece. The heat is as relentless as it is intense. For example, Phoenix, Arizona, has broken its record of 18 consecutive days above 110F (43.3C). Even the nights, generally relied upon as a chance to recover from the blistering days, now offer little relief: for more than a week, night-time temperatures in Phoenix have exceeded 90F (32.2C). Meanwhile, severe and deadly flooding has stricken South Korea, Japan, and the north-east United States, from Pennsylvania to Vermont. The climate crisis – and yes, it is now a crisis – is endangering us now, where we live. Whether it’s the recurrent episodes of hazardous air quality in the east coast cities some of us call home from windblown Canadian wildfire smoke or the toll sadly now being measured in human lives from deadly nearby floods, we are witnessing the devastating and dangerous consequences of unabated human-caused warming. That is a fact. Indeed, as you “doomscroll” on whatever social media platform you prefer these days, you might see selective images and graphs that would lead you to think Earth’s climate is spinning out of control, in a runaway feedback loop of irreversible tipping points leading us down an inescapable planetary death spiral. But that’s not what’s happening. The average warming of the planet – including the most up-to-date measurements for 2023 – is entirely consistent with what climate modelers warned decades ago would happen if we continued with the business-as-usual burning of fossil fuels. Yes, there are alarming data coming in, from record-shattering loss of winter sea ice in the southern hemisphere to off-the-charts warmth in the North Atlantic with hot tub-grade waters off the Florida coast. We’ve also seen the hottest week on record for the planet as a whole this month. We can attribute blame to a combination of ongoing human-caused warming, an incipient major El Niño event and the vagaries of natural variability. These episodes are a reminder that we can not only expect to see records broken, but shattered, if we continue burning fossil fuels and heating up the planet. And one of the areas where observed trends truly are exceeding the predictions of climate models is in those extreme weather events we are seeing this summer. One of us has been involved in research that suggests that climate models are still not capturing some of the more subtle physical mechanisms behind persistent summer weather extremes. As the Arctic warms faster than lower latitudes, the temperature difference between the poles and tropics decreases and the jet stream – which is driven by that difference – weakens. Under certain conditions that can lead to a slow, wiggly jet stream, with amplified weather systems that get stuck in place. 
When weather systems stall like this, the same regions get baked or rained on day after day – precisely the sort of persistent, extreme weather events we’re experiencing this summer. The incessant parade of heat domes, floods and tornado outbreaks this summer seems to suggest a precarious if not downright apocalyptic “new abnormal” that we now find ourselves in. And it understandably feeds the fearful impression that we’ve exceeded some sort of breaking point in our climate. How do we reconcile that impression with the picture that emerges from the steady, rather than erratic, warming response we see in both the observations and models? The answer is that the behavior of Earth’s climate system represents a tussle between sometimes opposing mechanisms that alternatively favor stability and fragility. That constant tussle is evident in an examination of Earth’s past climate history. If the system is pushed, it responds steadily – to a point. Push too hard, however, and we risk crossing certain “tipping points”, such as the disintegration of the ice sheets and the massive sea level rise that will ultimately follow. The only way to avoid crossing these tipping points is to stop heating up the planet. And comprehensive Earth system models show that if we stop adding carbon pollution, the warming of Earth’s surface stops soon thereafter. So that brings us back to where we started. Yes, we have failed to prevent dangerous climate change. It is here. What remains to be seen is just how bad we’re willing to let it get. A window of opportunity remains for averting a catastrophic 1.5C/2.7F warming of the planet, beyond which we’ll see far worse consequences than anything we’ve seen so far. But that window is closing and we’re not making enough progress. We cannot afford to give in to despair. Better to channel our energy into action, as there’s so much work to be done to prevent this crisis from escalating into a catastrophe. If the extremes of this summer fill you with fears of imminent and inevitable climate collapse, remember, it’s not game over. It’s game on. Michael E Mann is a professor of earth and environmental science and the director of the Center for Science, Sustainability and the Media at The University of Pennsylvania. He is author of the forthcoming book Our Fragile Moment: How Lessons from Earth’s Past Can Help Us Survive the Climate Crisis Susan Joy Hassol is the director of Climate Communication. She publishes Quick Facts, on the links between extreme weather and climate change, and recently published a piece in Scientific American on the importance of language in communicating about climate
Environmental Science
National economies recover faster when countries are powered by renewable energy, says new research National economies recover significantly faster from shocks when countries are powered by renewable energy sources, according to new research that has profound implications for global energy policy. Researchers from Trinity College Dublin looked for patterns in data from 133 systemic economic crises that affected 98 countries over a 40-year span. While their analyses show that countries relying on a broader range of energy sources experience longer recovery times, the best predictor of economic recovery was the extent to which a country relied on renewable energy. Underlining the significance of the finding is the fact that while data came from a widely diverse set of societies and their economies, the extent of reliance on renewable energy consistently accounted for a major proportion of the variability in economic recovery time. Ireland (and the 2008 economic crash) was among the countries included in the combined analysis. Currently, the Irish goals for integrating renewables into its energy supply chain include a target to produce 80% of its electricity from renewable sources by 2030. As such, this work provides strong support for a strengthened national focus on transitioning to a greater reliance on renewable energy. Ian Donohue, Professor in Environmental Science and Head of Trinity's School of Natural Sciences, is the lead author of the research, which is published in Ecological Economics. He said, "Our findings highlight the importance of the intrinsic link between natural resources provided by ecosystems and the stability of the economies that rely on them. Ultimately, they point to the need for a fundamental reassessment of both national and international energy policy, not only for the sake of our environment, but also to enhance the stability—and sustainability—of our economies." Professor Robert Costanza, Professor of Ecological Economics at the Institute for Global Prosperity at University College London, and co-author of the study, said, "Although the mechanisms underpinning our results are unclear, one likely explanation is that renewables accelerate recovery because they are locally-produced and not subject to the high volatility of availability and prices connected with fossil fuels." Focusing on the situation in Ireland, Professor Donohue said, "This work provides another compelling reason to double down on our efforts to focus on renewable energy sources such as on- and off-shore wind. Doing so would add a third 'win' to what was already a win-win scenario, as a greater reliance on renewable energy will reduce our carbon emissions, help protect our precious biodiversity and now, seemingly, also provide a more resilient economy." More information: Ian Donohue et al, Accelerated economic recovery in countries powered by renewables, Ecological Economics (2023). DOI: 10.1016/j.ecolecon.2023.107916 Provided by Trinity College Dublin
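To see what "accounted for a major proportion of the variability in economic recovery time" means in practice, here is a toy version of that kind of cross-country regression on synthetic data. It is not the study's dataset or code; it only illustrates fitting recovery time against renewable share and reporting variance explained.

```python
import numpy as np

# Toy illustration of the kind of analysis described above: regress recovery
# time on renewable-energy share and report variance explained (R^2). The
# data here are synthetic; the study analysed 133 real crises in 98 countries.

rng = np.random.default_rng(0)
renewable_share = rng.uniform(0, 0.8, size=133)  # fraction of energy mix
recovery_years = 8 - 6 * renewable_share + rng.normal(0, 1.5, size=133)

# Ordinary least squares fit: recovery_years ~ a + b * renewable_share
A = np.column_stack([np.ones_like(renewable_share), renewable_share])
coef, *_ = np.linalg.lstsq(A, recovery_years, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((recovery_years - pred) ** 2) / np.sum(
    (recovery_years - recovery_years.mean()) ** 2
)
print(f"slope: {coef[1]:.2f} years per unit of renewable share, R^2: {r2:.2f}")
```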
Environmental Science
Coastal water pollution transfers to the air in sea spray aerosol and reaches people on land, confirms study New research led by Scripps Institution of Oceanography at UC San Diego has confirmed that coastal water pollution transfers to the atmosphere in sea spray aerosol, which can reach people beyond just beachgoers, surfers, and swimmers. Rainfall in the US-Mexico border region causes complications for wastewater treatment and results in untreated sewage being diverted into the Tijuana River and flowing into the ocean in south Imperial Beach. This input of contaminated water has caused chronic coastal water pollution in Imperial Beach for decades. New research shows that sewage-polluted coastal waters transfer to the atmosphere in sea spray aerosol formed by breaking waves and bursting bubbles. Sea spray aerosol contains bacteria, viruses, and chemical compounds from the seawater. The researchers report their findings March 2 in the journal Environmental Science & Technology. The study appears in the midst of a winter in which an estimated 13 billion gallons of sewage-polluted waters have entered the ocean via the Tijuana River, according to lead researcher Kim Prather, a Distinguished Chair in Atmospheric Chemistry, and Distinguished Professor at Scripps Oceanography and UC San Diego's Department of Chemistry and Biochemistry. "We've shown that up to three-quarters of the bacteria that you breathe in at Imperial Beach are coming from aerosolization of raw sewage in the surf zone," said Prather. "Coastal water pollution has been traditionally considered just a waterborne problem. People worry about swimming and surfing in it but not about breathing it in, even though the aerosols can travel long distances and expose many more people than those just at the beach or in the water." The team sampled coastal aerosols at Imperial Beach and water from the Tijuana River between January and May 2019. Then they used DNA sequencing and mass spectrometry to link bacteria and chemical compounds in coastal aerosol back to the sewage-polluted Tijuana River flowing into coastal waters. Aerosols from the ocean were found to contain bacteria and chemicals originating from the Tijuana River. Now the team is conducting follow-up research attempting to detect viruses and other airborne pathogens. Prather and colleagues caution that the work does not mean people are getting sick from sewage in sea spray aerosol. Most bacteria and viruses are harmless and the presence of bacteria in sea spray aerosol does not automatically mean that microbes—pathogenic or otherwise—become airborne. Infectivity, exposure levels, and other factors that determine risk need further investigation, the authors said. This study involved a collaboration among three different research groups—led by Prather in collaboration with UC San Diego School of Medicine and Jacobs School of Engineering researcher Rob Knight, and Pieter Dorrestein of the UC San Diego Skaggs School of Pharmacy and Pharmaceutical Science, both affiliated with the Department of Pediatrics—to study the potential links between bacteria and chemicals in sea spray aerosol with sewage in the Tijuana River. "This research demonstrates that coastal communities are exposed to coastal water pollution even without entering polluted waters," said lead author Matthew Pendergraft, a recent graduate from Scripps Oceanography who obtained his Ph.D. under the guidance of Prather. 
"More research is necessary to determine the level of risk posed to the public by aerosolized coastal water pollution. These findings provide further justification for prioritizing cleaning up coastal waters." More information: Bacterial and Chemical Evidence of Coastal Water Pollution from the 2 Tijuana River in Sea Spray Aerosol, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c02312. pubs.acs.org/doi/10.1021/acs.est.2c02312 Journal information: Environmental Science & Technology Provided by University of California - San Diego
Environmental Science
As the world’s first case of a parasitic roundworm infecting a human’s brain made headlines this week, infectious diseases experts warned the threat of novel infections is rising. In a case report written about the Australian patient in the journal Emerging Infectious Diseases, doctors who pulled the Ophidascaris robertsi larvae from her brain warned the case highlights the danger of diseases caused by viruses, bacteria, parasites and fungi passing from wildlife to humans, known as zoonotic infections. “The occasional person developing an odd brain worm from eating greens contaminated with python faeces containing the larvae is unfortunate,” says Rowland Cobbold, the University of Queensland’s associate professor of veterinary science. “But it’s a rare event, and if it’s treatable, that’s OK.” What is more concerning, he says, is that about 75% of the novel and emerging infections globally are zoonotic. They often include infections with far greater impact that can lead to millions of deaths, such as the Sars-CoV-2 virus, which causes Covid-19 and is believed to have emerged from bats. “It just comes down to the fact that the human population has never been higher, and we’re all looking for places to live and food to eat and it has an impact on the environment,” Cobbold says. “And so the environment impacts us back.” It’s an issue Dr Anthony Fauci, the former chief medical adviser to the president of the United States, warned of as he prepared to step down from the role earlier this year. In a perspective written for the New England Journal of Medicine, Fauci said: “There is no reason to believe that the threat of emerging infections will diminish since their underlying causes are present and most likely increasing.” “The emergence of new infections and the reemergence of old ones are largely the result of human interactions with and encroachment on nature,” he wrote. “As human societies expand in a progressively interconnected world and the human-animal interface is perturbed, opportunities are created, often aided by climate changes, for unstable infectious agents to emerge, jump species and in some cases adapt to spread among humans.” In response to these concerns, the World Health Organization (WHO) has begun holding a series of webinars on the “One Health” approach, which emphasises the importance of different sectors such as public health, veterinary science, social science, urban design, government and environmental science working together in all aspects of society. But WHO’s director of the department of pandemic and epidemic diseases, physician Dr Sylvie Briand, tells the Guardian this approach “doesn’t happen naturally”. “It requires effort for each sector to talk to each other to work together,” Briand says. “It requires sufficient financing to do those joint activities. And it requires, also, a political will to implement interventions. “Because, for example, the killing of poultry or the killing of mink that has happened during Covid-19 to prevent the spread of a new variant also has a real cost for the economy. So it’s important that governments understand these preventive measures may be costly, but it’s much less costly than to cure [a pandemic].” Wildlife not the ‘villain’ Since emerging diseases start, almost always, at the interface between environment, humans and animals, Briand says surveillance at this point is critical to the One Health approach. In many cases, Briand says, diseases that jump from wildlife to humans will result in no or little further transmission.
Recreating the conditions in which the Ophidascaris robertsi larvae infected a woman would be difficult, for example, since it involved a python (where the larvae usually reside) contaminating grass with the larvae through its faeces, and a human accidentally ingesting the larvae when collecting and preparing the grass for cooking. The patient’s body then failed to kill off the larvae through an immune response, likely because she had a preexisting condition that made her immunocompromised. The parasite is also not transmitted between people, so there is no pandemic risk. But in some cases, as with Covid-19, the virus was sufficiently fit to be transmitted to other humans after it likely spilled over to humans from wildlife sold at the Huanan seafood wholesale market in China. A pathogen that jumps from one species to another can sometimes exploit the new host’s lack of defences, causing disease. “So this is to highlight that human contact with different animal species under different conditions may favour disease emergence, or at least the initial spillover from one species to another,” Briand says. “We have seen in the past the emergence of Ebola in West Africa in remote villages in the forest, communities in contact with wild animals. The disease often starts locally in communities, and if they are not contained at this stage they can lead to epidemics.” Countries need to identify places where emerging diseases are most likely, she says. It might mean monitoring, for example, the movement of wild animals such as foxes and bats closer to urban areas as their habitats are destroyed, or monitoring farms and piggeries close to mosquito populations that can transmit viruses to humans. Cobbold says it also “comes down to being very careful with developments and changes in land use”. “There really has to be very careful management of our natural resources,” he says. “When we have urban … developing decisions being made by local governments, for example, a One Health perspective needs to be kept in mind so that we are thinking about the risks that could come with changes in land use, particularly where we’re going to interact more with wildlife. “But I think a lot of people outside of research and health communities aren’t so aware of the One Health concept. There’s also, of course, conflicts of interest as councils want to have development and developers want to develop and people need places to live and farmers need places to farm.” Dr Michelle Baker, a principal research scientist with the Australian Centre for Disease Preparedness, says it is critical to remember that animals often blamed for emerging diseases, such as bats and birds, “play an amazing role in our ecosystem”. “I just never like to portray wildlife as the villains here, as they play such a critical role in areas like pollination and seed dispersal,” she says. “That’s why it’s so critical to have people from different sectors including the environment at the table when we talk about these issues. We’re becoming increasingly urbanised, and animals kind of don’t have anywhere else to go but closer to us. “Urbanisation is our fault. Climate change is our fault. And without all of these animals, we’d be in big trouble.”
Environmental Science
October 5 - The incredible human development over the last 10,000 years was founded on a relatively stable climate system. But now that stability is under threat more than ever. We are already at 1.2 degrees Celsius hotter than pre-industrial levels, and without immediate action face a 50-50 chance of planetary warming surpassing 1.5C in the next five years.

In September, my colleagues and I published a study showing that this rapid warming is unleashing climate tipping points – when massive parts of the climate system, which have remained stable since the dawn of civilisation, pass a point of no return and irreversibly change state. Ten years ago, we saw tipping points as low-risk but high-impact events, which were only likely if temperatures reached 3C or more later this century. But now, with more scientific understanding of interactions and self-reinforcing feedbacks, together with changes in the Earth system occurring faster than predicted, they are an increasingly present threat. In fact, our conclusion today is that four of the 15 known climate tipping element systems are likely to cross tipping points at 1.5C of warming. These include systems decisive for people and planet: the Greenland ice sheet, the west Antarctic ice sheet, tropical coral reef systems, and abrupt thawing of boreal permafrost.

Just as the science reveals with stark clarity the scale of the problem, we also now understand the solutions. We know that emissions must halve every decade from now to 2050. This exponential halving is established as a universal benchmark: a "carbon law" that applies to business and cities as well as countries. Renewable energy systems that replace oil, coal and gas are today available, economically competitive and scalable in most economies in the world. Fossil-fuel free solutions are available for difficult-to-abate sectors such as cement and steel. Circular business models can move beyond pilot scale if combined with the right policies for material reuse and recycling.

But halving greenhouse gas emissions from energy, industry and transport will not be enough. To stabilise global temperatures, we need natural climate solutions that turn working lands (farms, grazing pastures and forests) from amongst the highest emitters of greenhouse gases into enormous carbon stores. We need a carbon law for nature, and a path to get there. In September, Conservation International published an “exponential roadmap” for natural climate solutions. It charts, for the first time, how a rapid change in land management, protection and restoration of healthy ecosystems can turn a problem into a solution this decade.

A southeast Greenland polar bear on a glacier is seen in this handout photograph taken in September 2016. Thomas W. Johansen/NASA Oceans Melting Greenland/Handout via REUTERS

Stabilising temperature close to 1.5C means that greenhouse gas emissions from land, currently about 12.5 gigatonnes (Gt) each year, must not only reach net zero by 2030 but then become a sink of 5 Gt a year by 2040 and 10 Gt a year by 2050. We all have a role to play to make this happen. While governments need to provide leadership and direction, businesses must act at speed and scale. Three things must happen, urgently.

First, all companies with supply chains dependent on sourcing from land or ocean must adapt their business models to be in line with a net-zero world.
This starts with the carbon law for fossil fuels, and to this is added the carbon law for nature. Setting net-zero targets for both fossil fuels and nature is necessary to align business with the scientific assessment of what is required to hold the 1.5C line. Managed nature, i.e. agriculture, contributes 22% of global emissions, and fully half of that comes from deforestation and land conversion to grow food, fibres and fuel. Businesses must set net-zero targets and track progress against them annually or even monthly.

Second, these companies need to create nature-positive supply chains. That means sourcing forest and food commodities that are produced using regenerative agriculture and sustainable forestry practices that deliver positive outcomes for people, nature and climate.

Third, we need to develop high-integrity carbon credits that allow businesses to invest in nature and climate solutions in addition to their phase-out of fossil-fuel emissions. There is no room for offsetting between carbon law reductions and nature investments: both are needed with full force, simultaneously. This is the only way to make businesses truly net-zero, and in due course even net-negative, in other words absorbing more carbon than they emit, which is necessary in order to meet the scientific targets.

Key in all this is that nature-based solutions must recognise the legitimate domestic interests of developing countries, including smallholders, indigenous peoples and local communities. All these elements need to be underpinned by robust and transparent reporting.

Britain's King Charles visits FarmED, a center for food and farm education in Oxfordshire, when he was Prince of Wales. He has championed sustainable agriculture. REUTERS/Toby Melville/Pool

Meanwhile, governments, multilateral institutions and development banks, with banks, investors and business, must help address the massive finance gap for nature ahead of the COP27 climate summit in November. Averting the growing climate crisis is still possible, but only by harnessing the power of nature alongside rapid emissions reductions. Businesses large and small have an opportunity to be part of the solutions, safeguarding not only their own future, but that of the planet.

We must now do everything within our power to keep the temperature as low as possible because every tenth of a degree matters. Time is not on our side, and we need the entire world to act together at an exponential pace. This urgency requires massive efforts to dramatically reduce greenhouse gas emissions and to manage, restore and protect nature and farmlands, starting immediately. The biggest effort needs to happen now, this decade. Governments need to guide a stable transformation of societies. Consumers need support and guidance to change their lifestyles, which will improve people's lives in many ways. And, of course, businesses need to lead the way and drive the change.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Sustainable Business Review, a part of Reuters Professional, is owned by Thomson Reuters and operates independently of Reuters News.

Johan Rockström is Director of the Potsdam Institute for Climate Impact Research, Professor at the Institute of Earth and Environmental Science at Potsdam University, and Professor in Water Systems and Global Sustainability at Stockholm University.
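To make the arithmetic of the "carbon law" concrete, here is a minimal sketch of the trajectory described in the piece above. The 12.5 Gt land flux and the 2030/2040/2050 sink milestones are taken from the article; the 40 Gt fossil-fuel starting point is an assumed round figure, so the numbers are illustrative rather than any official scenario.

```python
# Illustrative sketch of the "carbon law": fossil-fuel emissions halve
# every decade to 2050, while land moves from a ~12.5 Gt/yr source of
# greenhouse gases to a 10 Gt/yr sink. Milestones are from the article;
# the 40 Gt/yr starting value is an assumption made for illustration.

fossil_2020 = 40.0  # Gt per year, assumed round figure

for decade, year in enumerate(range(2020, 2060, 10)):
    fossil = fossil_2020 / (2 ** decade)  # halving each decade
    print(f"{year}: fossil emissions ~{fossil:.1f} Gt/yr")

# Land-sector milestones stated in the article (negative = net sink)
land_milestones = {2020: 12.5, 2030: 0.0, 2040: -5.0, 2050: -10.0}
for year, flux in land_milestones.items():
    label = "source" if flux > 0 else "sink" if flux < 0 else "net zero"
    print(f"{year}: land flux {flux:+.1f} Gt/yr ({label})")
```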
Environmental Science
HALIFAX, Nova Scotia — Just over a year ago, Canadian oceanographer Will Burt was in Fairbanks, Alaska, teaching college students about the effects of global warming on marine life when a former colleague approached him about a startup seeking to use the ocean to remove carbon from the atmosphere. “I didn’t have to think about it,” Burt said. Eight months later, Burt was on a fishing boat off the shores of Nova Scotia, running experiments with a group of researchers as part of a moonshot effort to curb climate change. “I feel like every scientist here on this ship has had a sense of, ‘Now, this is why I got into this,’” Burt said as researchers adjusted carbon measurement tools hanging off the side of the boat. The ocean plays a critical role in curbing climate change. Like forests and wetlands, it naturally absorbs carbon dioxide from the atmosphere at a massive scale. Burt works for Planetary Technologies, a Canadian startup that’s attempting to harness and accelerate that potential by adding antacid powder to the ocean. The theory goes that by altering seawater chemistry, the ocean’s surface could absorb far more atmospheric carbon than it does naturally. The company is developing an approach that would turn the waste products from shuttered mines into an alkaline powder. It would deliver the powder into the water via existing pipes from wastewater treatment or energy plants to avoid having to build new infrastructure. The technique is one of a growing number of strategies aimed at leveraging the ocean, which covers 70 percent of Earth’s surface, in the fight against climate change. In 2021, the National Academies of Sciences published a landmark report advocating for further research into ocean-based carbon removal methods, in light of the growing scientific consensus that reducing emissions alone will not be enough to stave off the devastating effects of climate change. The report highlighted everything from large-scale seaweed farming to electrochemically altering the water’s chemistry, while acknowledging that research on the viability and potential trade-offs of these strategies is nascent at best. “If we want to make fully informed decisions about the future of our ocean and climate, we need to complete some very critical research in the next decade,” Scott Doney, chair of the committee and professor of environmental science at the University of Virginia, said at the time. In the race to get there, Planetary Technologies has company. One startup intends to spread ground minerals over beaches in Long Island and the Caribbean, in the hope that they will gradually wash away and alkalinize the waters there. Another method that’s gained traction involves using underwater pipes to pump up nutrient-rich water from the ocean’s depths to promote phytoplankton growth on the surface. “When it comes to carbon removal, there’s no silver bullet. There has to be silver buckshot. And that means that we’re going to need a lot of these approaches to work,” said Wil Burns, an environmental policy professor at Northwestern University. In the mid-2000s, oceanographer Greg Rau started running small-scale beaker experiments in which he adjusted the alkalinity of seawater and then measured how much carbon was absorbed. Rau, then a researcher at the University of California, Santa Cruz, filed a few patents for what would later be called “ocean alkalinity enhancement,” but the method didn’t get much attention. “People were not running to my doorstep. Let’s put it that way,” Rau said.
That changed in 2019, when Rau was approached by Planetary co-founders Brock Battochio and Mike Kelland. “You can spend your whole career studying how much the ocean is going to acidify, but at some point you want to start thinking about how you avoid that, rather than just sitting and watching the ship sink,” Rau said. Planetary intends to recycle mine waste from a defunct asbestos mine in Quebec to produce pure magnesium hydroxide, which the company believes would help accelerate the ocean’s carbon uptake in the areas where it’s used. The strategy is inspired by the natural process of chemical rock weathering, where rain, which is slightly acidic, “weathers” or erodes the surface of rocks and minerals, and then transfers that alkalinity to the ocean via runoff. It’s a process that occurs with or without human intervention, but on geologic time scales. “We need something much more rapid than what nature can muster at this point,” Rau said. According to estimates by the National Academies of Sciences, even if the global community meets its emissions reduction goals, by 2050 it will still need to remove an additional 10 gigatons of CO2 annually to avoid devastating climate outcomes. Scientists have to walk a delicate line: design a method that’s scalable and effective enough to actually affect the climate without adversely affecting the environment in the process. “People, for better or worse, perceive the oceans as pristine, and they’re going to have some serious concerns about interventions of this nature,” said Burns, referring to a fear in the scientific community that any negative effects of, or public distrust in, one ocean-based carbon capture method could create backlash against all other approaches. Much of that fear stems from a scandal that erupted in 2012. A Canadian company experimented with ocean fertilization by dumping 120 tons of iron-enriched dust into the ocean off the coast of British Columbia to stimulate phytoplankton growth. The experiment caused a plankton bloom so large it was reportedly visible from space. An international uproar ensued. While there was no evidence that the experiment did any harm, the international scientific community considered it a public relations disaster. “It just backfired massively. So this time, I think we should be really careful to get everyone on board,” said Lennart Bach, a marine biogeochemist at the University of Tasmania in Australia. In an attempt to pre-empt fears over safety, Planetary is partially funding research at Dalhousie University in Nova Scotia into oyster reproduction and phytoplankton growth. Oceanographer Hugh MacIntyre, who has studied phytoplankton for more than 35 years, said the research is starting with the microscopic algae for a reason. “Every organism that you see in the ocean, whether it’s an orca or a fish, a starfish, a lobster, whatever — it eats something that ate something that ate the phytoplankton,” MacIntyre, a professor at Dalhousie University, said. So far, MacIntyre’s tests haven’t resulted in significant negative impacts on plankton growth, and he’s using a concentration of magnesium hydroxide that’s 10 times higher than what Planetary actually intends to use. “We’re going way on the extreme because we want to know at what point would it make a difference,” he said. MacIntyre said he’ll never be able to definitively prove that the antacid will have no harmful effects on marine life, but he can test how the plankton fare when pushed to extremes.
“Ultimately, the question is, at what point are you confident enough that there’s not a problem?” he said. Tinkering with ocean chemistry raises complicated legal questions. While there’s no specific legal framework for ocean-based carbon removal in the U.S. or Canada, there are treaties that regulate such things as dumping waste into the ocean. Unless regulatory bodies come up with new rules and permitting processes that are designed with ocean-based carbon capture in mind, the status quo will, at best, pose artificial or unnecessary barriers to launching safe and responsible projects, and, at worst, create gaps in oversight that bad actors could take advantage of, said Romany Webb, an environmental lawyer and deputy director of Columbia University’s Sabin Center for Climate Change Law. “Those existing laws that we’re trying to apply to these techniques weren’t developed with these techniques in mind, and so often aren’t a particularly good fit for either facilitating these sorts of activities, or for ensuring that they’re conducted in a safe and responsible way,” Webb said. So far, Planetary Technologies has been fine-tuning the ability to measure their carbon uptake off the shores of Nova Scotia — no small feat in an ocean already saturated with massive amounts of carbon. But the company intends to start running small-scale ocean pilots — adding their antacid and measuring the change in carbon absorption — in Canada and the U.K. later this year. The e-commerce giant Shopify has already committed to purchasing 730 tons in future carbon credits from the company — a move designed to accelerate its efforts to perfect the method. Kelland said the amount of magnesium hydroxide used in each place will be well within existing regulatory limits. Ultimately, the company aims to capture a gigaton of carbon a year by 2045. Kelland and Battochio know there are many who say that achieving that goal and actually making a difference in the climate with this method is a long shot, but said they believe having “gigaton ambition” is necessary. “This business can be incredibly successful at a couple of million tons of carbon removal,” Kelland said. “But that’s not going to make the kind of difference we need to make in the world.” “Ultimately it’s going to be up to society as to whether we want to solve the climate change crisis and how bad we want to let it get. That’s it.”
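As background to the alkalinity chemistry described above, the idealized ocean alkalinity enhancement reaction and its stoichiometric ceiling can be sketched in a few lines. This is a textbook simplification, not Planetary's process specification; real seawater carbonate equilibria and dissolution losses would pull the achievable figure below this bound.

```python
# Idealized ocean alkalinity enhancement stoichiometry (simplified):
#   Mg(OH)2 + 2 CO2 -> Mg(2+) + 2 HCO3(-)
# i.e. each mole of magnesium hydroxide can, at most, convert two moles
# of dissolved CO2 into bicarbonate that stays in the water.

M_MGOH2 = 58.32  # g/mol, molar mass of Mg(OH)2
M_CO2 = 44.01    # g/mol, molar mass of CO2

ceiling = 2 * M_CO2 / M_MGOH2
print(f"~{ceiling:.2f} t CO2 per t Mg(OH)2, a theoretical upper bound")
# Prints ~1.51; incomplete dissolution, slow air-sea equilibration and
# carbonate precipitation would push the real-world number lower.
```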
Environmental Science
On Monday, in a low-lying tract of southern Georgia’s pine belt, a half-dozen workers planted row upon row of twig-like poplar trees. These weren’t just any trees, though: Some of the seedlings being nestled into the soggy soil had been genetically engineered to grow wood at turbocharged rates while slurping up carbon dioxide from the air. The poplars may be the first genetically modified trees planted in the United States outside of a research trial or a commercial fruit orchard. Just as the Flavr Savr tomato ushered in a new industry of genetically modified food crops in 1994, the tree planters Monday hope to transform forestry. Living Carbon, a San Francisco-based biotechnology company that produced the poplars, intends for its trees to be a large-scale solution to climate change. “We’ve had people tell us it’s impossible,” Maddie Hall, the company’s co-founder and CEO, said of her dream to deploy genetic engineering on behalf of the climate. But she and her colleagues have also found believers: enough to invest $36 million in the 4-year-old company. The company has also attracted critics. The Global Justice Ecology Project, an environmental group, has called the company’s trees “growing threats” to forests and expressed alarm that the federal government allowed them to evade regulation, opening the door to commercial plantings much sooner than is typical for engineered plants. Living Carbon has yet to publish peer-reviewed papers; its only publicly reported results come from a greenhouse trial that lasted just a few months. These data have some experts intrigued but stopping well short of a full endorsement. “They have some encouraging results,” said Donald Ort, a University of Illinois geneticist whose plant experiments helped inspire Living Carbon’s technology. But he added that the notion that greenhouse results will translate to success in the real world is “not a slam dunk.” 🌳🌳 Living Carbon’s poplars start their lives in a lab in Hayward, California. There, biologists tinker with how the trees conduct photosynthesis, the series of chemical reactions plants use to weave sunlight, water and carbon dioxide into sugars and starches. In doing so, they follow a precedent set by evolution: Several times over Earth’s long history, improvements in photosynthesis have enabled plants to ingest enough carbon dioxide to cool the planet substantially. While photosynthesis has profound impacts on the Earth, as a chemical process it is far from perfect. Numerous inefficiencies prevent plants from capturing and storing more than a small fraction of the solar energy that falls onto their leaves. Those inefficiencies, among other factors, limit how fast trees and other plants grow, and how much carbon dioxide they soak up. Scientists have spent decades trying to take over where evolution left off. In 2019, Ort and his colleagues announced that they had genetically hacked tobacco plants to photosynthesize more efficiently. Normally, photosynthesis produces a toxic byproduct that a plant must dispose of, wasting energy in the process known as photorespiration. The Illinois researchers added genes from pumpkins and green algae to induce tobacco seedlings to instead recycle the toxins into more sugars, producing plants that grew nearly 40% larger. That same year, Hall, who had been working for Silicon Valley ventures like OpenAI (the company behind ChatGPT), met her future co-founder Patrick Mellor at a climate tech conference.
Mellor was researching whether trees could be engineered to produce decay-resistant wood. With money raised from venture capital firms and Hall’s tech-world contacts, including OpenAI CEO Sam Altman, she and Mellor started Living Carbon in a bid to juice up trees to fight climate change. “There were so few companies that were looking at large-scale carbon removal in a way that married frontier science and large-scale commercial deployment,” Hall said. They recruited Yumin Tao, a synthetic biologist who had previously worked at the chemical company DuPont. He and others retooled Ort’s genetic hack for poplar trees. Living Carbon then produced engineered poplar clones and grew them in pots. Last year, the company reported in a paper that has yet to be peer reviewed that its tweaked poplars grew more than 50% faster than non-modified ones over five months in the greenhouse. The company’s researchers created the greenhouse-tested trees using a bacterium that splices foreign DNA into another organism’s genome. But for the trees they planted in Georgia, they turned to an older and cruder technique known as the gene gun method, which essentially blasts foreign genes into the trees’ chromosomes. In a field accustomed to glacial progress and heavy regulation, Living Carbon has moved fast and freely. The gene gun-modified poplars avoided a set of federal regulations of genetically modified organisms that can stall biotech projects for years. (Those regulations have since been revised.) By contrast, a team of scientists who genetically engineered a blight-resistant chestnut tree using the same bacterium method employed earlier by Living Carbon have been awaiting a decision since 2020. An engineered apple grown on a small scale in Washington state took several years to be approved. “You could say the old rule was sort of leaky,” said Bill Doley, a consultant who helped manage the Agriculture Department’s genetically modified organism regulation process until 2022. On Monday, on the land of Vince Stanley, a seventh-generation farmer who manages more than 25,000 forested acres in Georgia’s pine belt, mattock-swinging workers carrying backpacks of seedlings planted nearly 5,000 modified poplars. The tweaked poplars had names like Kookaburra and Baboon, which indicated which “parent” tree they were cloned from, and were interspersed with a roughly equal number of unmodified trees. By the end of the unseasonably warm day, the workers were drenched in sweat and the planting plots were dotted with pencil-thin seedlings and colored marker flags poking from the mud. In contrast to fast-growing pines, hardwoods that grow in bottomlands like these produce wood so slowly that a landowner might get only one harvest in a lifetime, Stanley said. He hopes Living Carbon’s “elite seedlings” will allow him to grow bottomland trees and make money faster. “We’re taking a timber rotation of 50 to 60 years and we’re cutting that in half,” he said. “It’s totally a win-win.” Forest geneticists were less sanguine about Living Carbon’s trees. Researchers typically assess trees in confined field trials before moving to large-scale plantings, said Andrew Newhouse, who directs the engineered chestnut project at SUNY College of Environmental Science and Forestry. “Their claims seem bold based on very limited real-world data,” he said. Steve Strauss, a geneticist at Oregon State University, agreed with the need to see field data. 
“My experience over the years is that the greenhouse means almost nothing” about the outdoor prospects of trees whose physiology has been modified, he said. “Venture capitalists may not know that.” Strauss, who previously served on Living Carbon’s advisory board, has grown some of the company’s seedlings since last year as part of a field trial funded by the company. He said the trees were growing well, but it was still too early to tell whether they were outpacing unmodified trees. Even if they do, Living Carbon will face other challenges unrelated to biology. While outright destruction of genetically engineered trees has dwindled thanks in part to tougher enforcement of laws against acts of ecoterrorism, the trees still prompt unease in the forestry and environmental worlds. Major organizations that certify sustainable forests ban engineered trees from forests that get their approval; some also prohibit member companies from planting engineered trees anywhere. To date, the only country where large numbers of genetically engineered trees are known to have been planted is China. The U.S. Forest Service, which plants large numbers of trees every year, has said little about whether it would use engineered trees. To be considered for planting in national forests, which make up nearly one-fifth of U.S. forestland, Living Carbon’s trees would need to align with existing management plans that typically prioritize forest health and diversity over reducing the amount of atmospheric carbon, said Dana Nelson, a geneticist with the service. “I find it hard to imagine that it would be a good fit on a national forest,” Nelson said. Living Carbon is focusing for now on private land, where it will face fewer hurdles. Later this spring it will plant poplars on abandoned coal mines in Pennsylvania. By next year Hall and Mellor hope to be putting millions of trees in the ground. 🌳🌳🌳 To produce an income stream not reliant on venture capital, the company has started marketing credits based on carbon its trees will soak up. But carbon credits have come under fire lately and the future of that industry is in doubt. And to head off environmental concerns, Living Carbon’s modified poplar trees are all female, so they won’t produce pollen. While they could be pollinated by wild trees and produce seeds, Mellor says they’re unlikely to spread into the wild because they don’t breed with the most common poplar species in the Southeast. They’re also being planted alongside native trees like sweet gum, tulip trees and bald cypress, to avoid genetically identical stands of trees known as monocultures; non-engineered poplars are being planted as experimental controls. Hall and Mellor describe their plantings as both pilot projects and research trials. Company scientists will monitor tree growth and survival. Such measures are unlikely to assuage opponents of genetically modified organisms. Last spring, the Global Justice Ecology Project argued that Living Carbon’s trees could harm the climate by “interfering with efforts to protect and regenerate forests.” “I’m very shocked that they’re moving so fast” to plant large numbers of modified trees in the wild, said Anne Petermann, the organization’s executive director. The potential risks to the greater ecosystem needed to be better understood, she said. Ort of the University of Illinois dismissed such environmental concerns. But he said investors were taking a big chance on a tree that might not meet its creators’ expectations. “It’s not unexciting,” he said. 
“I just think it’s uber high risk.” © 2023 The New York Times Company
Environmental Science
Credit: Patrick Hendry on Unsplash

A new study has mapped over 57,000 sites in the US that are likely contaminated with per- and polyfluoroalkyl substances (PFAS), also known as “forever chemicals”. The study is published in Environmental Science & Technology Letters.

PFAS are a risk to human health

PFAS are a group of over 12,000 chemicals first produced in the late 1930s. Referred to as “forever chemicals” due to their durability, they are useful in non-stick cookware coatings, waterproof clothing and firefighting foams. However, these qualities also make them almost impossible to destroy, meaning that PFAS can accumulate in the environment, contaminating water, air and even our blood. Research in recent decades has revealed numerous health implications of PFAS, such as increased risk of cancer, decreased fertility, low birth weight and high cholesterol levels. Despite this, there is a notable lack of data regarding the extent and severity of PFAS contamination in the US. Federal efforts to test for PFAS in the US have focused on large drinking water systems, excluding private wells, and have used high reporting thresholds, meaning that the scale of known PFAS contamination is likely an underestimate of actual contamination. Dr. Alissa Cordner, co-director of the PFAS Project Lab and senior author of the study, explained the importance of monitoring for PFAS contamination: “Testing for PFAS is essential in order to understand the scope of PFAS contamination across the globe, and it also is necessary to protect public health in specific communities. There can be disincentives for PFAS testing – for example, testing is expensive, there are currently no federal regulatory levels for PFAS in drinking water and so it's not always clear what action should be taken when PFAS are detected, and remediation can be extremely costly. However, we also know that PFAS appear to be toxic at extremely low levels of exposure, so it is essential that more testing is done to identify where PFAS contamination poses a risk to the public.” The researchers in the current study set out to build a map of presumed PFAS contamination sites in the US, based on likely sources of PFAS, in the absence of costly large-scale testing. Cordner says these findings “will let decision-makers prioritize locations for future testing and regulatory action.”

Mapping presumed contamination

The researchers combined high-quality geocoded information (i.e., with precise geographical coordinates) on sites that are likely contaminated with PFAS to build the presumptive contamination map. “We used already published scientific studies and government research programs that have identified specific types of locations that were sources of PFAS contamination – for example, extensive testing at Department of Defense sites suggests that military bases are presumptive sites of PFAS contamination because of the use of fluorinated firefighting foams,” explained Cordner. “We then gathered all of the publicly available, high-quality, nationwide data we could on the different types of presumptive PFAS contamination sites, and we kept in only the data that was specific enough in terms of its geolocation data that we could use it to create a nationwide map.” In total, 57,412 sites were identified: 49,145 industrial facilities, 4,255 wastewater treatment plants, 3,493 military sites and 519 major airports.
This breakdown reflects how PFAS are used and released: in industrial manufacturing, in extinguishing fuel-based fires, and in the contaminated effluent left after wastewater treatment. An interactive web version of the map is available online. These findings were validated by cross-referencing them against a list of 503 sites that are known to be contaminated with PFAS. 35% of these sites were captured by the map, and a further 37% were “expected” sites that could not be mapped due to limitations of the data, putting the total accuracy at 72%.

Is this still an underestimation?

Nevertheless, the authors state that this map of presumptive PFAS-contaminated sites remains a vast underestimate. For example, around 23% of the identified industrial facilities had to be excluded as they lacked geographical information. There is also a lack of publicly available information on other locations where PFAS are commonly used, such as firefighting training sites, fuel storage facilities and locations of railroad or airplane crashes. “[Our] model is designed to be conservative, so there are many types of industrial facilities that are possible sources of PFAS contamination, but we didn't feel confident that every single one of those facilities was a probable source of contamination – for example, dry cleaners, ski shops, septage businesses, etc. As more testing is done related to these industrial facilities, we may expand the types of industrial facilities included in our model,” Cordner elaborates. The researchers hope that this map of presumptive PFAS-contaminated sites can fill in the gaps left by the laborious nature of PFAS testing and identify hotspots to drive future monitoring and regulation of contamination. In the future, Cordner and colleagues hope that similar maps of presumptive PFAS exposure or PFAS-mediated illness could also be produced.
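For readers curious how the validation percentages above fit together, here is a toy reconstruction in pandas. The 503-site list and the 35%/37%/72% breakdown come from the article; the dataframe layout, the status labels and the simple tabulation are invented for illustration (the study matched geocoded locations, not a pre-assigned status column).

```python
import pandas as pd

# Hypothetical reconstruction of the validation step: compare a list of
# 503 known PFAS-contaminated sites against the presumptive map.
# Counts are chosen so the shares match the percentages in the article.

known = pd.DataFrame({
    "site_id": range(503),
    "status": ["mapped"] * 176               # captured by the map
            + ["expected_unmappable"] * 186  # expected, but data too coarse
            + ["missed"] * 141,              # not accounted for
})

mapped = (known["status"] == "mapped").mean()
expected = (known["status"] == "expected_unmappable").mean()
print(f"captured by map: {mapped:.0%}")            # ~35%
print(f"expected but unmappable: {expected:.0%}")  # ~37%
print(f"total accounted for: {mapped + expected:.0%}")  # ~72%
```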
Environmental Science
SINGAPORE, Aug 8 (Reuters) - (This Aug 8 story has been corrected to clarify that the quote was from Anna Hogg, not Caroline Holmes, in paragraphs 3 and 4) Sea ice in the Antarctic region has fallen to a record low this year as a result of rising global temperatures and there is no quick fix to reverse the damage done, scientists said on Tuesday in a new study of the impact of climate change on the continent. The continent's minimum summer ice cover, which last year dipped below 2 million square kilometres (772,000 square miles) for the first time since satellite monitoring began in 1978, fell further to a new low in February, according to a study published in the journal Frontiers in Environmental Science. "It's going to take decades if not centuries for these things to recover. There's no quick fix to replacing this ice," said Anna Hogg, a professor at the University of Leeds and one of the study's co-authors, referring to melting icebergs and shelves. "It will certainly take a long time, even if it's possible," she told a briefing with journalists. This year's sea ice minimum is 20% lower than the average over the last 40 years, equivalent to a sea ice loss nearly 10 times the area of New Zealand, said Tim Naish, director of the Antarctic Research Centre at New Zealand's Victoria University of Wellington, who was not a participant in the study. "In some cases we are getting close to tipping points, which once crossed will lead to irreversible changes with unstoppable consequences for future generations," Naish said. Global warming driven by the burning of fossil fuels has made Antarctica more vulnerable to extreme events and the impact is "virtually certain" to get worse, the study said. Climate change will "lead to increases in the size and frequency" of heatwaves, ice shelf collapses and declines in sea ice, it said, drawing on recent evidence from scientific studies of the Antarctic ocean, atmosphere, cryosphere and biosphere. The precise impact of climate change on Antarctica and the surrounding ocean has been uncertain and scientists have struggled to measure how much global warming is affecting the thickness of Antarctic ice. But from phenomena such as the rapid decline in sea ice, it is "scientifically reasonable" to assume that extreme events are going to intensify as global temperatures rise, said Martin Siegert, a glaciologist at the University of Exeter and another co-author. Last year, an "atmospheric river" originating from Australia drove subtropical heat and moisture into the continent, causing unprecedented temperatures up to 38.5 Celsius (69.3 Fahrenheit) above normal, the largest variance from the norm the world has ever experienced. Siegert described the temperature increase as "absolutely astonishing", adding that if it had occurred during the Antarctic summer, instead of winter, it would have triggered melting on the surface of the East Antarctic ice sheet, which has so far been spared from melting. "Antarctica is fragile as an environment, but extreme events test that fragility," he said. "What we're deeply concerned about is the increase in intensity and frequency of extreme events and the cascading influences that they have in other areas."
Environmental Science
Natural gas stoves and ovens can leak harmful chemicals inside homes even when they're not in use. About 47 million U.S. households use such appliances, according to the federal Energy Information Administration. A study published Thursday in the journal Environmental Science & Technology found at least 12 hazardous air pollutants emitted from gas stoves in California, including benzene, a chemical known to cause cancer in some people with long-term exposure. The researchers behind the study, a group from the nonprofit energy research institute PSE Healthy Energy, took gas samples from 159 residential stoves in 16 counties throughout California. They found benzene in 99% of the samples. They also calculated a household's benzene exposure based on the size of the kitchen, the room's ventilation level, how much of the chemical was present and whether the stoves were leaking when they were turned off. The results showed that the leakiest stoves exposed people to indoor concentrations of benzene that were up to seven times higher than the safe exposure level set by the California Environmental Protection Agency. Over time, such exposure might increase a person's risk of blood disorders or reproductive issues, although scientists are still learning about how benzene affects health. The chemical has more conclusively been linked to leukemia, multiple myeloma and non-Hodgkin lymphoma. The World Health Organization has said there's no safe level of benzene exposure when it comes to cancer risk. But benzene isn't the only worrisome chemical that comes from stoves, nor are the emissions limited to California. Decades of research have suggested that gas stoves are a source of indoor air pollution. "Anywhere natural gas is leaked, hazardous air pollutants are likely being released, as well," a co-author of the new study, Kelsey Bilsback, a senior scientist at PSE Healthy Energy, said on a media call. Previous research has shown that gas stoves in California homes emit nitrogen oxides, which can irritate the eyes, the nose, the throat or the lungs and can cause some people to feel tired, dizzy or short of breath. Another co-author of the study, Drew Michanowicz, previously identified 21 hazardous air pollutants from gas stoves and outdoor gas lines at Boston homes. Several of the pollutants were volatile organic compounds: a large group of chemicals, including benzene, that may increase the risk of certain cancers, birth defects or cognitive impairment among people with long-term exposure. But Michanowicz said some of the lowest concentrations of pollutants in California were still about 10 times higher than the averages from his Boston study. The researchers aren't sure why concentrations vary from one location to the next. "We think it has something to do with where the gas is being sourced from," said Eric Lebel, another study co-author. "California has two major pipelines where it imports gas from: one coming from the Rockies and then one coming in from the north from Canada." Bilsback said benzene could enter a gas supply through a leak in one of the pipelines or at a storage facility where gas is held. From there, it could be released into the kitchen through a leaky stove. The presence of benzene in California homes was consistent regardless of their gas providers or brands of appliances, Lebel said.
But stoves in the North San Fernando and Santa Clarita valleys had the highest levels, followed by those in greater Los Angeles. "Benzene emissions from a gas stove, even while it's off, can produce in some cases concentrations of benzene in your house that are equivalent to living with a smoker," Lebel said. Andrea De Vizcaya Ruiz, an associate professor of environmental and occupational health at the University of California, Irvine, who wasn't involved in the study, said that people can get exposed to small amounts of benzene when they fill up their cars' gas tanks or sit by a fireplace, but that exposure to high amounts over long periods of time is worrisome. "It's one of the most direct chemicals that induces cancer, because it transforms the cells in the bone marrow," she said. Pregnant women, infants and young children may be particularly susceptible to adverse health outcomes from long-term benzene exposure, De Vizcaya Ruiz said. But Lebel said it can be hard to tell whether your home has a leak. Gas companies add compounds to gas that give off a rotten egg smell so major leaks don't go undetected, but the scent usually isn't noticeable unless gas is leaking at high concentrations. In that case, De Vizcaya Ruiz said, people may also start to vomit, feel drowsy or confused or develop headaches. "If you ever smell gas, you should immediately leave your house, call the gas company," Lebel said. De Vizcaya Ruiz said opening windows can better ventilate rooms in the short term, which helps mitigate potential exposure, but it won't eliminate the risk or the root cause. People in California may want to consider calling their gas companies as a precaution to make sure there's no leak, she added. One of the simplest fixes, Lebel said, is to replace a gas stove with an electric one. "Just having a gas appliance in your house can be a potential health risk," he said. "Eliminating gas altogether is the only sure way to completely eliminate that risk."
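The exposure arithmetic described above (kitchen size, ventilation level, amount of benzene present) can be illustrated with a generic steady-state single-zone box model of the kind commonly used for indoor air. This is not the PSE Healthy Energy team's actual model, and every input value below is invented purely for illustration.

```python
# Generic steady-state single-zone model for an indoor pollutant:
#   concentration C = E / (V * ACH)
# where E is the emission rate, V the room volume and ACH the air
# changes per hour. All values below are invented for illustration.

E_ug_per_h = 50.0  # benzene emission rate from a leaky stove, ug/h (assumed)
V_m3 = 30.0        # kitchen volume, m^3 (assumed)
ach = 0.5          # air changes per hour (assumed, poorly ventilated room)

c_ug_per_m3 = E_ug_per_h / (V_m3 * ach)
print(f"steady-state benzene ~{c_ug_per_m3:.1f} ug/m^3")

# Doubling the ventilation rate halves the steady-state concentration,
# which is why opening windows mitigates (but does not remove) the risk:
print(f"with ACH=1.0: ~{E_ug_per_h / (V_m3 * 1.0):.1f} ug/m^3")
```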
Environmental Science
Researchers at the University of Toronto, Indiana University and University of Notre Dame have detected levels of toxic PFAS chemicals, short for per- and polyfluoroalkyl substances, for the first time in Canadian fast-food packaging, specifically water- and grease-repellent paper alternatives to plastic. Published today in Environmental Science & Technology Letters, the findings suggest that food packaging exposes people directly to PFAS, which have been linked to serious health effects such as increased cancer risk and immune system damage, by contaminating the food they eat. Further, once discarded packaging enters waste streams, PFAS enter the environment, where these “forever chemicals” will never break down. These health and environmental risks have prompted 11 U.S. states to ban PFAS from most food packaging, and two major restaurant chains to commit to becoming PFAS-free by 2025. “As Canada restricts single-use plastics in food-service ware, our research shows that what we like to think of as the better alternatives, such as paper wrappers and compostable bowls, are not so safe and ‘green’ after all. In fact, they may harm our health and the environment—from our air to our drinking water—by providing a direct route to PFAS exposure,” says Miriam Diamond, professor in the Department of Earth Sciences and School of the Environment at the University of Toronto and study co-author. For the study, the researchers collected 42 paper-based wrappers and bowls from fast-food restaurants in Toronto and tested them for total fluorine, an indicator of PFAS. They then completed a detailed analysis of eight of those samples with high levels of total fluorine. Fibre-based moulded bowls, which are marketed as “compostable”, had PFAS levels three to 10 times higher than doughnut and pastry bags. PFAS are added to these bowls and bags as a water- and grease-repellent. PFAS are a complex group of about 9,000 manufactured chemicals, few of which have been studied for their toxicity. A PFAS that is known to be toxic, 6:2 FTOH (6:2 fluorotelomer alcohol), was the most abundant compound detected in these samples. Other PFAS that were commonly found in all the Canadian fast-food packaging tested can transform into this compound, thereby adding to a consumer’s exposure to it. The researchers also detected several PFAS for the first time in food packaging, showing how difficult it is to track the presence of this large family of compounds. Critically, the researchers found that the concentration of PFAS declined by up to 85 per cent after storing the products for two years, contradicting claims that polymeric PFAS (a type composed of larger molecules) do not degrade and escape from products. The release of PFAS from food packaging into indoor air presents another opportunity for human exposure to these chemicals. “The use of PFAS in food packaging is a regrettable substitution of trading one harmful option—single-use plastics—for another. We need to strengthen regulations and push for the use of fibre-based food packaging that doesn’t contain PFAS,” says Diamond.

Journal: Environmental Science & Technology Letters
Method of Research: Data/statistical analysis
Subject of Research: Not applicable
Article Title: Per- and polyfluoroalkyl substances in Canadian fast food packaging
Article Publication Date: 28-Mar-2023
COI Statement: The authors declare no competing financial interest.
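One back-of-envelope way to read the two-year storage decline reported above: if the loss were first-order (a simplifying assumption made here purely for illustration, not a claim from the study), an 85% drop over two years would imply a half-life of under a year.

```python
import math

# If the ~85% two-year decline above followed first-order kinetics
# (an assumption for illustration only):
#   C(t) = C0 * exp(-k * t)  =>  k = ln(C0 / C(t)) / t

k = math.log(1 / 0.15) / 2.0  # per year, from 85% loss over 2 years
half_life = math.log(2) / k   # years
print(f"k ~ {k:.2f} /yr, half-life ~ {half_life:.2f} yr")  # ~0.95 /yr, ~0.73 yr
```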
Environmental Science
NEWPORT, Ore. – Dynamic and changing Arctic Ocean conditions likely caused three major mortality events in the eastern North Pacific gray whale population since the 1980s, a new study has found. During each of these die-offs, including one that began in 2019 and is ongoing, the gray whale population was reduced by up to 25% over just a few years, said Joshua Stewart, an assistant professor with Oregon State University’s Marine Mammal Institute and the study’s lead author. “These are extreme population swings that we did not expect to see in a large, long-lived species like gray whales,” Stewart said. “When the availability of their prey in the Arctic is low, and the whales cannot reach their feeding areas because of sea ice, the gray whale population experiences rapid and major shocks.” “Even highly mobile, long-lived species such as gray whales are sensitive to climate change impacts. When there are sudden declines in the quality of prey, the population of gray whales is significantly affected.” The findings were just published in the journal Science. Eastern North Pacific gray whales are one of the few populations of large whales that may have recovered to numbers similar to those that existed prior to commercial whaling. As the population has approached levels close to what its Arctic feeding areas can support, the whales have likely become more sensitive to environmental conditions due to competition for limited resources, Stewart said. The unfavorable Arctic conditions that led to two die-offs in the 1980s and the 1990s were not permanent, and the population quickly rebounded as conditions improved. “It turns out we didn’t really know what a healthy baleen whale population looks like when it isn’t heavily depleted by human impacts,” he said. “Our assumption has generally been that these recovering populations would hit their environmental carrying capacities and remain more or less steady there. But what we’re seeing is much more of a bumpy ride in response to highly variable and rapidly changing ocean conditions.” Eastern North Pacific gray whales, which currently number about 14,500, migrate more than 12,000 miles each year along the Pacific Coast, from the warm waters off the coast of Baja California, Mexico, in the winter months to the cold, productive waters of the Arctic to feed in the summer months. Researchers at the National Oceanic and Atmospheric Administration Southwest Fisheries Science Center in La Jolla, California, have been conducting long-term population monitoring studies of these whales since the 1960s, tracking abundance, birth and death rates and monitoring body condition using aerial images. This extensive research has made this population of gray whales the most closely studied large whale population on the planet, providing a unique window into the population dynamics of the species. “This research demonstrates the value of long-term data in understanding not only the species under study but also the environment it depends on,” said Dave Weller, director of the Southwest Fisheries Science Center’s Marine Mammal and Turtle Division. “When we began collecting data on gray whales in 1967, little did we realize the important role they would play in understanding the effects of climate change on an iconic sentinel species in the Pacific.
This research would not have been possible without our reliable long-term record.” The eastern North Pacific gray whale population, which was hunted to near extinction before a whaling moratorium was enacted, has been viewed as a conservation success story because of the population’s rapid recovery in the post-whaling era. In 2019, when a high number of gray whale strandings began occurring along the Pacific coast, Stewart, a researcher at the Southwest Fisheries Science Center at the time, began looking more closely at the long-term data to see if he could learn more about what might be driving the unusual mortality event. By combining the long-term data sets on the gray whale population with extensive environmental data from the Arctic, Stewart and his collaborators determined that the two “Unusual Mortality Events” declared by NOAA in 1999 and 2019 were tied to both sea ice levels in the Arctic and the biomass of seafloor-living crustaceans that gray whales target for food. Stewart also identified a third die-off in the 1980s that followed a similar pattern but was not associated with higher numbers of strandings, likely due to lower reporting rates of stranded whales prior to the 1990s. The researchers found that years with less summer sea ice in the gray whales’ Arctic feeding areas provided increased foraging opportunities that benefited the population. However, in the long term, decreasing sea ice cover, a result of rapid and accelerating climate change, most likely will not be beneficial to gray whales. Benthic amphipods, the calorie-rich prey that gray whales prefer, are also sensitive to sea ice cover. Algae that grow underneath sea ice sink to the seafloor, enriching the amphipod population. Less ice leads to less algae reaching the seafloor, warmer water that favors smaller benthic crustaceans and faster currents that reduce habitat for gray whales’ preferred prey. “With less ice, you get less algae, which is worse for the gray whale prey,” Stewart said. “All of these factors are converging to reduce the quality and availability of the food they rely on.” For the gray whales, less prey availability ultimately leads to die-offs. The most recent event is still considered ongoing and has continued significantly longer than the two earlier events. “We are in uncharted territory now. The two previous events, despite being significant and dramatic, only lasted a couple of years,” Stewart said. “The most recent mortality event has slowed and there are signs things are turning around, but the population has continued to decline. One reason it may be dragging on is the climate change component, which is contributing to a long-term trend of lower-quality prey.” Gray whales have lived through hundreds of thousands of years of environmental change and have adapted over that time to changing conditions, making extinction due to climate change unlikely, Stewart said. “I wouldn’t say there is a risk of losing gray whales due to climate change,” he said. “But we need to think critically about what these changes might mean in the future. An Arctic Ocean that has warmed significantly may not be able to support 25,000 gray whales like it has in the recent past.” Coauthors of the study include Trevor W. 
Joyce of Ocean Associates; John Durban of the Marine Mammal Institute and Sealife Response, Rehabilitation and Research; John Calambokidis of Cascadia Research Collective; Deborah Fauquier of the NOAA Fisheries Office of Protected Resources; Holly Fearnbach of SR3; Jacqueline Grebmeier of the University of Maryland Center for Environmental Science; Morgan Lynn and Wayne Perryman of the Southwest Fisheries Science Center, NOAA Fisheries; Manfredi Manizza of the Scripps Institution of Oceanography, University of California, San Diego; and Tim Tinker of Nhydra Consulting and University of California, Santa Cruz. The Marine Mammal Institute is part of Oregon State’s College of Agricultural Sciences and is based at Hatfield Marine Science Center in Newport. About OSU's Hatfield Marine Science Center: The center is a research and teaching facility located in Newport, Ore., on the Yaquina Bay estuary, about one mile from the open waters of the Pacific Ocean. It plays an integral role in programs of marine and estuarine research and instruction, as a laboratory serving resident scientists, as a base for far-ranging oceanographic studies and as a classroom for students. In addition to Oregon State researchers and students, its campus includes research activities and facilities from five different state and federal agencies.
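The dynamic the story describes, sharp swings in a recovered population whose environmental ceiling moves with prey conditions, can be caricatured with a toy logistic model. Every parameter value below is invented; this is an illustration of the concept, not the study's actual population model.

```python
import random

# Toy discrete-time logistic growth with a fluctuating carrying capacity:
# a population sitting near its environmental ceiling swings sharply when
# prey conditions (and hence the ceiling K) change. Parameters invented.

random.seed(1)
n = 25000.0  # starting population, roughly pre-decline scale (assumed)
r = 0.06     # intrinsic growth rate per year (assumed)

for year in range(2000, 2026):
    # ~20% of years are "bad prey" years with a much lower ceiling
    k = 26000.0 if random.random() < 0.8 else 18000.0
    n += r * n * (1 - n / k)  # logistic update; shrinks when n > k
    if year % 5 == 0:
        print(year, round(n))
```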
Environmental Science
"This is one of the least smelly carcasses,” said Todd Katzner, peering over his lab manager’s shoulder as she sliced a bit of flesh from a dead pigeon lying on a steel lab table. The specimens that arrive at this facility in Boise, Idaho, are often long dead, and the bodies smell, he said, like “nothing that you can easily describe, other than yuck.” A wildlife biologist with the US Geological Survey, a government agency dedicated to environmental science, Katzner watched as his lab manager rooted around for the pigeon’s liver and then placed a glossy maroon piece of it in a small plastic bag labeled with a biohazard symbol. The pigeon is a demonstration specimen, but samples, including flesh and liver, are ordinarily frozen, cataloged, and stored in freezers. The feathers get tucked in paper envelopes and organized in filing boxes; the rest of the carcass is discarded. When needed for research, the stored samples can be processed and sent to other labs that test for toxicants or conduct genetic analysis. Most of the bird carcasses that arrive at the Boise lab have been shipped from renewable energy facilities, where hundreds of thousands of winged creatures die each year in collisions with turbine blades and other equipment. Clean energy projects are essential for confronting climate change, said Mark Davis, a conservation biologist at the University of Illinois at Urbana-Champaign. But he also emphasized the importance of mitigating their effects on wildlife. “I’m supportive of renewable energy developments. I’m also supportive of doing our best to conserve biodiversity,” Davis said. “And I think the two things can very much coexist.” To this end, Katzner, Davis, and other biologists are working with the renewable energy industry to create a nationwide repository of dead birds and bats killed at wind and solar facilities. The bodies hold clues about how the animals lived and died, and that could help scientists and project operators understand how to reduce the environmental impact of clean energy installations, Davis said. The repository needs sustained funding and support from industry partners to supply the specimens. But the collection’s wider potential is vast, Davis added. He, Katzner, and other stakeholders hope the carcasses will offer a wide array of wildlife biologists access to the animal samples they need for their work, and perhaps even provide insights into future scientific questions that researchers haven’t thought yet to ask. In 1980, California laid the groundwork for one of the world’s first large-scale wind projects when it designated more than 30,000 acres east of San Francisco for wind development, on a stretch of land called the Altamont Pass. Within two decades, companies had installed thousands of wind turbines there. But there was a downside: While the sea breeze made Altamont ideal for wind energy, the area was also well used by nesting birds. Research suggested they were colliding with the turbines’ rotating blades, leading to hundreds of deaths among red-tailed hawks, kestrels, and golden eagles.
Environmental Science
A new report submitted by the Indian government to the United Nations offers a glimmer of hope when it comes to the fight against climate change: Over the past 14 years, the world's most populous nation has reduced the emissions intensity of its economy by a larger-than-expected 33%. According to the report filed under the United Nations Framework Convention on Climate Change, India is well on its way to achieving the goal it established for itself in the Paris climate accord: to reduce its carbon emissions intensity by 45% from 2005 levels by 2030. India's position stands in stark contrast with that of other nations, including the United States, which under President Donald Trump adopted explicitly anti-science policies on both the COVID-19 pandemic and climate change. This included yanking America out of the Paris climate accord and removing all information about climate change from the Environmental Protection Agency's website. The need for action on climate change was reinforced throughout the summer of 2023. July 2023 was the hottest month in recorded history, with residents of Phoenix, Arizona suffering through weeks of temperatures at or above 110°F (43.3°C). Thousands of temperature records all over the planet have been broken since the heatwaves began on June 10. They have contributed to wildfires, droughts and other extreme weather events all over the planet. "It's a 'new abnormal' and it is now playing out in real time," Dr. Michael E. Mann, a professor of Earth and Environmental Science at the University of Pennsylvania, told Salon last month.
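A note on the arithmetic: emissions intensity is a ratio of emissions to GDP, so intensity can fall even while absolute emissions rise. A minimal sketch of how such a percentage is computed, using invented figures purely for illustration (not India's actual data):

    def intensity(emissions_tonnes: float, gdp_usd: float) -> float:
        """Emissions intensity: tonnes of CO2 per dollar of GDP."""
        return emissions_tonnes / gdp_usd

    # Invented baseline and later-year figures, for illustration only:
    i_2005 = intensity(1.2e9, 0.8e12)  # 1.2 Gt CO2 on a $0.8 trillion economy
    i_2019 = intensity(2.4e9, 2.9e12)  # emissions doubled, but GDP grew faster
    reduction = 1 - i_2019 / i_2005
    print(f"intensity fell by {reduction:.0%}")  # ~45% with these made-up numbers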
Environmental Science
The next time you crack your back door to let your cat outside for its daily adventure, you may want to think again. For a cat, the outdoors is filled with potential hazards: the risk of catching and transmitting diseases, and the drive to hunt and kill wildlife, which has been shown to reduce native animal populations and degrade biodiversity. A new study by University of Maryland researchers has concluded that humans largely determine where cats roam, and that these risks can be significantly reduced by keeping cats indoors. The study’s analysis used data from the D.C. Cat Count, a Washington, D.C.-wide survey that deployed 60 motion-activated wildlife cameras spread across 1,500 sampling locations. The cameras recorded what cats preyed on and demonstrated how they overlapped with native wildlife, which helped researchers understand why cats and other wildlife are present in some areas, but absent from others. The paper was published on November 21 in the journal Frontiers in Ecology and Evolution. “We discovered that the average domestic cat in D.C. has a 61% probability of being found in the same space as raccoons -- America’s most prolific rabies vector -- 61% spatial overlap with red foxes, and 56% overlap with Virginia opossums, both of which can also spread rabies,” said Daniel Herrera, lead author of the study and Ph.D. student in UMD’s Department of Environmental Science and Technology (ENST). “By letting our cats outside we are significantly jeopardizing their health.” In addition to the risk of being exposed to diseases that they can then bring indoors to the humans in their families (like rabies and toxoplasmosis), outdoor cats threaten native wildlife. The D.C. Cat Count survey demonstrated that cats that are allowed to roam outside also share the same spaces with and hunt small native wildlife, including grey squirrels, chipmunks, cottontail rabbits, groundhogs, and white-footed mice. By hunting these animals, cats can reduce biodiversity and degrade ecosystem health. “Many people falsely think that cats are hunting non-native populations like rats, when in fact they prefer hunting small native species,” explained Herrera. “Cats are keeping rats out of sight due to fear, but there really isn’t any evidence that they are controlling the non-native rodent population. The real concern is that they are decimating native populations that provide benefits to the D.C. ecosystem.” In general, Herrera found that the presence of wildlife is associated with tree cover and access to open water. On the other hand, the presence of cats decreased with those natural features but increased with human population density. He says that these associations run counter to arguments that free-roaming cats are simply stepping into a natural role in the ecosystem by hunting wildlife. “These habitat relationships suggest that the distribution of cats is largely driven by humans, rather than natural factors,” explained Travis Gallo, assistant professor in ENST and advisor to Herrera. “Since humans largely influence where cats are on the landscape, humans also dictate the degree of risk these cats encounter and the amount of harm they cause to local wildlife.” Herrera encourages pet owners to keep their cats indoors to avoid potential encounters between their pets and native wildlife.
His research notes that feral cats are equally at risk of contracting diseases and causing native wildlife declines, and they should not be allowed to roam freely where the risk of overlap with wildlife is high – echoing previous calls for geographic restrictions on where sanctioned cat colonies can be established or cared for. Journal: Frontiers in Ecology and Evolution. Method of Research: Observational study. Subject of Research: Animals. Article Title: Spatial and temporal overlap of domestic cats (Felis catus) and native urban wildlife. Article Publication Date: 21-Nov-2022.
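The overlap percentages quoted above come from modeling where species co-occur across camera sites. The study itself fit occupancy-style models; the sketch below shows only a naive co-detection version of the idea, with a fabricated detection matrix:

    import numpy as np

    # Fabricated site-by-species detections (True = detected at least once).
    # A crude stand-in for the occupancy modeling the study actually used.
    rng = np.random.default_rng(0)
    sites = 100
    cat = rng.random(sites) < 0.6      # hypothetical cat detections
    raccoon = rng.random(sites) < 0.5  # hypothetical raccoon detections

    # Fraction of cat-occupied sites where raccoons were also detected:
    overlap = (cat & raccoon).sum() / cat.sum()
    print(f"cat-raccoon co-detection: {overlap:.0%}")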
Environmental Science
Antarctica risks 'cascades of extreme events' as Earth warms, study says Extreme weather in Antarctica, including ocean heat waves and ice loss, is set to become more intense unless urgent policy action reduces the burning of fossil fuels, a new study has found, the latest to sound the alarm on the damage climate change is unleashing. "It is virtually certain that continued greenhouse gas emissions will lead to increases in the size and frequency of events" as the world gets dangerously close to exceeding the 1.5C warming limit laid out in the 2015 Paris Agreement, according to a study published Tuesday in the journal Frontiers in Environmental Science. "We cannot rule out future cascades where extreme events may have wide-ranging linked impacts in multiple realms." Scientists have become increasingly alarmed by how the Antarctic ice has struggled to grow back after hitting an all-time low in February, a deviation so extreme from the norm that it has been dubbed a "six sigma event," or a once-in-7.5-million-years phenomenon. The Arctic, too, is expected to be ice-free in summers by 2030, underscoring the rapid pace at which global warming is damaging the planet's ecosystems. The rising temperatures have also led to the hottest June and July on record, with wildfires and heat waves ravaging Canada and several European countries this year. South America is also grappling with unprecedented winter temperatures, with readings in parts of Chile surpassing 30C. The world's most extreme heat wave was recorded in east Antarctica, where temperatures hit 38.5C above the seasonal norm in 2022, according to the study, which reviewed climate extremes in Antarctica and the Southern Ocean. The study's authors also warn that low sea ice events may become more frequent and, similar to the Arctic, become self-perpetuating as more solar heat is absorbed and less is reflected back due to the reduced ice cover. The study concludes that Antarctica is likely to face considerable stress and damage in the coming decades. Twelve countries, including the U.K. and U.S., pledged to preserve the continent's fragile environment through the Antarctic Treaty in 1959; many more, including India and China, have since joined. The study says some countries risk breaching the terms of this agreement without urgent action to reduce emissions. "Nations must understand that by continuing to explore, extract and burn fossil fuels anywhere in the world, the environment of Antarctica will become ever more affected in ways inconsistent with their pledge," lead author Martin Siegert, a professor at the University of Exeter, said in a statement Tuesday. As the Antarctic sea ice melts, more areas of the continent may become accessible to ships, and this would also require careful management, as well as biosecurity measures, to protect vulnerable sites. "Antarctic change has global implications," said Siegert. "Reducing greenhouse gas emissions to net zero is our best hope of preserving Antarctica, and this must matter to every country—and individual—on the planet." More information: Martin J. Siegert et al, Antarctic extreme events, Frontiers in Environmental Science (2023). DOI: 10.3389/fenvs.2023.1229283. © 2023 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.
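For context on the "six sigma" label: a six-standard-deviation anomaly has a one-sided tail probability of roughly one in a billion under a normal distribution. The recurrence time derived from that depends on the sampling interval and distributional assumptions, which is why quoted figures such as 7.5 million years vary. A quick order-of-magnitude check (assuming scipy is available):

    from scipy.stats import norm

    p = norm.sf(6.0)       # P(Z > 6), about 9.9e-10 for a normal variable
    days = 1.0 / p         # expected wait if each day were an independent draw
    years = days / 365.25  # a strong simplification of real sea-ice statistics
    print(f"roughly once per {years / 1e6:.1f} million years")  # ~2.8 here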
Environmental Science
Thailand’s iconic, gas-guzzling tuk-tuks are being replaced by a greener, more energy-efficient model, offering travellers a more environmentally friendly way of getting around one of the world’s worst countries for air pollution. “The benefits are quite clear in terms of the environment”, says Krisada Kritayakirana, co-founder and CEO of start-up Urban Mobility Tech. “When you use traditional tuk-tuks, you can smell the gas and it sometimes could be unpleasant. With the electric tuk-tuks, basically you don’t have any noise and you don’t have any emission from tailpipes.” In 2021, levels of the pollutant PM2.5 in Thailand were four times higher than World Health Organization guidance. Inhalation of PM2.5 can cause respiratory problems and heart issues. In 2019, pollution was to blame for more than 31,000 deaths in the country. “In big cities like Bangkok [or] Chiang Mai, the key source would be the incomplete combustion in diesel engine vehicles,” says Kannika Thampanishvong, senior research fellow on climate change policy and green growth at the Thailand Development Research Institute. As of 2019, there were almost 20,000 tuk-tuks in Thailand, typically powered by internal combustion engines. The demand for electric tuk-tuks is growing, Kritayakirana says. In 2022, their number increased from 263 to 498, according to the Electric Vehicle Association of Thailand. The government has been encouraging EV uptake since 2015. Fariha Essaji, a UK mother who used to live in Bangkok, used the electric tuk-tuks offered by UMT’s ride-hailing app Muvmi because they were cheaper than taxis and bigger than usual tuk-tuks, so could accommodate a pushchair. But she says she also appreciated the electric element, given it made the vehicle less noisy. Muvmi plans to increase its fleet of electric tuk-tuks from 350 to more than 1,000 in Bangkok, climbing to 5,000 nationally within five years. Vichian Suksoir, deputy executive director of the innovation department at Thailand’s National Innovation Agency (NIA), says many hotels are now purchasing electric tuk-tuks to transport guests. But electric tuk-tuks alone won’t make much difference to air quality, says Dr Surat Bualert, an assistant professor focused on environmental science at Kasetsart University. “I think [electric] cars can improve air quality because the research and chemical analysis shows the major source of PM2.5 is transportation. Tuk-tuks cannot because the ratio of tuk-tuks is small compared to other vehicles.” About 10m cars are registered in Thailand and the government aims to make 30% of its auto production zero-emission by 2030. Suksoir says the NIA has earmarked 100m baht ($3m or £2.5m) for the development of EVs and related technology in 2023. But EVs aren’t “the holy grail of making travel more sustainable”, says Ewan Cluckie, founder of Tripseed. They rely on grid power to charge, and in Thailand only 14.9% of that power derives from renewable sources. “Until we can transition the charging capabilities entirely to renewable energy, there is still some reliance on fossil fuels,” he says. And until issues of cost and range can be addressed, a wider transition to electric tuk-tuks “will be held back”. Electric tuk-tuks can only travel short distances before needing to be charged, while the price to buy one is about 400,000 baht ($12,000), Cluckie says. The starting price for a traditional tuk-tuk is about 150,000 baht ($4,500). “People aren’t going to get much money secondhand for their two-stroke tuk-tuks, and certainly not enough to go out and buy a brand new electric tuk-tuk”, Cluckie says. “On a grander scale across the city, I think it’s going to take a lot of time”. But Kritayakirana says electric tuk-tuks will be cheaper in the long run because petrol is much costlier than a battery. He believes EVs, including electric tuk-tuks, “are the future”.
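The purchase-price gap Cluckie describes can be weighed against running costs with a simple break-even calculation. The per-kilometre figures below are invented placeholders, not measured Thai operating costs:

    # Break-even distance for the electric tuk-tuk's higher purchase price.
    price_gap_baht = 400_000 - 150_000  # electric minus petrol purchase price
    petrol_cost_per_km = 2.0            # baht/km, hypothetical
    electric_cost_per_km = 0.5          # baht/km, hypothetical

    break_even_km = price_gap_baht / (petrol_cost_per_km - electric_cost_per_km)
    print(f"break-even after {break_even_km:,.0f} km")  # ~166,667 km here

With these placeholders, the electric model pays for its premium only after a substantial distance, which is consistent with Kritayakirana's claim that savings accrue in the long run rather than immediately.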
Environmental Science
Scientists from the Massachusetts Institute of Technology developed a mobile desalination unit that weighs less than 10 kilograms and can create clean drinking water without the need for pumps or filters. The remarkable device is the result of a decade-long journey working on the physics behind the underlying desalination processes. The device is the size of an average suitcase, and it needs so little power (approximately 20 Wh per liter of seawater) that a tiny solar panel can energize it. Another benefit of the unit is that it does not need filters like other small desalination devices. This means that the need for maintenance is significantly reduced, making it ideal for deployment in remote locations like islands or cargo ships. Removing the need for a filter: According to Junghyo Yoon (first author of the underlying research paper), most conventional portable desalination systems need high-pressure pumps to drive water through filters, which are challenging to reduce in size without affecting the device's energy efficiency. Instead of filters, the team used a method known as ion concentration polarization (ICP), which was developed by a group led by Professor Jongyoon Han (senior author of the paper) more than ten years ago. Rather than filtering water, the ICP method applies an electric field to membranes above and below a water channel. Negatively and positively charged particles, like viruses, salt molecules, and bacteria, are repelled by the membranes as they pass by. The charged particles are channeled into an additional stream of water, which is released later. Both suspended and dissolved particles are removed throughout the process, allowing clean water to travel through the channel. ICP consumes less power than other procedures since it merely needs a low-pressure pump. However, ICP does not always remove all of the salts floating in the channel. Therefore, the team used a second method known as electrodialysis to eliminate any residual salt ions. Machine learning was used to uncover the best combination of ICP and electrodialysis modules. The optimum configuration incorporates a two-stage ICP procedure, with water flowing through a total of six modules in the first stage and then through three in the second stage, followed by a single electrodialysis process. This reduced energy usage while guaranteeing the process remains self-cleaning. Although some charged particles can be trapped on the ion exchange membrane, they are easily removed by simply reversing the polarity of the electric field. Future developments: In the future, Yoon aims to further improve the efficiency and usability of the device, while Han will focus on applying the lessons learned beyond simple desalination, developing technology to detect contamination in drinking water more quickly. As always, if you are interested in more details about the device and the underlying technology, be sure to check out the study published in the journal Environmental Science & Technology, listed below. Sources and further reading: Massachusetts Institute of Technology (MIT); "Portable Seawater Desalination System for Generating Drinkable Water in Remote Locations" (Environmental Science & Technology).
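The reported figure of roughly 20 Wh per liter makes the solar-panel claim easy to sanity-check. The panel wattage and sun hours below are assumptions for illustration, not the prototype's actual specification:

    ENERGY_PER_LITER_WH = 20  # energy figure reported for the prototype
    panel_watts = 50          # hypothetical small solar panel
    sun_hours = 5             # hypothetical full-sun hours per day

    daily_energy_wh = panel_watts * sun_hours
    liters_per_day = daily_energy_wh / ENERGY_PER_LITER_WH
    print(f"about {liters_per_day:.1f} L of drinking water per day")  # 12.5 L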
Environmental Science
Low-income areas shown to experience hotter temperatures in L.A. county In recent years, the Los Angeles region, home to 9.8 million people, has experienced record-breaking heat waves and steadily increasing average temperatures due to climate change. While climate change is a global problem, its effects often impact disadvantaged communities more strongly than others. A new study shows that, in L.A. County, lower-income neighborhoods have hotter surface temperatures than higher-income neighborhoods. These differences can be up to 36 degrees Fahrenheit at noon on a summer day. The disparities, the study shows, are primarily due to higher levels of heat-dissipating vegetation in higher-income areas. Meanwhile, planting more trees and using sustainable maintenance and irrigation practices in lower-income areas could help bring surface temperatures down. In dense urban areas where planting trees is less viable, the authors suggest that increasing the reflectivity of surfaces (roofs, streets, and so on) could help to lower temperatures. The study, described in a paper titled "Unequal Exposure to Heatwaves in Los Angeles: Impact of Uneven Green Spaces" and published in the journal Science Advances on April 28, was conducted in the laboratories of Christian Frankenberg, professor of environmental science and engineering and a research scientist at JPL, which Caltech manages for NASA; and Paul Wennberg, R. Stanton Avery Professor of Atmospheric Chemistry and Environmental Science and Engineering. "Raising awareness about the risks of heat waves, and acknowledging the unequal exposure across different socioeconomic groups, is increasingly important," Frankenberg says. "Studies like ours can inform urban planning choices to promote environmental justice and increase the resilience of our cities as summers get warmer." Led by Caltech research scientist Yi Yin, the team looked at high-resolution surface temperature measurements collected during the past four years. The data were collected by ECOSTRESS, a satellite instrument developed at JPL and installed on the International Space Station that measures Earth's surface temperature with a resolution of individual city blocks, enabling researchers to examine intracity spatial patterns. The researchers found that median household income had a strong negative correlation with surface temperature. In other words, a higher median household income was very likely to indicate cooler surface temperatures. "Initially, this started as a personal interest—I was looking for housing and observed that the lush, green areas were more expensive to live in," Yin says. "This inspired me to conduct the study with the detailed temperature data from the ECOSTRESS instrument and compare it with median household income data. The correlation was shockingly strong." What is causing these disparities? After all, the sun shines evenly on the entire Los Angeles region, so why do surfaces in some areas get hotter than others? While geography, such as the distance from the cool ocean, plays a role, the researchers found the dominating factor in surface temperature disparities is the amount of water evaporated into the atmosphere. When water evaporates, changing from a liquid to a gas, it carries heat away from the surface. This is analogous to a person getting chilly when getting out of a swimming pool: the water evaporating from your skin carries heat away from you.
The study finds more water is evaporating in affluent neighborhoods as a result of denser tree canopies, leading to a cooling effect. In the semi-arid L.A. climate, vegetation is primarily supported by irrigation rather than rainfall during dry seasons. In the study, the researchers recommend planting more trees in lower-income neighborhoods and prioritizing tree cover while shifting away from turf grass. Although turf grass increases the amount of water evaporation, it does not provide shade. "Though it can be difficult to plant and maintain trees in dense urban environments, especially those with limited open space and water availability, trees produce shade and improve the quality of life for residents," Yin says. "While irrigation in dry urban settings is often characterized as wasteful, this study illustrates the enormous benefits that come from such water use in reducing heat exposure," adds Wennberg. Another physical factor that can modify temperatures is how much sunlight is absorbed by surfaces. Asphalt parking lots, for example, absorb sunlight efficiently and consequently heat up quickly. In contrast, it is possible to use paints and other materials that reflect more of the sunlight and thereby reduce heating. Current Los Angeles city and county policies, for example, require reflective "cool roofs." More information: Yi Yin et al, "Unequal exposure to heatwaves in Los Angeles: Impact of uneven green spaces," Science Advances (2023). DOI: 10.1126/sciadv.ade8501. Provided by California Institute of Technology.
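The headline result is a strong negative correlation between neighborhood income and land surface temperature. A toy version of that check, with synthetic numbers standing in for ECOSTRESS pixels and census income data:

    import numpy as np

    rng = np.random.default_rng(1)
    income = rng.uniform(30, 150, 500)  # median income in $1,000s (synthetic)
    # Synthetic surface temperatures that cool as income rises, plus noise:
    temp_f = 120 - 0.15 * income + rng.normal(0, 3, 500)

    r = np.corrcoef(income, temp_f)[0, 1]
    print(f"income-temperature correlation: r = {r:.2f}")  # strongly negative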
Environmental Science
International Space Station Air Pollutant Study Could Improve Spacecraft Design The concentrations of potentially harmful chemical compounds circulating in the International Space Station (ISS) exceed those found in the floor dust of an average American home, according to a new analysis of the "space dust" trapped by the station's air filters. The new findings, which included the identification of these airborne contaminants and the consideration of possible sources, could have significant implications for the design of future manned spacecraft. The research is published in the journal Environmental Science and Technology Letters. Inside the ISS: The ISS is a unique environment. The air inside the space station is constantly recirculated, with 8–10 changes per hour. This air is treated to make sure that CO2 and trace gaseous contaminants are removed before it is recirculated; however, it is unknown how effective the system is at removing airborne chemical contaminants. As the ISS has been inhabited by humans continuously for more than two decades, particularly close attention has been paid to the flammability of its contents. This includes the use of bespoke, industrial flame retardants that would be less common in the living quarters of an average Earth-dweller. On top of this, the ISS astronauts may also bring cameras, power tools, clothing and other pieces of everyday equipment into the space station. The high levels of ionizing radiation that the ISS is exposed to have the potential to accelerate the aging of certain materials, breaking down plastic objects into nanoplastics more rapidly than on Earth. In the ISS's microgravity environment, these tiny particulates – as well as other degradation products from niche flame retardants and other equipment – can easily become airborne pollutants. The air filtration system within the ISS is equipped with screen-covered high-efficiency particulate air (HEPA) filters that are intended to remove airborne particles before the air is recirculated. These screens accumulate debris during operation and so require weekly vacuuming to ensure that they are clear to operate efficiently. The debris collected during vacuuming is largely made up of clothing lint, hair, airborne particulate and other debris that is referred to as "spacecraft cabin dust". To support studies looking at the environment inside the ISS, some of these vacuum bags have been brought back to Earth by astronaut crews returning from their expeditions. Persistent organic pollutants found in space dust: Given the complexity and uniqueness of life on the ISS, scientists have hypothesized that the relative abundance of known airborne contaminants in the ISS could be markedly different to regular indoor environments on Earth. To investigate further, this study used gas chromatography-mass spectrometry (GC-MS) and liquid chromatography quadrupole time-of-flight mass spectrometry (LC-QTOF-MS) to analyze samples of the cabin dust that were returned to Earth. Several prominent, potentially harmful chemical contaminants were identified in the dust samples, including: polybrominated diphenyl ethers (PBDEs), hexabromocyclododecane (HBCDD), "novel" brominated flame retardants (BFRs), organophosphate esters (OPEs), polycyclic aromatic hydrocarbons (PAHs), perfluoroalkyl substances (PFAS) and polychlorinated biphenyls (PCBs).
Under the United Nations Environment Programme's Stockholm Convention, PCBs, some PFAS, HBCDD and the penta-, octa-, and deca-BDE commercial formulations of PBDEs are officially recognized as persistent organic pollutants (POPs) and so are heavily regulated on Earth. Some OPEs are also under consideration for restriction by the European Chemicals Agency. Commercial products and industrial flame retardants are likely sources of contamination: While the study authors write that their current analysis "cannot yield direct insight into the origins of dust contamination," they do offer some speculations as to the likely sources. BFRs and OPEs are used in many countries to meet fire safety regulations in electronics and electrical equipment, as well as insulation, fabrics and foams. The PFAS compounds found in the cabin dust are also likely to reflect the use of certain waterproofing materials applied at various points to prevent microbial growth in the ISS. Similarly, the elevated levels of BFRs could be a result of the astronauts' practice of vacuuming the wall panels and acoustic insulation inside the ISS, which are likely to contain higher levels of these flame retardants. "Our findings have implications for future space stations and habitats, where it may be possible to exclude many contaminant sources by careful material choices in the early stages of design and construction," said study co-author Stuart Harrad, a professor of environmental chemistry at the University of Birmingham. The concentrations of the chemical contaminants found in the space dust were also compared against the median concentrations for these chemicals found in typical house dust in the United States. "While concentrations of organic contaminants discovered in dust from the ISS often exceeded median values found in homes and other indoor environments across the US and western Europe, levels of these compounds were generally within the range found on Earth," Harrad said. Reference: Harrad S, Abdallah MAE, Drage D, Meyer M. Persistent organic contaminants in dust from the International Space Station. Environ Sci Technol Lett. 2023. doi:10.1021/acs.estlett.3c00448 This article is a rework of a press release issued by the University of Birmingham. Material has been edited for length and content.
Environmental Science
PCBs have been banned in most countries since the 1970s, but that doesn't mean they no longer exist. Now, deep-sea researchers report that they have found PCBs at the bottom of the Atacama Trench in the Pacific Ocean. During their expedition to the deep-sea trench, the research team retrieved sediment cores and analyzed them for PCB occurrences at five different locations in the trench. All the samples of surface sediment analyzed contained PCBs. The study, led by Professor Anna Sobek from the Department of Environmental Science at Stockholm University and Professor Ronnie N. Glud, director of the Danish Center for Hadal Research at the University of Southern Denmark, has been published in the scientific journal Nature Communications. PCB is short for polychlorinated biphenyls, a group that covers 209 different substances. They were introduced in the 1930s and have been used primarily in building materials and technical components, but are now banned in most countries and classified as a highly persistent environmental toxin. PCBs can be carcinogenic and cause reproductive harm. Although the world's production of PCBs dropped significantly in the 1970s, the substances still pose an environmental threat. In 2018, researchers reported, for example, that half of the world's killer whale populations were weakened by PCBs. Another study has found that scavenging amphipods in the deep sea contained large amounts of PCBs. "It is thought-provoking that we find traces of human activity at the bottom of a deep-sea trench; a place that most people probably perceive as distant and isolated from our society," says Professor Ronnie N. Glud, who has participated in more than 10 expeditions to deep-sea trenches around the world. These expeditions have helped to dispel the myth that deep-sea trenches are unaffected by what happens on the surface and have provided insight into the surprisingly rich, active, and varied life at the greatest depths of the ocean. The studies have also shown that deep-sea trenches accumulate large amounts of organic material, contributing to the oceans' ability to absorb carbon released into the atmosphere through fossil fuel burning. However, not only organic material accumulates in deep-sea trenches, which are also called hadal trenches. For example, the Danish Center for Hadal Research reported in 2021 that mercury also accumulates in the trenches' sediments, and in 2022, a similar announcement was made about black carbon, particles that are mainly formed by the combustion of fossil fuels. The concentration of PCBs in samples from the Atacama Trench is not alarmingly high, according to Ronnie N. Glud. He points out that much higher concentrations have been found in places like the Baltic Sea, North Sea, and Tokyo Bay. Concentrations 300-1500 times higher have been measured in the Baltic Sea. "These are places with a lot of human activity, so one would expect that. The Atacama samples do not show very high concentrations but considering that they were retrieved from the bottom of a deep-sea trench, they are relatively high. A priori, no one would expect to find pollutants in such a place," says Ronnie N. Glud. PCBs are hydrophobic, meaning they are not very soluble in water. Instead, they bind to organic material that sinks to the bottom. "The Atacama trench is located in an area with relatively high production of plankton in surface waters. When the plankton dies, it sinks to the bottom of the ocean," explains Anna Sobek.
In addition, large amounts of material are transported down the steep slopes and deposited in the deepest areas. Some of the organic material that reaches the bottom of the Atacama Trench is eventually decomposed by microorganisms, and as a result, PCBs accumulate in the sediment. PCBs are persistent compounds that are slowly redeposited over time, which is why increasing concentrations can be found in inaccessible areas such as the hadal trenches, even though they were largely banned worldwide in the 1970s. "Unlike coastal areas where PCB concentrations are typically higher in deeper sediment layers deposited 50 years ago, PCB concentrations in hadal sediments are highest in the upper sediment layers, indicating that PCBs have only recently reached the deeper trenches and that concentrations have not yet peaked: We may see higher concentrations in a few years," says Ronnie N. Glud. The deep-sea trenches are home to many different microorganisms and animals that have adapted to the extreme living conditions. Perhaps they are also home to organisms that can metabolize the pollutants that are deposited there. That is one of the focus points of the Danish Center for Hadal Research, and for this research the center holds a solid stock of frozen sediment samples collected from expeditions to different deep-sea trenches in 2021 and 2022. "We are interested in finding out if PCBs are also present in other deep-sea trenches or if they are unique to the Atacama trench. We also want to investigate the bacteria that live down there and learn more about their function," says Ronnie N. Glud. The deep-sea trenches are located in the hadal zone of the ocean, which lies at depths of 6-11 km. There are about 27 deep-sea trenches, also called hadal trenches, named after the Greek god Hades, who ruled the underworld.
Environmental Science
Scientific findings don't always translate neatly into actions, especially in conservation and resource management. The disconnect can leave academics and practitioners disheartened and a bit frustrated. "We want conservation science to be informing real-world needs," said Darcy Bradley, a senior ocean scientist at The Nature Conservancy and a former director of UC Santa Barbara's Environmental Markets Lab. "Most managers and practitioners also want to incorporate science into their work," added Cori Lopazanski, a doctoral student at UCSB's Bren School of Environmental Science & Management. Lopazanski and Bradley were particularly curious how much science was finding its way into the management plans of marine protected areas, or MPAs. These are areas of the ocean set aside for conservation of biodiversity, cultural heritage and natural resources. The pair led a study investigating the management plans for 555 marine protected areas to clarify how the documents incorporated recommendations for climate resilience. The team found that many plans contain forward-looking strategies, even when they didn't explicitly reference "climate change" or related terms. The heartening results appear in the journal Conservation Letters. This is the first study to examine this question in detail on an international scale. The authors considered marine protected areas of various sizes, locations and layouts across 52 countries, with plans written in nine languages. Their list included practically any marine reserve that barred extractive activities at least somewhere within its borders, including the Channel Islands National Marine Sanctuary, just off the coast of Santa Barbara. Previous studies mostly focused on the explicit language of management plans. This literal approach gave the appearance that marine protected areas weren't being managed effectively for climate change. In contrast, Lopazanski, Bradley and their co-authors searched the plans for strategies that promote resilience. The results appear worrying at first. Just over half of the plans in the study did not explicitly include strategies to tackle climate change impacts. In fact, about 22% didn't mention climate change at all. "You could mistakenly draw the conclusion that we have a long way to go to really prepare the world's MPAs for climate change," Bradley stated. However, a more holistic review revealed a different picture. Management plans overwhelmingly contained key principles for building resilience, even when they didn't explicitly mention climate change. Roughly speaking, 94% outlined long-term objectives, 99% included threat-reduction strategies, 98% had monitoring programs, and 93% incorporated adaptive management. Adaptive management evolves to keep up with changing circumstances. It's a continual process of evaluating what is and is not working, and correcting course to keep on target. It begins with setting objectives for the area: conservation goals, species and communities of interest, etc. Managers then assess what's happening in the area to develop strategies to meet these goals. The objectives and assessment then inform the MPA's design, including its size, shape and location. Once it's established, monitoring can begin to track indicators for the objectives. With a clear goal and active observation, managers can implement strategies and interventions, such as addressing pollution, removing invasive species, and restoring habitat. Adaptive management offers dynamic protection. 
"We don't have a ton of evidence about which types of climate strategies are going to be most effective well into the future because climate change impacts are a moving target," Bradley said. So she was thrilled to see how many management plans incorporated principles of adaptive management. Managing with the future in mind is particularly important in our changing world. In a recent study, Lopazanski and her colleagues found that marine heatwaves impact ecological communities regardless of whether they are protected inside an MPA. The results raise the question of whether marine protected areas will remain effective conservation tools. Lopazanski believes this critique misses the point. Marine protected areas will experience losses under climate change just like protected areas on land. That doesn't mean these parks, reserves and sanctuaries aren't worthwhile. "There are some things that marine protected areas do really well," she said. They're particularly effective at mitigating the impact of fishing and other extractive activities. That's why MPAs have to be one part of a more comprehensive conservation and management plan for our ocean biodiversity and marine resources. What's more, large marine heatwaves are a relatively new phenomenon, and dealing with that uncertainty is part of designing an effective MPA. "It's easy to criticize MPAs as a static strategy, ill-suited to deal with the dynamic nature of climate change," Bradley said, "But a deeper look at the plans reveals that they are more dynamic than they appear." The authors compiled many different management strategies in the paper, highlighting some they think are underutilized. They also peppered the study with examples and lessons from different MPAs. They were particularly impressed by the management plan of the Greater Farallones National Marine Sanctuary, off the coast of San Francisco. Its comprehensive plan included diverse strategies that targeted different climate change impacts and challenges facing that specific region. "This study can be a resource for managers who are looking to make their MPAs more resilient," Lopazanski said. In fact, utility was one of the study's key aims. This research was a collaboration between academic scientists and conservation practitioners supported by the Arnhold UC Santa Barbara-Conservation International Collaborative. It was intentionally designed to gather information that would be immediately actionable and useful for real-world MPA management. A document to bring academics and managers just a bit closer together. Story Source: Journal Reference: Cite This Page:
Environmental Science
Herman Daly, one of the founders of ecological economics, has died at the age of 84. His work questioning the pursuit of economic growth, and articulating the alternative of a steady-state economy, has been foundational to sustainability science. Herman Daly was born in Texas in 1938. At the age of eight he was diagnosed with polio, which left him without the use of his left arm. After seven years of trying every conceivable treatment to regain the use of the atrophied limb, he decided to have it amputated at the age of 15. He would later say, “When you come up against an impossibility it is best to recognise it and switch your energy to good things that are still possible” [1]. As a student at Rice University in the 1950s, he was interested in both the sciences and the humanities. He decided to study economics, thinking it would give him a foot in both. He soon discovered that this was not the case and that mainstream economics instead had “both feet in the air”. His life’s mission became to change this: to give economics a grounding in both the sciences and the humanities, in particular physics, ecology, and ethics. One of Herman’s first academic articles, published in 1968 and titled “On economics as a life science”, made a powerful analogy between biological organisms and economic systems [2]. In the article, Herman drew two diagrams, one for biological organisms and one for economies, showing how they both relied on flows of matter and energy, and both produced flows of degraded waste. This analogy is central to modern research on social metabolism, such as material and energy flow accounting [3]. In the same article, he also proposed that input–output analysis, which traces the flow of money between different sectors of the economy, could be extended to incorporate physical quantities. This approach is now known as environmentally extended input–output analysis, and it is the method used to calculate a wide range of environmental footprint indicators [4]. Herman’s incorporation of biophysical quantities into economics drew upon the work of his PhD supervisor, the Romanian economist Nicholas Georgescu-Roegen, who wrote The Entropy Law and the Economic Process [5]. Herman translated Georgescu-Roegen’s complex ideas into a more accessible form, exploring the implications of the laws of thermodynamics for the economy. He suggested that, “The first and second laws of thermodynamics should also be called the first and second laws of economics. Why? Because without them there would be no scarcity and without scarcity no economics” [1]. But Herman’s contributions were not just limited to biophysical concepts. He also made important contributions drawing on ethics and theology. As a Christian, Herman’s religious faith was an important part of his life. In his 1973 book Toward a Steady-State Economy [6], he proposed the “ends–means spectrum” as a way to prioritize goals and recognize their dependencies.
The spectrum ranges from ultimate means (the natural resources that sustain life and all other activity) to intermediate means (the machines and labour that transform natural resources into products and services) to intermediate ends (the goals that individuals and societies aim to achieve) to the ultimate end (that which is desired only for itself, and is not the means to some other end). Herman argued that economics was positioning itself too narrowly in the middle of the ends–means spectrum, failing to appreciate the scarcity of low-entropy matter and energy and failing to consider what the higher purpose of life might be. It was treating economic growth as the ultimate end, rather than as one means to an end. In Herman’s words, “Our refusal to reason about the Ultimate End merely assures the incoherence of our priorities. It leads to the tragedy of Captain Ahab, whose means were all rational, but whose purpose was insane. We cannot lend rationality to the pursuit of a white whale across the oceans merely by employing the most advanced techniques of whaling. To do more efficiently that which should not be done in the first place is no cause for rejoicing” [7]. The ends–means spectrum provides the philosophical framework for much of Herman’s work, but it also underpins a lot of the sustainability research that came after. This includes my own work on the provisioning systems that link biophysical resource use and social outcomes [8], as well as Kate Raworth’s “Doughnut” of social and planetary boundaries [9]. Herman observed that mainstream economics, which focuses on the circular flow of money between households and businesses, completely omits the natural world. In reality, the economy is not an isolated system, as it is treated in mainstream economics, but a subsystem of the biosphere. All of the resources used by the economy come from the environment, and all of the wastes produced by it return to the environment. To represent this fact, Herman drew a diagram showing a square representing the economy, contained within a circle representing the biosphere, with flows of matter and energy connecting them [10]. Although simple to sketch, Herman’s diagram has profound implications. It shows that economic activity can be analysed not only in terms of flows of money, but also in terms of flows of biophysical resources and social outcomes. Moreover, the finitude of the biosphere implies that there are limits to how large the physical economy within it can grow. Herman argued that we have in fact moved from an “empty world” to a “full world” [11]. The planetary boundaries framework, developed much later by Johan Rockström and colleagues, quantifies the relative size of the square and the circle in Herman’s diagram [12]. It shows that we are now living in a very full world, transgressing 6 of 9 planetary boundaries. Mainstream economics is primarily concerned with the goal of efficient allocation, arguing that environmental problems can be solved by “getting the prices right”. In one of his most-cited articles, Herman argued that the focus on efficient allocation was failing to solve environmental problems because these are the result of the scale of economic activity exceeding ecosystem limits, not of poor pricing within markets [13]. Increasing the prices of certain goods relative to others can reduce the use of bad products relative to better ones, but it cannot address absolute scarcity.
Ultimately, there are limits on the resources that nature can provide and the pollutants that it can absorb. These limits led Herman to develop what is arguably his greatest contribution to sustainability science: the concept of a “steady-state economy” [7]. Drawing on the work of classical economists such as John Stuart Mill, Herman argued in favour of an economy where the goal is qualitative development, not quantitative growth. He defined a steady-state economy as one where material and energy use are stabilized and kept within ecological limits. Fairness is an explicit goal for such an economy: with non-growing resource use, inequality can only be addressed by the fairer distribution of existing resources. Herman discussed a number of the changes that would be needed to achieve a steady-state economy. These include caps on resource use, limits on income and wealth inequality, working-time reduction, re-regulation of international trade, full-reserve banking, a stable population, and new measures of progress to replace gross domestic product (GDP). Herman was critical of GDP because it does not distinguish between good and bad economic activity. He argued that growth could become “uneconomic” if its costs exceeded its benefits. To assess whether this was happening, he helped develop the Index of Sustainable Economic Welfare (also called the Genuine Progress Indicator), which adds the value of beneficial activity that is not counted in GDP (such as household and volunteer work), and subtracts the cost of harmful activity that we would prefer to avoid (such as crime, pollution, and the depletion of natural capital) [14]. The difference between the two indicators is striking: while global GDP has increased more than threefold since 1950, the Genuine Progress Indicator has flat-lined since the late 1970s [15]. Herman’s work was also foundational to one of the greatest debates in sustainability: weak sustainability versus strong sustainability. Advocates of weak sustainability claim that different forms of capital (in particular natural capital and built capital) are substitutable for each other and that sustainability can be achieved if the value of the total stock of capital does not decrease over time. Advocates of strong sustainability claim that substitution possibilities are limited and that sustainability can only be achieved if critical stocks of each form of capital are maintained. Herman argued in favour of strong sustainability, contending that the different forms of capital are complementary. In a highly cited article, he proposed three (now famous) rules for sustainable development: (1) exploit renewable resources no faster than they can be regenerated; (2) emit wastes no faster than they can be assimilated; and (3) deplete non-renewable resources no faster than renewable substitutes can be developed to replace them [11]. Herman received numerous major awards for his work, including the Blue Planet Prize, the Heineken Prize for Environmental Science, and a Right Livelihood Award (sometimes called the Alternative Nobel Prize). He also co-authored the most widely used textbook in ecological economics [16]. However, despite these achievements, Herman’s work was largely ignored, and sometimes even derided, by his colleagues in mainstream economics.
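The Index of Sustainable Economic Welfare / Genuine Progress Indicator logic is simple to state: start from consumption, add uncounted benefits, subtract uncounted costs. A toy version with invented numbers (real GPI studies use dozens of components):

    # Toy GPI with invented numbers; real studies use dozens of components.
    consumption = 1000.0    # inequality-adjusted personal consumption
    household_work = 120.0  # benefit not counted in GDP
    volunteer_work = 30.0   # benefit not counted in GDP
    crime = 80.0            # cost we would prefer to avoid
    pollution = 150.0       # cost we would prefer to avoid
    depletion = 200.0       # depletion of natural capital

    gpi = (consumption + household_work + volunteer_work
           - crime - pollution - depletion)
    print(f"GPI = {gpi}")   # 720.0, well below the consumption figure alone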
Given the difficult road that Herman pursued in his career, it is all the more remarkable that he was also an extremely kind and humble human being, always willing to see the best in other people, and always willing to engage with anyone who reached out to him. His contemporary, Joan Martinez-Alier, described him as “a good man, ‘una buena persona’ … [with] no sense of self-importance”. Although he helped to found the International Society for Ecological Economics in 1989, Martinez-Alier noted that “he never cared to be president, or in any way to be in command” [1]. Instead, Herman led with his ideas. These ideas provide the foundation for post-growth research, including degrowth, Doughnut economics, and a well-being economy. But they have also been foundational to research on social metabolism, environmentally extended input–output analysis, planetary boundaries, provisioning systems, alternative measures of progress, and strong sustainability. The ecological economist Peter Victor recently wrote a biography of Herman Daly, which provides a valuable account of both his life and ideas [1]. Victor argues that while the ideas of most economists become less relevant over time, the opposite is happening with Herman’s work. As humanity confronts climate change, rising inequality and ecological breakdown, Herman’s ideas are becoming more and more important. His death is a loss for us all, but his ideas will continue to live on in a whole new generation of ecological economists and sustainability scientists.
References
[1] Victor, P. A. Herman Daly’s Economics for a Full World: His Life and Ideas (Routledge, 2022).
[2] Daly, H. E. J. Polit. Econ. 76, 392–406 (1968).
[3] Haberl, H., Fischer-Kowalski, M., Krausmann, F., Weisz, H. & Winiwarter, V. Land Use Policy 21, 199–213 (2004).
[4] Hoekstra, A. Y. & Wiedmann, T. O. Science 344, 1114–1117 (2014).
[5] Georgescu-Roegen, N. The Entropy Law and the Economic Process (Harvard University Press, 1971).
[6] Daly, H. E. Toward a Steady-State Economy (W. H. Freeman, 1973).
[7] Daly, H. E. Steady-State Economics: The Economics of Biophysical Equilibrium and Moral Growth (W. H. Freeman, 1977).
[8] O’Neill, D. W., Fanning, A. L., Lamb, W. F. & Steinberger, J. K. Nat. Sustain. 1, 88–95 (2018).
[9] Raworth, K. Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist (Random House, 2017).
[10] Daly, H. E. Beyond Growth: The Economics of Sustainable Development (Beacon Press, 1996).
[11] Daly, H. E. Ecol. Econ. 2, 1–6 (1990).
[12] Rockström, J. et al. Nature 461, 472–475 (2009).
[13] Daly, H. E. Ecol. Econ. 6, 185–193 (1992).
[14] Daly, H. E. & Cobb, J. B. For the Common Good: Redirecting the Economy toward Community, the Environment, and a Sustainable Future, 2nd edn (Beacon Press, 1994).
[15] Kubiszewski, I. et al. Ecol. Econ. 93, 57–68 (2013).
[16] Daly, H. E. & Farley, J. Ecological Economics: Principles and Applications, 2nd edn (Island Press, 2011).
Daniel W. O’Neill, Sustainability Research Institute, School of Earth and Environment, University of Leeds, Leeds, UK.
O’Neill, D. W. Herman E. Daly (1938–2022). Nat. Sustain. (2022). Published 29 December 2022. https://doi.org/10.1038/s41893-022-01041-0
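A brief footnote on the environmentally extended input–output analysis mentioned above: the core calculation multiplies an emissions-intensity vector by the Leontief inverse. A minimal two-sector sketch, with all coefficients made up for illustration (numpy assumed):

    import numpy as np

    A = np.array([[0.10, 0.20],  # inter-industry requirements (made up)
                  [0.30, 0.10]])
    y = np.array([100.0, 50.0])  # final demand by sector (made up)
    e = np.array([0.5, 1.2])     # direct emissions per unit output (made up)

    L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse, (I - A)^-1
    x = L @ y                          # gross output required to meet demand
    print(f"embodied emissions: {e @ x:.1f}")  # the environmental footprint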
Environmental Science
Everyone knows that burning coal causes air pollution that is harmful to the climate and human health. But the ash left over can often be harmful as well. For example, Duke Energy long stored a liquefied form of coal ash in 36 large ponds across the Carolinas. That all changed in 2014, when a spill at its Dan River site released 27 million gallons of ash pond water into the local environment. The incident raised concerns about the dangers associated with even trace amounts of toxic elements like arsenic and selenium in the ash. Little was known, however, about just how much of these hazardous materials were present in the ash water or how easily they could contaminate the surrounding environment. Fears of future spills and seepage caused Duke Energy to agree to pay $1.1 billion to decommission most of its coal ash ponds over the coming years. Meanwhile, researchers are working on better ways of putting the ash to use, such as recycling it to recover valuable rare earth elements or incorporating it into building materials such as concrete. But to put any potential solution into action, researchers still must know which sources of coal ash pose a hazardous risk due to their chemical makeup -- a question that scientists still struggle to answer. In a new paper published June 6 in the journal Environmental Science: Nano, researchers at Duke University have discovered that these answers may remain elusive because nobody is thinking small enough. Using one of the newest, most advanced synchrotron light sources in the world -- the National Synchrotron Light Source II at Brookhaven National Laboratory -- the authors show that, at least for selenium and arsenic, the amount of toxic elements able to escape from coal ash depends largely on their nanoscale structures. "These results show just how complex coal ash is as a material," said Helen Hsu-Kim, professor of civil and environmental engineering at Duke University. "For example, we saw arsenic and selenium either attached to the surface of fine-grained particles or encapsulated within them, which explains why these elements leach out of some coal ash sources more readily than others." It's long been known that factors in the surrounding environment such as pH affect how well toxic elements can move from source to surroundings. In previous research, Hsu-Kim showed that the amount of oxygen in a toxin's surroundings can greatly affect its chemistry, and that different sources of coal ash produce vastly different levels of byproducts. But just because one source of coal ash is high in arsenic doesn't necessarily mean that high amounts of arsenic will leach out of it. Similarly, various sources of ash respond differently to the same environmental conditions. The problem is complex, to say the least. To take a different approach, Hsu-Kim decided to take an even closer look at the source itself. "Researchers in the field typically use x-ray microscopy with a resolution of one or two micrometers, which is about the same size as the fly ash particles themselves," Hsu-Kim said. "So if a single particle is a single pixel, you're not seeing how the elements are distributed across it." To shrink these pictures' pixels to the nanoscale, Hsu-Kim turned to Catherine Peters, professor of civil and environmental engineering at Princeton University, and her colleagues to acquire time on the National Synchrotron Light Source II.
The futuristic machine creates light beams 10 billion times brighter than the sun to reveal the chemical and atomic structure of materials using light ranging from infrared to hard X-rays. Brookhaven's capabilities provided the researchers with a nanoscale map of each particle, along with the distribution of elements within it. The incredible resolution revealed that coal ash is a compilation of particles of all kinds and sizes. For example, in one sample the researchers saw individual nanoparticles of selenium attached to bigger particles of coal ash, a chemical form of selenium that probably isn't very soluble in water. But most of the ash had arsenic and selenium either locked inside individual grains or attached at the surface with relatively weak ionic bonds that are easily broken. "It was almost like we saw something different in every sample we looked at," Hsu-Kim said. "The wide array of differences really highlights why the main characteristic that we care about -- how much of these elements leach out of the ash -- varies so much between different samples." While nobody can say for sure what causes the coal ash to develop its unique composition, Hsu-Kim guesses that it is likely mostly related to how the coal was originally formed millions of years ago. But it might also have something to do with the power plants that burn the coal. Some plants inject activated carbon or lime into the flue gas, which capture mercury and sulfur emissions, respectively. At 1000 degrees Fahrenheit, toxins such as arsenic and selenium in the flue are gaseous, and the physics that dictate how the particles will cool and recombine to form ash is uncontrollable. But regardless of the how, researchers now know that they should be paying closer attention to the fine details encapsulated within the end results. This work was supported by the U.S. Department of Energy (DE-FE0031748) and the National Institute of Environmental Health Sciences (5U2C-ES030851). This research utilized U.S. DOE Office of Science User Facility resources at the Stanford Synchrotron Radiation Lightsource facility operated by SLAC National Accelerator Laboratory (DE-AC02-76SF0051) and at the Hard X-ray Nanoprobe (HXN) Beamline at 3-ID of the National Synchrotron Light Source II facility operated by Brookhaven National Laboratory (DE-SC0012704).
Environmental Science
Study: Toilet paper adds to ‘forever chemicals’ in wastewater

Scientists have identified a surprising new source of “forever chemicals” awash in global wastewater: the ubiquitous paper product dangling next to most of the planet’s toilets. Toilet paper is the latest product that could be contaminating environments worldwide with cancer-linked per- and polyfluoroalkyl substances (PFAS), according to a study published Wednesday in Environmental Science & Technology Letters. Notorious for their presence in jet fuel firefighting foam and industrial discharge, these so-called forever chemicals are linked to a variety of illnesses, such as testicular and kidney cancers. There are thousands of types of PFAS, many of which are also key ingredients in household items and cosmetics — and some of which end up flowing down the drain. The study authors, who had recently investigated the presence of a major type of PFAS in biosolids, decided to continue their quest with toilet paper. “We asked ourselves where is the chemical used, and one product is paper,” corresponding author Timothy Townsend, a professor of environmental engineering at the University of Florida, told The Hill in an email. “Hence the look at toilet paper,” he said. Paper production often includes PFAS as additives during the wood-to-pulp conversion process, the authors explained. These compounds are used as a wetting agent to boost the efficiency of the pulping process — making paper mills a known source of environmental contamination, according to the study. Not only might standard toilet paper contain PFAS, but so too could rolls made from recycled paper, as they may be made with contaminated fibers, the authors found. The researchers asked a volunteer network of students and professors to collect toilet paper sold in North, South and Central America, as well as in Africa and Western Europe. In addition, they evaluated sewage sludge samples from eight wastewater treatment plants in Florida. They then extracted PFAS from both the paper particles and sludge solids and analyzed them for 34 different compounds, according to the study. The main substances they detected were “diPAPs” — precursor compounds that can convert into other kinds of PFAS, such as the potentially carcinogenic PFOA. After combining their results with sewage data from other studies and accounting for per capita toilet paper usage, the scientists estimated that toilet paper was responsible for about 4 percent of the most common type of diPAP in U.S. and Canadian sewage. But in Sweden and France, this figure climbed to 35 percent and 89 percent, respectively. “Despite the fact that North Americans use more toilet paper than people living in many other countries, the calculated percentages suggest that most PFAS enter the U.S. wastewater systems from cosmetics, textiles, food packaging or other sources,” the authors stated. The study, they concluded, identified toilet paper as a source of PFAS pollution in wastewater treatment systems — and a major source of contamination in certain places around the globe. While the scientists evaluated both nonrecycled and recycled toilet paper, Townsend said they did not assess other alternatives, such as bamboo-based rolls. They determined, however, that diPAP concentration did not differ based on recycled content. The authors expressed some optimism that consumer product choices and discard practices could ultimately help inform regulations aimed at curbing PFAS content in wastewater.
“This reduction in PFAS is critical, since wastewater effluent and sludge are commonly reused for irrigation and/or land application,” the scientists stated. “Research has already shown that these two pathways pose a risk for human and environmental exposure to PFAS,” they added. While Townsend said that his team’s goal was to better understand PFAS sources, he expressed hope that such research could help influence future policy. “Decision makers will be better equipped to implement changes if we better understand the sources and fate of PFAS entering our wastewater treatment plants and landfills,” he said.
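The share estimates reported above follow from a simple mass balance: the per-capita diPAP load contributed by toilet paper divided by the total per-capita diPAP load measured in sewage. Below is a minimal sketch of that logic; the function name and every input value are hypothetical placeholders, not numbers from the study.

```python
def toilet_paper_share(paper_use_g_per_person_day: float,
                       dipap_ng_per_g_paper: float,
                       sewage_load_ng_per_person_day: float) -> float:
    """Fraction of the per-capita diPAP sewage load attributable to toilet paper."""
    load_from_paper = paper_use_g_per_person_day * dipap_ng_per_g_paper  # ng/person/day
    return load_from_paper / sewage_load_ng_per_person_day

# Hypothetical inputs: 20 g of toilet paper per person per day containing
# 15 ng of diPAP per gram, against a measured load of 7,500 ng/person/day.
print(f"{toilet_paper_share(20, 15, 7_500):.0%}")  # -> 4%
```

The same arithmetic explains the country-to-country spread: a lower total sewage load from other PFAS sources, as in France, pushes toilet paper's share of the total up even if paper use is similar.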
Environmental Science
The reality of the transition to an environmentally sustainable economy in New York City

Economic transitions take time but do take place. New York City first became prominent as a trading city, with America's resources and farm goods delivered via the Erie Canal and shipped out of the port of New York. Then, we became a manufacturing city, at one time making nearly all the clothing worn in America. Today, we are a center of finance, communication, fashion design, entertainment, the arts, health care, research, and education. We are no longer a shipping port or a center of manufacturing but a global capital. Slowly but certainly, we are on the path to environmental sustainability. Along with New York state, we have ambitious carbon reduction goals, and our utilities are struggling to make our energy grid more renewable and our energy system more efficient and reliable. The path is not direct, since inflation, mismanagement, NIMBYism, and pandemics are upending some of our plans. But not all of them. Local Law 97 requires large buildings over 25,000 square feet to reduce their carbon emissions by 40% by 2030 and 80% by 2040. Most of New York City's large buildings are moving toward compliance, and only 10% have done nothing. The City's Department of Buildings held a hearing in late October to hear comments on the rules envisioned for enforcing the law and to present the operational definition of "good faith compliance" required of building owners to demonstrate they are working toward adhering to the law. If compliance requires raising housing costs for people on low or fixed incomes, we could be paying for carbon reduction with homelessness. These realities of the transition must be understood, and we should allow for exceptions or the development of subsidies. We must understand the trade-off choices we are making and not turn poor people or the elderly into victims of the sustainability transition. Energy and climate change may be the lead story, but it's not the only story. Since 2013, large food companies and restaurants have been required to separate and recycle their food waste. Today, people living in Brooklyn and Queens are required to recycle their food waste. Curbside pickup of food waste will be expanded from Brooklyn and Queens to Staten Island and the Bronx on March 25, 2024, and to Manhattan on October 7, 2024. At that point, food recycling will be required city-wide. This waste will be converted into gas and fertilizer via anaerobic digestion or directly into fertilizer via composting. It also won't end up in landfills, leaching toxics into groundwater and venting methane into the atmosphere. And despite the shameful pandering of New Jersey's Governor Murphy and less prominent but equally obtuse politicos, congestion pricing is at long last arriving in New York City's Central Business District. Freight and cars will now be able to move along the streets more rapidly—polluting less and saving billions of dollars in lost productivity—and an extra billion dollars a year will be available to subsidize and improve mass transit. Lots of time and energy have been wasted on fears of unanticipated impacts from congestion pricing that will never materialize. My new favorite is that trucks will avoid passing through Manhattan to avoid the tolls. The cost of trucks sitting in Manhattan gridlock today dwarfs the cost of the tolls that are envisioned for the future. The reduction in travel time will be well worth the toll. Moreover, by improving mass transit, we make the city more energy-efficient and environmentally sustainable.
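To make the Local Law 97 thresholds mentioned earlier concrete, here is a schematic check. It captures only the headline figures quoted above (coverage above 25,000 square feet, a 40% cut by 2030 and 80% by 2040); the real law works through emissions-intensity caps that vary by building class, so treat this as a sketch of the milestones, not the statute.

```python
def ll97_headline_cut(floor_area_sqft: float, year: int) -> float:
    """Headline emissions-reduction fraction a covered building faces in a given year."""
    if floor_area_sqft <= 25_000:
        return 0.0  # below the law's coverage threshold
    if year >= 2040:
        return 0.80
    if year >= 2030:
        return 0.40
    return 0.0  # covered, but ahead of the first milestone

print(ll97_headline_cut(50_000, 2031))  # 0.4
print(ll97_headline_cut(20_000, 2041))  # 0.0 (building is not covered)
```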
This past April, New York City updated the city's sustainability plan with a comprehensive document entitled PlaNYC: Getting Sustainability Done. New York City is a worldwide leader in the generations-long transition to urban environmental sustainability. The updated plan includes 32 initiatives ranging from achieving a 30% tree canopy cover to pursuing fossil fuel-free city operations, from assisting city building owners with solar installation and other clean energy projects to developing new markets for recycling. These initiatives will be implemented through over 50 practical, operational actions, largely within the control of city government, such as: expanding the tree risk management program, installing heat pumps in 10,000 NYCHA apartments, phasing out city investment in fossil fuel equipment, and "[e]valuat[ing] all City roofs undergoing repair work for climate infrastructure installation by 2025…Install[ing] solar energy, electric building infrastructure, green roofs, or other renewable energy on all viable City-owned property by 2035." The plan is ambitious and visionary and attempts to embed environmental sustainability in the multiple operations of the city's routine governmental program. While I would be impressed if half of what is outlined is achieved as planned, whatever is accomplished will be an important step in the right direction. New York City government is a huge and unwieldy operation subject to multiple cross-pressures and led by a mayor who is far more political than managerial. And "getting sustainability done" would be a huge challenge even for a mayor who was a genius at management. Nevertheless, the direction, intent, and accomplishments of the city's government are significant elements of NYC's transition to environmental sustainability. Fortunately for New Yorkers, the city and state governments are not going it alone. The use of renewable energy and the adoption of electric vehicles are growing in New York City. For one hour in May 2023, 20% of New York State's electrical demand was met by solar power. Typically, over 25% of our electricity is drawn from hydropower. The city's major nonprofit and private sector institutions see the need for environmental sustainability and, like the city government, are looking for practical operational opportunities to move us toward a circular economy. Brooklyn-based Revel (whose CEO, Frank Reig, is a graduate of the MPA in Environmental Science and Policy program I direct at Columbia's School of International and Public Affairs) is an electric vehicle innovator with electric ride shares, mopeds, and high-speed charging hubs, including 40 chargers in two Brooklyn locations, with plans to expand city-wide in 2024. Their bright blue ride-share Teslas and mopeds are hard to miss in my neighborhood. There are business opportunities in the transition to environmental sustainability, and many talented young entrepreneurs like Frank Reig are reaching for those opportunities. As long as I'm highlighting alums, I should mention that Jeff Prosserman, a graduate of Columbia's Sustainability Management Master's program, has founded a company called Voltpost, which is installing charging stations built into urban streetlamps. Other young people are dreaming and studying in universities all over the world and getting ready to turn their sustainability ideals into operational realities.
But it is not only government, entrepreneurs, and individuals who are moving New York forward. New York's major institutions—its cultural landmarks, universities, hospitals, and many private businesses—are exploring ways to decarbonize and recycle. Columbia University and many other institutions are working to reduce waste and greenhouse gas pollution and slowly starting to invest in power systems and infrastructure to reduce their environmental impact. Here at Columbia, we are in the midst of a transition to electric vehicles, including buses, maintenance vehicles, and even the President's personal vehicle (a shiny red Tesla parked by the President's House). The reality is that our culture is changing. Environmental impact has gone from the fringes to the center of our consciousness. When I started working in environmental policy back in 1975, few people cared about environmental quality. And people making decisions about energy, waste, manufacturing, and transportation rarely, if ever, factored environmental impacts into their decision-making. Today, it is routine for people in organizations to ask about environment and sustainability when they design products or services. It is not that every decision is dominated by environmental considerations, but fewer and fewer decision-makers ignore environmental impacts. That is a sea change, and it is the best guarantee that the transition that's begun will continue until it is completed. The pace may be slow, but it's steady—and I've heard that's what wins the race. Provided by State of the Planet. This story is republished courtesy of the Earth Institute, Columbia University: http://blogs.ei.columbia.edu.
Environmental Science
Image: The study shows that less intensively managed grassland has greater plant diversity. Pictured are cows fed on pasture in Arnside, Cumbria. Credit: Markus Wagner

Researchers have shown - for the first time - that less intensively managed British grazed grasslands have on average 50% more plant species and better soil health than intensively managed grassland. The new study could help farmers increase both biodiversity and soil health, including the amount of carbon in the soil of the British countryside. Grazed grassland makes up a large proportion of the British countryside and is vital to farming and rural communities. This land can be perceived as being only about food production, but this study gives more evidence that it could be key to increasing biodiversity and soil health. Researchers at the UK Centre for Ecology & Hydrology (UKCEH) studied 940 plots of grassland, comparing randomly selected plots that sampled the range of grassland management across Great Britain: from intensively managed land with a few sown grassland species and high levels of soil phosphorus (indicating ploughing/reseeding and fertiliser and slurry application), to grassland with higher levels of species and lower levels of soil phosphorus. The plots were sampled as part of the UKCEH Countryside Survey, a nationally representative long-term dataset. The study counted the number of plant species in sample areas and analysed co-located soil samples for numbers of soil invertebrates and carbon, nitrogen and phosphorus levels. Researchers found that less intensively managed grassland had greater diversity of plant species and, strikingly, this correlated with better soil health, such as increased nitrogen and carbon levels and increased numbers of soil invertebrates such as springtails and mites. In the same study, the researchers used the same methods to examine the plant diversity and soil from grasslands on 56 mostly beef farms from the Pasture Fed Livestock Association (PFLA) - a farmer group that has developed standards to manage and improve soil and pasture health. The researchers found that plots of land from PFLA farms had greater plant diversity – on average an additional six plant species, including different types of grasses and herbaceous flowering plants, compared to intensively farmed plots from the Countryside Survey. In addition, grassland plants on these farms were often taller, a quality which is proven to be beneficial to butterflies and bees. Pasture Fed Livestock Association grasslands did not yet show increased soil health, but the research indicated that this may be due to a time lag between increasing numbers of plant species and changes in soil health, particularly on farms which have been intensively managed in the past. Lead author Dr Lisa Norton, Senior Scientist at UKCEH, says: “We’ve shown for the first time, on land managed by farmers for production, that a higher diversity of plants in grasslands is correlated with better soil health. This work also tells us that the Pasture Fed Livestock Association members are on the right track to increase biodiversity, though it may take longer to see improvements in soil health. “Grassland with different types of plants able to grow tall and flower is associated with improved soil health measures, and is beneficial for creepy crawlies below and above ground.
Having this abundance of life in our grasslands can in turn support small mammals and birds of prey, and farmers have told us that they are seeing voles and mice in their fields for the first time.” Dr Norton adds: “My hope for the future is that our grasslands can be managed less intensively – with all the improvements in plant and animal biodiversity and soil health that brings – but still remain productive for farmers.” The study was published in the journal Ecology Solutions and Evidence on 25 November, 2022, and was funded by the UK Research and Innovation Global Food Security Programme. In 2020 the UK had the largest number of sheep and third largest number of cattle of all EU countries. Paper: “Can Pasture fed livestock farming practices improve the ecological condition of grassland in Great Britain?”, Ecology Solutions and Evidence, DOI: 10.1002/2688-8319.12191, https://besjournals.onlinelibrary.wiley.com/doi/10.1002/2688-8319.12191
Environmental Science
Photo: U.S. President Joe Biden holds out his pen to U.S. Senator Joe Manchin (D-WV) as Senate Majority Leader Chuck Schumer (D-NY) and U.S. House Majority Whip James Clyburn (D-SC) look on after Biden signed "The Inflation Reduction Act of 2022" into law during a ceremony in the State Dining Room of the White House in Washington, August 16, 2022. (Leah Millis | Reuters)

The Biden administration this year signed a historic climate and tax deal that will funnel billions of dollars into programs designed to speed the country's clean energy transition and battle climate change. As the U.S. this year grappled with climate-related disasters from Hurricane Ian in Florida to the Mosquito Fire in California, the Inflation Reduction Act signed into law in August was a monumental step toward mitigating the effects of climate change across the country. The bill is the most aggressive climate investment ever made by Congress and is expected to slash the country's planet-warming carbon emissions by about 40% this decade and move the country toward a net-zero economy by 2050. The IRA's provisions have major implications for clean energy and manufacturing businesses, climate startups and consumers in the coming years. As 2022 comes to a close, here's a look back at the key elements in the legislation that climate and clean energy advocates will be monitoring in 2023.

Incentives for electric vehicles

The deal offers a federal tax credit worth up to $7,500 to households that buy new electric vehicles, as well as a used EV credit worth up to $4,000 for vehicles that are at least two years old. Starting Jan. 1, people making $150,000 a year or less, or $300,000 for joint filers, are eligible for the new car credit, while people making $75,000 or less, or $150,000 for joint filers, are eligible for the used car credit. Despite a rise in EV sales in recent years, the transportation sector is still the country's largest source of greenhouse gas emissions, with the lack of convenient charging stations one of the barriers to expansion. The Biden administration has set a goal of 50% electric vehicle sales by 2030. The IRA limits EV tax credits to vehicles assembled in North America and is intended to wean the U.S. off battery materials from China, which accounts for 70% of the global supply of battery cells for the vehicles. An additional $1 billion in the deal will provide funding for zero-emissions school buses, heavy-duty trucks and public transit buses.

Photo: U.S. President Joe Biden gestures after driving a Hummer EV during a tour at the General Motors 'Factory ZERO' electric vehicle assembly plant in Detroit, Michigan, November 17, 2021. (Jonathan Ernst | Reuters)

Stephanie Searle, a program director at the nonprofit International Council on Clean Transportation, said the combination of the IRA tax credits and state policies will bolster EV sales. The group projects that roughly 50% or more of passenger cars, SUVs and pickups sold in 2030 will be electric. For electric trucks and buses, the number will be 40% or higher, the group said. In the upcoming year, Searle said the group is monitoring the Environmental Protection Agency's plans to propose new greenhouse gas emissions standards for heavy-duty vehicles starting in the 2027 model year. "With the IRA already promoting electric vehicles, EPA can and should be bold in setting ambitious standards for cars and trucks," Searle said. "This is one of the Biden administration's last chances for strong climate action within this term and they should make good use of it."
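The income screens quoted above reduce to a small lookup. The sketch below (a hypothetical helper) checks only the income caps; actual eligibility also turns on vehicle price, assembly location and battery sourcing, which are omitted here.

```python
def ev_credit_income_eligible(income: float, joint: bool, used_vehicle: bool) -> bool:
    """Apply the single/joint income caps quoted in the article."""
    caps = {
        False: (150_000, 300_000),  # new-vehicle credit (up to $7,500)
        True: (75_000, 150_000),    # used-vehicle credit (up to $4,000)
    }
    single_cap, joint_cap = caps[used_vehicle]
    return income <= (joint_cap if joint else single_cap)

print(ev_credit_income_eligible(120_000, joint=False, used_vehicle=False))  # True
print(ev_credit_income_eligible(90_000, joint=False, used_vehicle=True))    # False
```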
"This is one of the Biden administration's last chances for strong climate action within this term and they should make good use of it."Taking aim at methane gas emissionsSome pumpjacks operate while others stand idle in the Belridge oil field near McKittrick, California. Oil prices rose in early Asian trade on the prospect that a stalled Iran nuclear deal and Moscow's new mobilization campaign would restrict global supplies.Mario Tama | Getty ImagesThe package imposes a tax on energy producers that exceed a certain level of methane gas emissions. Polluters pay a penalty of $900 per metric ton of methane emissions emitted in 2024 that surpass federal limits, increasing to $1,500 per metric ton in 2026.It's the first time the federal government has imposed a fee on the emission of any greenhouse gas. Global methane emissions are the second-biggest contributor to climate change after carbon dioxide and come primarily from oil and gas extraction, landfills and wastewater and livestock farming.Methane is a key component of natural gas and is 84 times more potent than carbon dioxide, but doesn't last as long in the atmosphere. Scientists have contended that limiting methane is needed to avoid the worst consequences of climate change. The Harris Cattle Ranch feedlot, located along Interstate 5, is the largest producer of beef in California and can produce 150 million pounds of beef a year as viewed on May 31, 2021, near Harris Ranch, California.George Rose | Getty ImagesRobert Kleinberg, a researcher at Columbia University's Center on Global Energy Policy, said the methane emitted by the oil and gas industry each year would be worth about $2 billion if it was instead used to generate electricity or heat homes."Reducing methane emissions is the fastest way to moderate climate change. Congress recognized this in passing the IRA," Kleinberg said. "The methane fee is a draconian tax on methane emitted by the oil and gas industry in 2024 and beyond."In addition to the IRA provision on methane, the Biden Interior Department this year proposed rules to curb methane leaks from drilling, which it said will generate $39.8 million a year in royalties for the U.S. and prevent billions of cubic feet of gas from being wasted through venting, flaring and leaks. Boosting clean energy manufacturingThe bill provides $60 billion for clean energy manufacturing, including $30 billion for production tax credits to accelerate domestic manufacturing of solar panels, wind turbines, batteries and critical minerals processing, and a $10 billion investment tax credit to manufacturing facilities that are building EVs and clean energy technology.There's also $27 billion going toward a green bank called the Greenhouse Gas Reduction Fund, which will provide funding to deploy clean energy across the country, but particularly in overburdened communities. And the bill has a hydrogen production tax credit, which provides hydrogen producers with a credit based on the climate attributes of their production methods.Solar panels are set up in the solar farm at the University of California, Merced, in Merced, California, August 17, 2022.Nathan Frandino | ReutersEmily Kent, the U.S. director of zero-carbon fuels at the Clean Air Task Force, a global climate nonprofit, said the bill's support for low-emissions hydrogen is particularly notable since it could address sectors like heavy transportation and heavy industry, which are hard to decarbonize."U.S. climate policy has taken a major step forward on zero-carbon fuels in the U.S. 
Boosting clean energy manufacturing

The bill provides $60 billion for clean energy manufacturing, including $30 billion for production tax credits to accelerate domestic manufacturing of solar panels, wind turbines, batteries and critical minerals processing, and a $10 billion investment tax credit for manufacturing facilities that build EVs and clean energy technology. There's also $27 billion going toward a green bank called the Greenhouse Gas Reduction Fund, which will provide funding to deploy clean energy across the country, particularly in overburdened communities. And the bill has a hydrogen production tax credit, which provides hydrogen producers with a credit based on the climate attributes of their production methods.

Photo: Solar panels are set up in the solar farm at the University of California, Merced, in Merced, California, August 17, 2022. (Nathan Frandino | Reuters)

Emily Kent, the U.S. director of zero-carbon fuels at the Clean Air Task Force, a global climate nonprofit, said the bill's support for low-emissions hydrogen is particularly notable since it could address sectors like heavy transportation and heavy industry, which are hard to decarbonize. "U.S. climate policy has taken a major step forward on zero-carbon fuels in the U.S. and globally this year," Kent said. "We look forward to seeing the impacts of these policies realized as the hydrogen tax credit, along with the hydrogen hubs program, accelerate progress toward creating a global market for zero-carbon fuels." The clean energy manufacturing provisions in the IRA will also have major implications for startups in the climate space and the big venture capital firms that back them. Carmichael Roberts, head of investment at Breakthrough Energy Ventures, has said the climate initiatives under the IRA will give private investors more confidence in the climate space and could even lead to the creation of up to 1,000 companies. "Everybody wants to be part of this," Roberts told CNBC following the passage of the bill in August. Even before the measure passed, "there was already a big groundswell around climate," he said.

Investing in communities burdened by pollution

The legislation invests more than $60 billion to address the unequal effects of pollution and climate change on low-income communities and communities of color. The funding includes grants for zero-emissions technology and vehicles, and will help clean up Superfund sites, improve air quality monitoring capacity, and provide money to community-led initiatives through Environmental and Climate Justice block grants.

Photo: Smoke hangs over the Oakland-San Francisco Bay Bridge in San Francisco, California, U.S., on Wednesday, Sept. 9, 2020. Powerful, dry winds are sweeping across Northern California for a third day, driving up the risk of wildfires in a region that's been battered by heat waves, freak lightning storms and dangerously poor air quality from blazes. (Bloomberg | Getty Images)

Research published in the journal Environmental Science and Technology Letters found that communities of color are systematically exposed to higher levels of air pollution than white communities due to redlining, a federal housing discrimination practice. Black Americans are also 75% more likely than white Americans to live near hazardous waste facilities and are three times more likely to die from exposure to pollutants, according to the Clean Air Task Force. Biden signed an executive order after taking office aimed at prioritizing environmental justice and helping to mitigate pollution in marginalized communities. The administration established the Justice40 Initiative to deliver 40% of the benefits from federal investments in climate change and clean energy to disadvantaged communities. More recently, the EPA in September launched an office focused on supporting and delivering grant money from the IRA to these communities.

Cutting ag emissions

The deal includes $20 billion for programs to slash emissions from the agriculture sector, which accounts for more than 10% of U.S. emissions, according to EPA estimates. The president has pledged to cut emissions from the agriculture industry in half by 2030. The IRA funds grants for agricultural conservation practices that directly improve soil carbon, as well as projects that help protect forests prone to wildfires.

Photo: Farmer Roger Hadley harvests corn from his fields in his John Deere combine in this aerial photograph taken over Woodburn, Indiana. (Bing Guan | Reuters)
Environmental Science
A global treaty called the Minamata Convention requires gold-mining countries to regularly report the amount of toxic mercury that miners are using to find and extract gold, a requirement designed to help nations gauge progress toward at least minimizing a practice that produces the world's largest amount of human-made mercury pollution. But a study of baseline mercury emission estimates reported by 25 countries -- many of them developing nations in Africa, South America and Asia -- found that these estimates rarely provide enough information to tell whether changes in the rate from one year to the next are the result of actual change or of data uncertainty. Key variables -- like how the country determines the amount of its gold production -- can result in vastly different baseline estimates. Yet countries often don't report this range of possible estimates.

Millions are at risk

About 15 million artisanal and small-scale gold miners around the world risk their lives every day facing hazardous working conditions that include constant exposure to mercury -- a potent neurotoxin. Mercury vapors cause debilitating effects on the nervous, digestive and immune systems, lungs and kidneys, and may be fatal. Mercury is particularly harmful for children and pregnant women, whose developing fetuses are especially susceptible to its neurotoxic effects. An estimated 4 to 5 million of the 15 million artisanal miners are women or children. "To make effective and impactful mercury interventions and policies, you must first make sure you have the baseline emission estimate right," said Kathleen M. Smits, chair of Civil and Environmental Engineering and Solomon Professor for Global Development in SMU's Lyle School of Engineering. "Providing more transparency in their reporting would help with that." Smits joined civil engineers from the University of Texas at Arlington and the U.S. Air Force Academy in the study recently published in the journal Environmental Science and Policy. The work was supported by the National Science Foundation. The research group analyzed 22 countries' national action plans (NAPs), which contained their annual baseline estimates assembled under the Minamata Convention and posted on the organization's website. The team also looked at three additional countries with pertinent information posted to national government or non-governmental websites. Smits and her co-authors also calculated what Paraguay's baseline estimates would be if different variables were used. The South American country was selected for analysis because of the transparency of its reporting.

Lacking key data in countries' baseline estimates

Baseline mercury emission estimates seek to determine how many kilograms of mercury pollution are injected into the atmosphere each year from the practice of artisanal gold mining. To do that, countries calculate how much gold was found by miners -- and therefore an approximation of how much mercury was used to get it. Countries primarily collect that information using interviews with miners, gold and mercury traders and other key players in the gold mining business; estimated mercury-to-gold ratios; previous research; and field visits to known mining locations. But the study cites key problems with the way those estimates are currently calculated. Smits said countries must do a better job of accounting for these variables if they want to draft more meaningful mercury reduction targets in their national action plans.
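The transparency problem the study describes can be illustrated with the arithmetic of the baseline itself: estimated mercury use is roughly gold production multiplied by a mercury-to-gold ratio, so uncertainty in either input multiplies through. The sketch below uses entirely hypothetical numbers, not figures from any country's national action plan.

```python
def baseline_range(gold_kg: tuple[float, float], hg_per_au: tuple[float, float]):
    """(low, high) kg of mercury implied by ranges of gold output and Hg:Au ratio."""
    return gold_kg[0] * hg_per_au[0], gold_kg[1] * hg_per_au[1]

low, high = baseline_range(gold_kg=(800, 1_200), hg_per_au=(1.5, 4.0))
print(f"implied baseline: {low:,.0f} to {high:,.0f} kg Hg per year")
# implied baseline: 1,200 to 4,800 kg Hg per year -- a fourfold spread, so a
# single reported number cannot show whether a year-to-year change is real.
```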
"If you just take a look at the baseline mercury emission estimate process, it is clear that the NAP program will not achieve its goal of reducing mercury emissions if they continue with the current approach," Smits said, whose team spent six years working alongside miners in gold-mining countries for the study. Why do miners use toxic mercury to get gold? Artisanal and small-scale miners -- the term for individual miners, families or small groups with minimal or no mechanization to do the work -- sift through rocks in rivers and dump beads of mercury over the sediment, which clings to gold. They then light a match, using the flame to separate the mercury from the gold, a process that shoots toxic vapors into the air. It's a cheap method of mining gold, but mercury can leak toxins into the air and pollute water systems. The hazardous gold mining process accounts for roughly 40 percent of all human-made mercury emissions, making it the largest source for this type of pollution, the United Nations (U.N.) says. In 2013, the U.N. created the global treaty called the Minamata Convention to try to phase out artisanal and small-scale gold mining, as well as other mercury emission contributors. This treaty currently has 139 countries committing to its goal. To join its treaty, countries that regularly engage in artisanal gold mining are required to report baseline mercury emission estimates on a regular basis and offer a national action plan for how they will eventually reduce their country's footprint for mercury. -- Monifa Thomas-Nguyen Story Source: Journal Reference: Cite This Page:
Environmental Science
Photo: Eight-year-old Chelsea Symonds carries a bucket of collected rainwater in her family's yard in the drought-affected town of Murrurundi, New South Wales, Australia, on February 17, 2020. (Loren Elliott/Reuters)

- Rainwater across Earth contains levels of "forever chemicals" unsafe to drink, a study suggests.
- Per- and polyfluoroalkyl substances (PFAS), linked to cancer, are pervading homes and environments.
- PFAS levels across the planet are unsafe, and the substances must be restricted, researchers say.

Rainwater is no longer safe to drink anywhere on Earth by US contamination guidelines, according to a team of environmental scientists. That's because rainwater across the planet now contains hazardous chemicals called per- and polyfluoroalkyl substances (PFAS). In a paper published in the journal Environmental Science & Technology on August 2, researchers at the University of Stockholm, which has been studying PFAS for a decade, found evidence that these substances have spread throughout the entire atmosphere, leaving no place untouched. There are thousands of PFAS, all human-made, used in food packaging, water-repellent clothing, furniture, carpets, nonstick coating on pots and pans, fire-extinguishing foams, electronics, and some shampoos and cosmetics. During manufacturing and daily use, they can be released into the air. They also leach into ocean water and get aerosolized in sea spray. From there, they spread through the atmosphere and fall back to Earth in rain.

Photo: Commuters with umbrellas cross a road during heavy rains caused by Cyclone Asani, in Kolkata, India, on May 10, 2022. (Rupak De Chowdhuri/Reuters)

They're often called "forever chemicals" because they linger for a long time without breaking down, allowing them to build up in people, animals, and environments. PFAS have been found in Antarctica and in Arctic sea ice. Their prevalence across the planet is a hazard to human health, since peer-reviewed studies have linked them to some cancers, decreased fertility, reduced vaccine response, high cholesterol, and developmental delays in children. Like microplastics, it is difficult to identify all the long-lasting health effects of PFAS exposure because they include so many different compounds and are so prevalent in the environment. The new paper suggests that everybody on Earth is at risk.

Under EPA limits, 'rainwater everywhere would be judged unsafe to drink'

Photo: A woman and her granddaughter stand beside a rainwater tank used for washing and cleaning in San Miguel Xicalco, on the outskirts of Mexico City, Mexico, on March 4, 2016. (Henry Romero/Reuters)

Perhaps the most notorious among these substances are perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS). In June, based on new evidence about health impacts, the Environmental Protection Agency significantly tightened its guidelines for how much PFOA and PFOS can safely be present in drinking water. Previously, the EPA had set the acceptable level for both substances at 70 parts per trillion. The new guidelines cut that by a factor of up to 17,000 — limiting safe levels to 0.004 parts per trillion for PFOA and 0.02 parts per trillion for PFOS. The University of Stockholm researchers assessed the levels of PFOA, PFOS, and two other PFAS in rainwater and soil across the planet, and compared them to regulators' limits.
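The size of that tightening is simple division: the old 70-parts-per-trillion advisory over each new interim value. A quick check follows (the PFOA ratio comes out at 17,500, slightly above the rounded "factor of up to 17,000" quoted above):

```python
OLD_ADVISORY_PPT = 70.0
NEW_ADVISORIES_PPT = {"PFOA": 0.004, "PFOS": 0.02}

for compound, new_limit in NEW_ADVISORIES_PPT.items():
    factor = OLD_ADVISORY_PPT / new_limit
    print(f"{compound}: tightened by a factor of {factor:,.0f}")

# PFOA: tightened by a factor of 17,500
# PFOS: tightened by a factor of 3,500
```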
Both substances' levels in rainwater "often greatly exceed" EPA limits, the study authors concluded."Based on the latest US guidelines for PFOA in drinking water, rainwater everywhere would be judged unsafe to drink," Ian Cousins, the lead author of the study and professor at the University of Stockholm Department of Environmental Science, said in a press release."Although in the industrial world we don't often drink rainwater [directly], many people around the world expect it to be safe to drink, and it supplies many of our drinking water sources," Cousins added.The paper also found that soil across the globe was "ubiquitously contaminated" with PFAS. Because PFAS persist for so long and cycle through the planet's oceans, atmosphere, and soil so effectively, the researchers expect levels will continue to be dangerously high.Ultimately, the researchers conclude that PFAS have exceeded the safe "planetary boundary" for human health."It is vitally important that PFAS uses and emissions are rapidly restricted," they wrote.
Environmental Science
At the surface, the Arctic Ocean is pure serenity: chunk after chunk of bright-white ice, lazily floating around. What you can’t see is that its underside is covered in green snot, à la the ectoplasm from Ghostbusters—an underwater forest of Melosira arctica, algae that grow into sticky, dangling “trees” several feet long. While not appetizing to you or me, Melosira arctica forms the foundation of the Arctic Ocean food chain. During the spring and summer, its individual photosynthetic cells grow quickly, absorbing the sun’s energy and forming long chains. These become food for small surface-dwelling critters known as zooplankton, which are in turn eaten by bigger animals, like fish. The clusters also detach and sink thousands of feet to feed sea cucumbers and other seafloor scavengers. But now this algal ecosystem—like literally everywhere else on the planet—is thoroughly infested with microplastics, which ride on currents and blow in from faraway metropolises to settle on ice and snow. This is likely to have major consequences not just for Arctic organisms, but for the way that the ocean sequesters carbon from the atmosphere. A paper published today in the journal Environmental Science and Technology finds that, on average, this algae is laced with 31,000 plastic particles per cubic meter—thanks to its gelatinous tendrils. “The algae form long strands or curtain-like structures and produce a sticky mucus that likely helps to trap microplastic particles efficiently from their surroundings,” says marine biologist Melanie Bergmann of the Alfred Wegener Institute in Germany, lead author of the paper. Indeed, the concentration of microplastics (particles smaller than 5 millimeters) in the algae is 10 times higher than the 2,800 particles the scientists found per cubic meter of water. Sea ice is even more contaminated: Bergmann’s previous research found 4.5 million particles per cubic meter. This astronomical figure is due to floating sea ice’s ability to “scavenge” particles from seawater as it freezes, all while getting dusted with atmospheric microplastics falling from above. As Melosira arctica grows on this ice, its stickiness attracts microplastics from the surrounding water. Later, when the ice melts, those trapped particles are liberated, releasing a concentrated dose of microplastics. A whopping 94 percent of the microplastics the researchers found in the algae were smaller than 10 microns (a micron is a millionth of a meter). “Because it’s a filamentous algae, and the cells are quite small, it’s collecting all the small stuff preferentially,” says Deonie Allen, a coauthor of the paper and a microplastics researcher at the University of Birmingham and University of Canterbury. “And all the really small stuff ends up making the biggest impact on the ecosystem.”

Photograph: Mario Hoppmann/Alfred Wegener Institute

The smaller a particle is, the more organisms it can get into. Plastics can break down so small that they enter individual cells of either the algae or the zooplankton that feed on them. The researchers can’t yet say if all that microplastic is harming Melosira arctica. But additional lab research has found that plastic particles can be toxic for other forms of algae. “In experiments with very high doses of microplastics, small microplastics damaged and entered algal cells, leading to stress responses such as damage of chloroplasts and thus inhibition of photosynthesis,” says Bergmann.
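The concentrations reported above imply simple enrichment factors relative to the surrounding seawater, which a few lines make explicit (the sea ice figure is from Bergmann's earlier study cited in the text):

```python
water_ppm3 = 2_800        # microplastic particles per cubic meter of seawater
algae_ppm3 = 31_000       # particles per cubic meter of Melosira arctica
sea_ice_ppm3 = 4_500_000  # particles per cubic meter of sea ice (earlier study)

print(f"algae vs. water:   ~{algae_ppm3 / water_ppm3:.0f}x")     # ~11x (article rounds to 10)
print(f"sea ice vs. water: ~{sea_ice_ppm3 / water_ppm3:,.0f}x")  # ~1,607x
```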
There’s another concern, too: If enough plastic gathers on the algae, it could block sunlight from reaching the cells, further interfering with photosynthesis and growth. “This study really does contribute to a growing body of research that shows that these microscopic organisms and these microscopic plastics can compound and become a really macroscopic problem,” says Anja Brandon, associate director of US plastics policy at the Ocean Conservancy, who wasn’t involved in the study. “This algae in the Arctic, and phytoplankton throughout the marine environment, make up the fundamental backbone of the marine food web.” But the proliferation of plastic could devastate that web. As summer temperatures rise and the Arctic’s sea ice deteriorates, more and more algae clumps can break free and sink, carrying those microplastics with them into new ecosystems. That could be why scientists are also finding gobs of the particles in Arctic Ocean sediments. “There’s a whole community right underneath where the ice is melting,” says Steve Allen, a microplastics researcher at the Ocean Frontiers Institute and coauthor of the new paper. The sinking algae is a kind of “conveyor belt” of food to benthic creatures like sea cucumbers and brittle stars, he says. In this sensitive ecosystem, nourishment is relatively scarce compared to, say, in a tropical reef. If a sea cucumber is already making do with limited amounts of food trickling down from the surface, it would be bad to load that food with inedible plastic. This is known as “food dilution” and has been shown to be a problem for other small animals, which fill up on microplastics while reducing their appetite for actual food. Jagged plastic particles can also cause severe scarring of the gut, as was recently shown in seabirds with a new disease known as plasticosis. And that’s to say nothing of the potential chemical contamination to an animal’s digestive system: At least 10,000 chemicals have been used to make plastic polymers, a quarter of which scientists consider to be of concern.

Photograph: Julian Gutt/Alfred Wegener Institute

Microplastic contamination of Melosira arctica could have serious effects on the carbon cycle, as well. As algae grows, it absorbs carbon, as plants do on land. When it sinks to the seafloor, it sequesters that carbon in the depths. But if microplastic inhibits their growth, the algae will absorb less of the stuff. Or if the pollutant makes the algae break apart more easily, that will give the scavengers in the water column more opportunities to consume it, thus keeping some of the carbon from reaching the seafloor. And if scavengers eat the plastic, even their waste may be less likely to make it to the bottom of the ocean: When scientists fed microplastics to zooplankton known as copepods in the lab, the particles made their fecal pellets slower to sink and easier to break apart. That’s bad both for carbon sequestration and for the animals that rely on this waste as a food source. All of this feeds into the dramatic transformation of the Arctic, which is now warming more than four times faster than the rest of the planet. Atmospheric plastics that settle on sea ice—especially bits of black car tires—absorb more of the sun’s energy and may accelerate melt. That exposes more dark ocean waters, which absorb more heat and melt more ice.
Altogether, there’s less sea ice, and therefore less space for Melosira arctica to do its carbon-absorbing work—and more melting, which releases a tide of accumulated plastics.Bergmann thinks this situation will only get worse as a warmer Arctic leads to more human development, and therefore more plastic trash. “As the sea ice retreats, human activities in the region increase,” says Bergmann. “As a matter of fact, they already have—fisheries, tourism, shipping—which will perpetuate pollution.”
Environmental Science
New study indicates chemicals from grocery stickers may be leaching into foods. Here's what you need to know

Although BPA is tightly regulated, related compounds aren't and are still used in food packaging

The next time you're in the grocery store, you may want to take a look at how the fresh food is packaged. According to new research, toxic chemicals similar to bisphenol A (BPA) are leaching from certain labels through packaging, and into the meat, seafood, produce and other foods purchased in some Canadian and U.S. grocery stores. "We identified the thermal labels are a source [of BPA-like compounds] in our diet directly ... so far in the world, no one had identified that the packaging could be a source of bisphenol S to the diet," said Stéphane Bayen, a professor at McGill University in Montreal and senior author of the newly published study. Bisphenol S (BPS) and BPA have been studied for their possible effects on health. Research has shown their ability to disrupt hormones and have negative effects on growth, brain function, the reproductive system and the immune system. Bisphenols have many applications and are frequently used in the manufacturing of various plastics and thermal paper. Over the past decade, Canada has tightened its BPA regulations in an effort to phase out its use, including making it illegal to manufacture, import, advertise or sell baby bottles that contain BPA. Meanwhile, BPS and other compounds highly similar to BPA remain unregulated and have been adopted as substitutes by the industry for various products, including thermal food labels — where you can find the price, best-before date, ingredients and other information on foods packaged in store. Scientists have long warned that regulating BPA alone may not be making products any safer. Research at the University of Guelph in Ontario suggests BPS has similar effects to BPA on the heart, and literature reviews that synthesize available research have concluded BPS is equally or "more toxic." Though BPA free, the thermal labels examined in the study by Bayen and his colleagues contained and transferred high amounts of related compounds — including bisphenol S (BPS) — that are known to have effects on humans similar to those of BPA. "Only a few [researchers] detected bisphenol S in food before [but] the source was completely unknown," Bayen said.

BPS level over 22 times higher than EU limit

The McGill study measured the concentrations of BPS and other BPA substitutes in labels, packaging and products purchased in stores. The research was published in March in the journal Environmental Science & Technology, with funding from the Canadian Institutes of Health Research and the Canada Foundation for Innovation. Grocery stores often use thermal food labels, which contain BPS to allow the paper to change colour when exposed to heat. The McGill researchers collected a total of 140 samples of food packaging materials from grocery stores in Canada (Montreal and Victoria) and the U.S.
The materials in question, such as thermal labels, are used in almost all grocery stores. They tested the materials and the food inside for several BPA-like compounds, then measured their migration from the labels into fish from each store experimentally. The results clearly showed that BPS and other BPA-like compounds were leaching into the food from the thermal labels, while other packaging did not appear to be a significant source. "The levels in which they found it ... exceeded the levels recommended by the European Union," said Glen Pyle, a molecular cardiologist at the University of Guelph who was not associated with the study. Pyle was part of the team that researched the effects of BPS on the heart. CBC reached out to Health Canada to comment on the latest research. In a statement, the federal department said the amounts of BPS in food are "currently monitored" and "are not considered to pose a health concern based on estimates of dietary exposure." However, the data used to reach this conclusion does not seem to include fresh food. The statement linked to a series of reports by the Canadian Food Inspection Agency (CFIA), which tested canned food for various bisphenols, including BPS, and generally found little to none — nothing like the levels detected by Bayen and colleagues. Health Canada did not comment on the levels of BPS measured in this study. Unlike Canada, the European Union specifically regulates the amount of BPS that can migrate from packaging into food. Samples collected for the McGill study far exceeded those limits, with BPS transfer measured at up to 23 times higher than the 50 nanogram per gram wet weight limit. Canada does prohibit the sale of food in packaging that may transfer harmful compounds to the contents. It is unclear what levels of BPS transfer would be in violation of that regulation. Pyle said those EU limits are evidence based and adjusted as new research emerges. "One of the interesting things that has occurred as we've done more research into bisphenols is the safety levels have consistently been lowered as we discover more and more about how these compounds work and the health risks they pose to humans." How to minimize exposure to BPS There are ways to reduce your exposure to BPS, said the experts who were interviewed. But they noted that thermal labels are widely used and a lack of regulation in Canada makes it difficult to know what contains the compound. Their recommendations include: - Bagging produce yourself rather than purchasing pre-packaged produce with thermal labels. - Picking up your meat from the butcher counter. - Bringing your own container or aluminum foil and asking for it to be used to package fresh meat or fish. - Asking to have the label placed under the Styrofoam tray instead of on top (as researchers found that the parts of fish directly under a label had higher concentrations of BPS and other chemicals). "Unfortunately, since the pandemic, we find every fresh food now is [packaged] in these trays with the thin film on top of it — meat, fish, seafood products — but now you can also find this for dairy products, for bread, sometimes for vegetables," said Bayen. This shift is an issue, Bayen said, since compounds like BPS seem to be able to migrate from thermal labels into all of these products. Different stores varied in the levels of BPS and similar compounds in their labels. These other compounds the researchers detected included several members of the bisphenol family that are not well known. 
"We should also have a look at these chemicals, but there's no information at all on on what would be a safe level ... so a lot more work has to be done," Bayen said. He feels the study also highlights some of the shortfalls of our current safety monitoring systems. "The way surveillance works is that we always look [for] what we know ... there is a need to to have all in our surveillance, to have tools that look for things that we didn't expect or we didn't know would be present."
Environmental Science
New research co-led by Simon Fraser University and the University of British Columbia shows that amplified global warming in the Canadian High Arctic drove a profound shift in the structure of a river network carved into a permafrost landscape in only 60 years. Documenting a powerful interplay among climate change, the freeze-thaw dynamics of polygonal ground and the delivery of surface water by floods as well as snow and ice melting, the team developed a new view of the physical controls governing the speed and pattern of river channel development in these fragile landscapes. "One of the key processes we identified in the evolution of stream networks is that their development is influenced by the way water flows through fields of roughly 10 metre-wide polygons, created through the freezing and thawing of the soil in Arctic regions," says Shawn Chartrand, assistant professor in the School of Environmental Science at Simon Fraser University and lead author of research published today in Nature Communications. "This influence is also affected by the timing, magnitude and duration of flood events, as well as whether the underlying sediment particle substrates are frozen or partially frozen." Chartrand is part of an international research team that arrived at the uninhabited island of Axel Heiberg at the start of one of the most intense summer warming events ever recorded. Their field research focused on the island's Muskox Valley, east of the Muller Ice Cap. Researchers combined air photographs from 1959 with field observations and state-of-the-art Light Detection and Ranging (LiDAR) data they collected in 2019 to understand how the Axel Heiberg Island landscape has evolved over a 60-year period. "Interconnected physical processes can deepen river channels and expand river networks, creating more surface area for heat exchange, which can increase local rates of permafrost thaw," says study co-author Mark Jellinek, professor of Earth, Ocean and Atmospheric Sciences at the University of British Columbia. "These cascading effects can enhance the release of greenhouse gases in the Arctic as organic soil carbon thaws and the permafrost retreats." Using the LiDAR data, the team produced a Digital Elevation Model (DEM) of a 400-metre section of the valley. "Through modeling of how water moves through the landscape, we found that flood waters routed through interconnected polygon troughs enhance the likelihood of erosion and channel development," says Chartrand. Flooding from the valley lake, and seasonal melt of the snowpack and ground ice, contributes water that coalesces down valley, setting the conditions for coarse sediment transport and the development of channel networks along the valley floor. However, the timing of flooding during peak thaw can influence how much erosion occurs. "Warming air temperatures play a role here," he explains. "We predict that erosion and sediment transport are sensitive to whether floods occur before or after a period of elevated air temperatures, because this influences the depth to which sediment particle substrates are thawed, and thus affects whether the particles are transported by flood waters." Researchers say the challenge going forward will be to apply this data to produce predictive physical models that help to understand how Arctic river networks will evolve over future decades marked by both warming and intensifying climate variability.
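The DEM-based flow modeling described above can be illustrated with the classic D8 routing rule, in which each grid cell drains to its steepest downhill neighbor. The sketch below is a generic textbook illustration of that rule, not the team's actual model.

```python
import numpy as np

def d8_downhill(dem: np.ndarray, row: int, col: int):
    """Return (row, col) of the steepest-descent neighbor, or None at a pit."""
    best_drop, target = 0.0, None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr, dc) == (0, 0):
                continue
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                continue
            drop = (dem[row, col] - dem[r, c]) / np.hypot(dr, dc)  # slope-weighted drop
            if drop > best_drop:
                best_drop, target = drop, (r, c)
    return target

# Toy 4x4 elevation grid sloping toward the lower-right corner.
dem = np.array([[9.0, 8.0, 7.0, 6.0],
                [8.0, 7.0, 6.0, 5.0],
                [7.0, 6.0, 5.0, 4.0],
                [6.0, 5.0, 4.0, 3.0]])
print(d8_downhill(dem, 1, 1))  # (2, 2): water heads diagonally downslope
```

Chained over every cell, a rule like this traces where floodwater collects, which is the sense in which interconnected polygon troughs concentrate flow and set up erosion.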
They point to added urgency as expanding river networks will carry greater sediment loads as well as nutrients and metals into fragile watersheds and fisheries with potentially significant consequences for coastal wildlife, waters and populations. The research team also included scientists from the Finnish Geospatial Research Institute, Laboratoire de Planétologie et Géosciences (UMR CNRS 6112), University of Western Ontario and the Jet Propulsion Laboratory.
Environmental Science
A mixture of trees purifies urban air best Conifers are generally better than broadleaved trees at purifying air from pollutants. But deciduous trees may be better at capturing particle-bound pollution. A new study led by the University of Gothenburg shows that the best trees for air purification depend on the type of pollutant involved. Trees and other greenery in cities provide many benefits that are important for the well-being of residents. Leaves and needles on trees filter air pollutants and reduce exposure to hazardous substances in the air. But which trees purify the air most effectively? Researchers from the University of Gothenburg have collected leaves and needles from eleven different trees growing in the same place in the Gothenburg Botanical Garden’s arboretum (tree collection) to analyse which substances they have captured. “This tree collection provides a unique opportunity to test many different tree species with similar environmental conditions and exposure to air pollutants,” says Jenny Klingberg, a researcher at the Gothenburg Botanical Garden. Harmful pollutants A total of 32 different pollutants were analysed, some of which are bound to particles of various sizes. Others are gaseous. There is a proven connection between exposure to air pollutants and increased risk of cardiovascular diseases and airway problems. This project has focused on polycyclic aromatic hydrocarbons (PAHs). In cities, traffic is the biggest source of these pollutants, which are released due to incomplete combustion in engines. “Our analyses show that different tree species have different abilities to absorb air pollutants. Conifers generally absorbed more gaseous PAHs than broadleaved trees. Another advantage of conifers is that they also act as air purifiers in winter, when air pollution is usually at its highest,” says Jenny Klingberg. Needles clean air for many years The researchers also saw that needles continued to absorb air pollutants for several years, which leaves cannot do for obvious reasons. But broadleaved trees had other advantages. They were more efficient at cleaning the air of particles, which is thought to be due to the leaves having a larger surface area to which particles can attach. “The various species differed more than we expected. Larch, which is a conifer that sheds its needles each autumn, was the best in the test. Larch trees absorbed the most particle-bound pollutants, but were also good at capturing gaseous PAHs,” says Jenny Klingberg. Needles and leaves do not, however, break down pollutants to any greater extent, even if sunlight can start that process. Thus there is a risk that the soil beneath the trees will be contaminated by pollutants when the leaves and needles are shed and decompose. This places the ecosystem in the soil at risk of being affected, though this has not been investigated in the current study, published in the journal Ecological Indicators. “The pollutants do not appear to impact the trees’ photosynthesis; leaf chlorophyll content is just as high in the most polluted areas of Gothenburg as in trees that grow in less polluted environments. But this likely looks different in cities with even worse air quality,” says project leader Håkan Pleijel, professor of applied environmental science at the University of Gothenburg. Careful urban planning is needed However, you should not simply start filling city streets with trees to improve air quality for residents. Several factors determine the benefit.
An alley of trees in a narrow street canyon can reduce air flow, negatively affecting the dispersion and dilution of air pollutants and thereby increasing concentrations of contaminants locally on busy streets. This means that on narrow streets sheltered from wind, lower-growing vegetation, like hedges, may be preferable. Careful urban planning is necessary, combining different tree species to optimise air purification and to take into account other functions and benefits of trees, according to the researchers. “This study contributes to improving our understanding of the ability of trees to clean the air and which species are best at absorbing air pollutants,” says Håkan Pleijel. This knowledge is important for urban planning when designing sustainable cities. While trees and greenery can contribute to better air quality in cities, at the end of the day the most important measure is to reduce emissions. More information: H. Pleijel et al, Differences in accumulation of polycyclic aromatic compounds (PACs) among eleven broadleaved and conifer tree species, Ecological Indicators (2022). DOI: 10.1016/j.ecolind.2022.109681 Provided by University of Gothenburg
Environmental Science
At the start of his third year of graduate school, Kazi Albab Hussain became a father. As a new dad and a PhD student studying environmental nanotechnology, plastic was on his mind. The year before, scientists had discovered that plastic baby bottles shed millions of particles into formula, which infants end up swallowing (while also sucking on plastic bottle nipples). “At that time,” Hussain says, “I was purchasing many baby foods, and I was seeing that, even in baby foods, there are a lot of plastics.” Hussain wanted to know how much was being released from the kinds of containers he’d been buying. So he went to the grocery store, picked up some baby food, and brought the empty containers back to his lab at the University of Nebraska—Lincoln. In a study published in June in Environmental Science & Technology, Hussain and his colleagues reported that, when microwaved, these containers released millions of bits of plastic, called microplastics, and even tinier nanoplastics. Plastics are complex cocktails of long chains of carbon, called polymers, mixed in with chemical additives, small molecules that help mold the polymers into their final shape and imbue them with resistance to oxidation, UV exposure, and other wear and tear. Microwaving delivers a triple whammy: heat, UV irradiation, and hydrolysis, a chemical reaction through which bonds are broken by water molecules. All of these can cause a container to crack and shed tiny bits of itself as microplastics, nanoplastics, and leachates, toxic chemical components of the plastic. The human health effects of plastic exposure are unclear, but scientists have suspected for years that they aren’t good. First, these particles are sneaky. Once they enter the body they coat themselves with proteins, slipping past the immune system incognito, “like Trojan horses,” says Trinity College Dublin chemistry professor John Boland, who was not involved in this study. Microplastics also collect a complex community of microbes, called the plastisphere, and transport them into the body. Our kidneys remove waste, placing them on the front lines of exposure to contaminants. They are OK at filtering out the relatively larger microplastics, so we probably excrete a lot of those. But nanoplastics are small enough to slip across cell membranes and “make their way to places they shouldn’t,” Boland says. “Microplastics are like plastic roughage: They get in, and they get expelled,” he adds. “But it’s quite likely that nanoplastics can be very toxic.” Once they’ve snuck past the body’s defense systems, “the chemicals used in plastics hack hormones,” says Leonardo Trasande, a professor at the NYU Grossman School of Medicine and the director of the Center for the Investigation of Environmental Hazards. Hormones are signaling molecules underlying basically everything the body does, so these chemicals, called endocrine disruptors, have the potential to mess with everything from metabolism to sexual development and fertility. “Babies are at greater risk from those contaminants than full-grown people,” Hussain says. So to test how much plastic babies are exposed to, Hussain’s team chose three baby-food containers available at a local grocery store: two polypropylene jars labeled “microwave-safe” according to US Food and Drug Administration regulations, and one reusable food pouch made of an unknown plastic. They replaced the original contents of each container with two different liquids: deionized water and acetic acid.
Respectively, these simulate watery foods like yogurt and acidic foods like oranges. They then followed FDA guidelines to simulate three everyday scenarios using all three containers: storing food at room temperature, storing it in the refrigerator, and leaving it out in a hot room. They also microwaved the two polypropylene jars for three minutes on high. Then, for each container, they freeze-dried the remaining liquid and extracted the particles left behind. For both kinds of fluids and polypropylene containers, the most microplastics and nanoplastics—up to 4.2 million and 1.2 billion particles per square centimeter of plastic, respectively—were shed during microwaving, relative to the other storage conditions they tested. In general, they found that hotter storage temperatures cause more plastic particles to leak into food. For example, one polypropylene container released over 400,000 more microplastics per square centimeter after being left in a hot room than after being stored in a refrigerator (which still caused nearly 50,000 microplastics and 11.5 million nanoplastics per square centimeter to be shed into the stored fluid). “I got terrified seeing the amount of microplastics under the microscope,” Hussain says. To test what these plastics do to our bodies once they’re consumed, the team bathed human embryonic kidney cells in the plastic roughage shed by the baby-food containers. (The team chose this kind of cell because kidneys have so much contact with ingested plastic.) After two days of exposure to concentrated microplastics and nanoplastics, about 75 percent of the kidney cells died—over three times as many as cells that spent two days in a much more diluted solution. While the concentration of plastic used in these solutions was higher than what a baby would be exposed to by eating from a microwaved food jar in real life, Hussain notes that the full extent of plastic particle accumulation over time—from food and from the air and surfaces—is unknown, and might be high. So, he says, it’s important to study the health effects of high levels of exposure. While Hussain’s team was the first to test the toxicity of plastics on cells using the particles released from commercially available food containers, a review published in the Journal of Hazardous Materials last year found that exposure to microplastics can cause cell death, inflammation, and oxidative stress. “Plastics are a huge problem for human health,” says Trasande. “This study just pushes the concern even further.” Micro- and nanoplastics aren’t the only particles leaking out of plastic containers and into food. When plastic is broken apart by heat, tons of chemical additives fly out as well. Boland notes that while the techniques used in Hussain’s experiment could not distinguish between plastic polymers and chemical additives, “both are probably toxic.” We don’t know whether chemical additives are as bad as nanoplastics (or worse), but “at the end of the day,” he says, “none of the stuff that’s emerging from these plastics is very good for anybody.” Judith Enck, a former EPA regional administrator and the president of Beyond Plastics, a policy and advocacy group against plastic pollution, stopped microwaving plastic 30 years ago. She thinks that you should, too: “My goodness, especially if you have kids or if you’re pregnant, do not put plastic in the microwave.” “It’s a pain in the neck,” she acknowledges, but “even this one study should be a wake-up call—not just to new parents but to the FDA.
They need to be far more proactive.” Trasande agrees: “The FDA is glacially behind.” To get a plastic product approved for food or beverage packaging, a manufacturer needs to submit a limited amount of self-reported data to the FDA. But the agency doesn’t have the resources to test the safety of all plastic products before they go on the market or to spot-check them once they’re available in stores. Polypropylene is considered safe for food contact—even in the microwave—by the FDA, which allows companies to use it for packaging things like baby food. Boland disagrees: “I don’t believe that there are microwave-safe plastics.” Trasande and Enck agree that while independent studies should continue testing how much plastic is being released from food packaging, there is already enough evidence to show that “microwave-safe plastic” isn’t really safe. “I think the FDA needs to tell companies that they can no longer say any plastic is microwavable,” says Enck. Broadly reducing human exposure to plastics will require government action and sweeping corporate change, says Trasande. After all, they’re in the air, in the water, and inside you. Enck doesn't think manufacturers are likely to make the first move. “Corporations will continue to use plastic for as long as they can, because it’s cheap. That motivates them more than anything,” she says. Even if a new technology emerged that could prevent plastic containers from shedding particles, Boland suspects that companies wouldn’t adopt it without being forced to do so by regulation. In principle, food companies and plastics manufacturers could be “opening themselves up to litigation for past products,” he says, since changing their packaging would imply that they had been knowingly producing something that released microplastics before. Enck says that one potential solution could be to create a third-party certification program connecting food companies to independent scientists who can test their products and report results to the FDA. On an individual level, there are still some things people can do: Opt for reusable glass and stainless steel. Don’t pour hot liquids into plastic containers. And, please, stop microwaving plastic. Boland says scientists should keep doing research to understand exactly what particles are being released from plastics under specific conditions. “If you can’t measure,” he says, “you can’t legislate.”
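To put the per-area shedding rates reported above in perspective, a rough scaling exercise is shown below. The per-square-centimeter rates are the upper bounds described in the article; the 75 cm² food-contact area is an assumed round figure for a small jar, not a measured value.

```python
# Scale the reported per-cm^2 shedding rates by an assumed container contact area.
micro_per_cm2 = 4.2e6   # microplastics per cm^2 after microwaving (reported upper bound)
nano_per_cm2 = 1.2e9    # nanoplastics per cm^2 after microwaving (reported upper bound)
area_cm2 = 75.0         # assumed food-contact area of a small jar (hypothetical)

print(f"microplastics per jar: {micro_per_cm2 * area_cm2:.1e}")  # ~3.2e+08 particles
print(f"nanoplastics per jar:  {nano_per_cm2 * area_cm2:.1e}")   # ~9.0e+10 particles
```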
Environmental Science
Study investigates climate change's impact on intensity, frequency and duration of extreme-weather events In an article published in the Proceedings of the National Academy of Sciences, Michael Mann, professor in the Department of Earth and Environmental Science in the University of Pennsylvania's School of Arts & Sciences, and colleagues from Clemson University, the University of California Los Angeles, and Columbia University investigate how climate change exacerbates compound heat and drought events. Their findings offer new insights into predicting their interplay, which will provide scientists and policymakers with a clearer and more holistic approach to preventing and preparing for extreme-weather events. "We wanted to see how the state-of-the-art climate models used in the most recent assessment reports of the Intergovernmental Panel on Climate Change address the episodes of heat waves and droughts that have given rise to some of the worst wildfires we've witnessed in recent history," Mann says. "We also wanted to get a better understanding of how often these events were occurring, their typical durations, and their intensity to improve not only our forecasting but approaches to mitigating further damage to human life." Compound drought and heat wave events and their effects The researchers document the deleterious effects of increasingly severe droughts and wildfires occurring in the past three years. "Two standout events," Mann says, "were the 2020 California wildfires and the 2019–20 Australian bush fire season, which lasted nearly one whole year and came to be known as the Black Summer. These are known as compound drought and heat wave (CDHW) events and refer to situations wherein a region experiences both prolonged hot temperatures and a shortage of water." These conditions can occur together and worsen each other's impacts, the researchers say, and could potentially lead to heat-related illnesses and deaths, water scarcity for drinking and agriculture, reduced crop yields, increased wildfire risk, and ecological stress. They also note that anthropogenic climate change—climate change that is driven by human activity—can contribute to the frequency and severity of these events. Projected impact of a worst-case versus moderate-case scenario The researchers compared two contrasting socioeconomic pathways: the high-end or worst-case scenario, wherein society fails to mitigate the effects of anthropogenic climate change, and a moderate scenario, wherein some conservative measures are put in place and efforts are made to abide by them. In the worst-case scenario, they found that by the late 21st century approximately 20% of global land areas are expected to witness approximately two CDHW events per year. These events could last for around 25 days and increase fourfold in severity. "Comparatively, the average CDHW frequency over the recent observed reference period was approximately 1.2 events per year, lasting less than 10 days, with far less severity," Mann says. The most vulnerable geographical regions, such as eastern North America, southeastern South America, Central Europe, East Africa, Central Asia, and northern Australia, are projected to experience the largest increases in CDHW frequency by the end of the 21st century. "Interestingly, places like Philadelphia and some of the regions in the eastern U.S.
are where we expect to see an increase in these sorts of events; urban environments in the summertime will witness the highest relative frequency of these events," Mann says. Critical need for proactive measures The researchers emphasize the profound threat posed by more frequent and intense CDHW events in the coming decades and how strongly the severity of these events depends on the emissions pathway chosen. As climate change continues to unfold, addressing the escalating risks associated with CDHW events becomes crucial. This study contributes to the growing understanding of the projected changes in CDHWs and highlights the need for proactive measures, including emission reductions and adaptation strategies, to build resilience and safeguard vulnerable regions from the impacts of compound drought and heat wave events. "Our findings provide important scientific context for the record heat and wildfire that we're witnessing right now here in the United States," Mann says. "They underscore that we need to get off fossil fuels as quickly as possible to prevent a worsening of these dangerous combinations of heat and drought." More information: Kumar P. Tripathy et al, Climate change will accelerate the high-end risk of compound drought and heatwave events, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2219825120 Journal information: Proceedings of the National Academy of Sciences Provided by University of Pennsylvania
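A minimal sketch of how compound drought and heat wave days might be flagged in daily data follows. The 90th-percentile temperature threshold, the drought-index cutoff of -1 and the three-day minimum duration are illustrative assumptions rather than the paper's definitions, and the input series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
temp = rng.normal(25.0, 5.0, 365)      # synthetic daily mean temperature (deg C)
spi = rng.normal(0.0, 1.0, 365)        # synthetic standardized drought index

hot = temp > np.percentile(temp, 90)   # heat-wave condition (assumed threshold)
dry = spi < -1.0                       # drought condition (assumed threshold)
compound = hot & dry

# Group consecutive compound days into events lasting at least 3 days.
events, start = [], None
for day, flag in enumerate(compound):
    if flag and start is None:
        start = day
    elif not flag and start is not None:
        if day - start >= 3:
            events.append((start, day - start))
        start = None
if start is not None and compound.size - start >= 3:
    events.append((start, compound.size - start))

print(len(events), "CDHW events; durations:", [d for _, d in events])
```

Counting such events per year, and summing their durations and exceedances, yields frequency, duration and severity statistics of the kind the study compares across scenarios.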
Environmental Science
Keeping California's oil in the ground will improve health but affect jobs, finds study As society reckons with climate change, there's a growing call to keep fossil fuels right where they are, in the ground. But the impact of curtailing oil production will depend on the policies we implement to achieve this. An interdisciplinary team of researchers investigated the carbon emissions, labor and health implications of several policies to reduce oil extraction, with a special focus on how the effects vary across different communities in California. Their results, published in Nature Energy, illustrate the tradeoffs between different strategies. For instance, models banning oil extraction near communities produced greater health benefits across the state, but they also led to more job losses, with disadvantaged communities feeling about one third of both the costs and the benefits. With a goal to reach carbon neutrality by 2045, California is currently implementing some of the world's most ambitious climate policies. As the country's seventh largest oil-producing state and the world's fifth largest economy, California provides a unique setting to study supply-side decarbonization policies. It already has a carbon cap-and-trade program and is currently debating a setback policy that would ban new oil production near communities. Many considerations Petroleum production is a multifaceted endeavor. The greenhouse gas emissions from burning fossil fuels are the main driver of climate change. Extracting these resources also releases CO2, air pollution and toxic substances into the environment. Any policies seeking to curb oil production will affect people for better and worse. The industry employed 25,000 Californians in 2019, and provides tax revenue to local governments. "Our analysis is trying to quantify what those tradeoffs look like as the state considers different policies," said co-author Kyle Meng, an associate professor in UC Santa Barbara's economics department and the Environmental Markets Lab (emLab) at the Bren School of Environmental Science & Management. "We're taking traditionally climate-focused policies and comparing them along local impacts, health benefits and employment costs," added co-lead author Paige Weber, an environmental economist at UNC Chapel Hill, previously an emLab post-doc. The authors developed a framework to analyze the impact of three policies: an excise tax (paid per barrel); a carbon tax (paid per ton emitted); and setbacks at 1000 feet, 2500 feet and 1 mile. Taxes increase the cost of production, curbing activity and driving down emissions. Setbacks essentially ban extraction in areas where people live. In a previous study, the authors found that setbacks reduce overall production because it might not be economical to drill somewhere else. To compare the policies, each setback distance had a corresponding excise and carbon tax level that achieved the same emissions target in 2045. The authors started with a suite of models to predict oil production in California. Using historical data and economic theory, the team attempted to answer the following questions: Will they drill here? How much will a well produce? When will it shut down? The researchers then modeled the health impacts of oil production emissions as they spread across California's communities. Finally, they modeled the outcome that each policy would have on jobs and worker compensation.
The authors were especially curious about how these effects fell on people living in areas that meet California's definition of a disadvantaged community. They calibrated the health and labor consequences of each policy based on its ability to reduce carbon. "We ask, for the same greenhouse gas reduction, which policy has greater health benefits and fewer labor costs, and how are these benefits and costs distributed?" Meng explained. Always a tradeoff Setbacks offered the greatest air-quality improvements, especially to disadvantaged communities. If you move oil production away from where people live, they'll see health benefits. But there was a surprising tradeoff. When oil production is close to communities, so are the jobs it offers. "The same communities that benefit from cleaner air are also those facing labor market consequences," Meng said. During policy discussions, there's often disagreement between groups highlighting the health impact of oil production and those focused on the employment benefits. "They're often pitched as separate camps," Meng continued. "But our analysis shows that costs and benefits can be borne by the same communities." Carbon and excise taxes both work by raising production costs, but the two policies target different oilfields. An excise tax eliminates the most expensive operations first, and falls roughly in the middle in terms of job and health implications. "The cheapest way to reduce greenhouse gas emissions would be with a carbon tax because it goes after the most carbon-intensive oil extractors first," Weber said. But since it takes the smallest number of wells out of production per ton of carbon emissions reduced, a carbon tax offers the lowest total health benefits, while also leading to the lowest job losses. The authors believe their estimates of the health impacts are conservative. They focused solely on premature mortality, as other health impacts are more difficult to quantify. As a result, any action will likely improve the health of Californians more than what the study lays out. Similarly, the researchers expect they overestimated the labor impacts because their framework doesn't account for the possibility of re-employment. It assumes that every job lost results in unemployment. The path forward By 2045, California aims to reduce emissions in the transportation sector by 90% compared with 2019. And the Golden State is looking to many policies to achieve this. "It's a hotly debated issue right now because the governor just signed a law banning new oil drilling near communities," said co-lead author Ranjit Deshmukh, an assistant professor in UC Santa Barbara's Environmental Studies Program. The oil industry quickly circumvented this action by collecting enough signatures to place a referendum on the next ballot. "Unfortunately, even the largest setback distance did not reach the state's greenhouse gas reduction target," Weber said. "So, you'd need to combine a setback with another policy." The state currently has no plans to use an excise tax to reduce greenhouse gas emissions from oil extraction, the authors said. On the other hand, the state's cap-and-trade program functions much like a carbon tax. The only difference is that the market finds a price based on the cap, rather than it being set by the government. That said, the cap-and-trade program spans many sectors in the state, not just fossil fuel extraction. This paper captured employment and health impacts at a much finer resolution than previous studies.
Looking at, say, county averages for health benefits can be misleading, the researchers explained. Consider Los Angeles County: There's a lot of variation between people living in Compton and Hollywood, or Long Beach and Lancaster. "A much finer resolution analysis is needed to accurately answer the question of how different communities bear the costs or get the benefits of this oil phase-out," Deshmukh said. The empirical aspect of their framework was also an innovation. Most other studies used only engineering models to forecast production. Using detailed historical extraction data gave the authors more confidence in the accuracy of their projections. The team has begun similar work investigating the health and labor impacts of phasing out oil refining in California. And they plan to extend their analysis on petroleum production to the rest of the country. They hope their work will guide policymakers towards an effective, equitable solution for curbing fossil fuel extraction, one that maximizes its benefits while reducing its drawbacks. More information: Ranjit Deshmukh, Equitable low-carbon transition pathways for California's oil extraction, Nature Energy (2023). DOI: 10.1038/s41560-023-01259-y. www.nature.com/articles/s41560-023-01259-y Journal information: Nature Energy Provided by University of California - Santa Barbara
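The difference between the two tax instruments can be made concrete with a toy calculation: an excise tax penalizes every barrel equally, while a carbon tax penalizes barrels in proportion to their extraction emissions. Every number below is invented for illustration; the study works from detailed well-level historical data rather than anything this simple.

```python
import numpy as np

cost = np.array([40.0, 55.0, 62.0, 70.0])       # $/barrel production cost (assumed)
intensity = np.array([0.05, 0.02, 0.08, 0.03])  # t CO2e emitted per barrel (assumed)
price = 75.0                                    # $/barrel oil price (assumed)

excise_tax = 10.0    # $/barrel
carbon_tax = 200.0   # $/t CO2e

# A well keeps operating only if the oil price covers its cost plus tax burden.
runs_under_excise = price > cost + excise_tax
runs_under_carbon = price > cost + carbon_tax * intensity
print(runs_under_excise)  # [ True  True  True False]: the costliest well exits
print(runs_under_carbon)  # [ True  True False False]: carbon-intensive wells exit
```

This mirrors the article's point: for the same emissions goal, a carbon tax retires the most carbon-intensive operations first, while an excise tax retires the most expensive ones.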
Environmental Science
Americans tossed 292.4 million tons of trash into landfills in 2018, or about 5 pounds of garbage per person per day. Once in a landfill, much of that trash undergoes some wild chemistry, often polluting the surrounding area. But amid all the stinking refuse is potentially valuable material, and some environmentalists and engineers see landfills as a resource to be tapped. Landfill mining is the process of uncapping a landfill and sifting through its cells of garbage to reclaim any sort of e-waste, heavy metals, or other recoverable materials that can then be returned to manufacturers and recycled into new products. It sounds promising, but landfill mining has yet to take off widely. I asked some environmental scientists to explain why. Professor Emeritus of Environmental Science and Policy, University of Southern Maine and co-author of “Landfill mining: Case study of a successful metals recovery project,” published in the journal Waste Management There are millions of tons (and billions of dollars in value) of recoverable metals, plastics, and other materials currently stored in landfills. The simple reason mining does not occur is economics: for multiple reasons, the costs to mine a solid waste landfill are currently greater than the value of recoverable materials. The cost of any mining operation includes extraction of the target ore, processing to concentrate the ore (beneficiation), managing the associated wastes, transporting and selling the material, and finally closing and reclaiming the mine. These costs must be lower than the revenues made from selling the mined material. While the mining of solid waste landfills has many environmental benefits, it is subject to the same economic conditions as traditional mining. At a typical solid waste landfill, there is inadequate knowledge of what target material is present, where it is, and how much of it there is. This makes assessing the cost to mine and the potential revenues very difficult. Landfills also present an additional cost: not knowing how much and what types of hazardous materials are present and where they are located. Their presence raises serious worker safety and environmental risks. Processing landfilled waste to concentrate the material is perhaps the highest cost. For example, with metals, the typical concentration of metal in solid waste is about 5 percent. The other 95 percent of trash has to be temporarily removed and subsequently moved back into the landfill. And many metals are not isolated but are part of a multi-material article. The cost to remove all the non-metal components can exceed the market value of the steel. Finally, because of the significant environmental risks in mining a landfill, a landfill mining operation would be permitted only if there were sufficient contingency funds (or insurance). Sufficient funds would be required prior to the start of mining to ensure the mining operator could cover the costs to properly close and reclaim the mined landfill and to remediate any environmental impacts. Senior Researcher at the Center for Mineral Technology in Rio de Janeiro and co-author of “A comprehensive review of urban mining and the value recovery from e-waste materials,” published in Resources, Conservation and Recycling First, it is important to differentiate between urban mining and landfill mining. Urban mining is defined as the set of processes for recovering material from secondary resources, mostly waste or post-consumer products. Mining from landfills is one of the possibilities of urban mining.
Mining from landfills presents some limiting aspects, such as the content of organic material that can ‘contaminate’ recoverable materials, and also the diversity of materials, which makes it difficult to identify and classify them. In this respect it is analogous to the mining of low-grade deposits, which is economically unfeasible. Finally, urban mining tends to gain ground in new business models through the segregation of secondary resources at the source, such as selective collection and specific campaigns (e.g. e-waste, metal fractions). However, the main motivation for urban mining lies in the need to obtain materials classified as critical that are frequently available in quantity in secondary resources. For example, we have platinum in automotive catalysts, gold in printed circuit boards, and copper in wires and cables. Vice President and Landfill Practice Leader at HDR I recently helped facilitate a public open house for a proposed expansion of a lined landfill requiring mining of a closed but unlined legacy landfill. While most attendees understood the need and appreciated the care the community took to remove a liability and provide disposal capacity, there were a couple of residents who were insistent that if a portion of the landfill could be mined, why not mine the entire site and move the landfill to another part of the county? Their motivation was not recovery of recyclable materials or remediation, but righting a perceived wrong when the property was established as a landfill site over 40 years ago. This story showcases one of the issues facing landfill mining in the United States. In Europe, where they view landfills more negatively, they’ve been mining old landfill sites for decades. So why isn’t it more popular here? In simple terms, we have a separate set of motivations. Energy costs are still comparatively low because of the availability of fossil fuels and natural gas, so we don’t value the energy potential of waste buried in landfills. There is not a strong appetite for energy from waste in many locations that could benefit from the energy value of buried plastics and undegraded organic material. Landfill mining is expensive relative to the cost of developing a new landfill site or just transporting waste to a regional disposal facility in someone else’s backyard. The costs for excavation, physical screening, and managing odors and liquids can be significant barriers. The recycling market demand for steel, aluminum, or precious metals is not high enough in most instances to offset the costs of mining and cleaning. Americans have been separating valuable plastic and metals from the waste stream poorly for decades, despite long-running education efforts. Our waste could contain valuable material, but once in the ground, that value proposition is significantly lower. Environmental, social and governance pressure is ramping up but isn’t as mature as in other parts of the world. ESG will certainly affect our transition to a more circular and sustainable economy, but it likely won’t initiate widespread landfill mining and reclamation activities. Greenhouse gas emission reduction is not a strong motivator either. Much of the degradation of organic materials from old landfills occurred in the first decade or two following disposal. Unless the mined waste is less than 10 years old (then why did you bury it in the first place?), the net greenhouse gas reductions after considering the equipment and fuel required may be de minimis.
The popularity of landfill mining may increase over time, with some shifts in the factors above. In the meantime, we’ll continue to monitor Europe and Asia as they explore cost-effective methods. My 25 years of experience with landfills as a solid waste consulting engineer indicates that landfill mining is an intriguing proposition. But will it become a sizable portion of my practice in the next 15 years? Let’s just say I am not pinching my nose or holding my breath.
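The break-even logic in the first answer reduces to comparing recoverable value against excavation and processing cost. In the toy numbers below, only the roughly 5 percent metal concentration comes from the text; the tonnage, scrap price and unit costs are assumptions for illustration.

```python
waste_tonnes = 100_000           # assumed size of the mined landfill cell
metal_fraction = 0.05            # ~5% of landfilled waste is metal (from the text)
metal_price = 250.0              # assumed $/tonne for recovered mixed scrap

revenue = waste_tonnes * metal_fraction * metal_price

excavation_cost = 15.0           # assumed $/tonne to excavate and re-bury waste
processing_cost = 20.0           # assumed $/tonne to screen and sort
cost = waste_tonnes * (excavation_cost + processing_cost)

print(f"revenue ${revenue:,.0f} vs cost ${cost:,.0f}")
# revenue $1,250,000 vs cost $3,500,000 -> mining does not pay under these assumptions
```

Because the other 95 percent of the material must be handled to recover the 5 percent of interest, the per-tonne costs apply to everything excavated, which is why the economics are so hard to make work.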
Environmental Science
The world goes through hundreds of billions of single-use coffee cups every year—and most aren’t recycled. So major coffee chains’ switch to paper cups is a good step, right? Not quite. A recently published study shows that paper cups can be just as toxic as conventional plastic ones if they end up littered in our natural environment. Seemingly eco-friendly paper cups are coated with a thin layer of plastic to keep their contents from seeping into the paper, and this lining can emit toxic substances. “There are chemicals leaching out of these materials,” says lead author Bethanie Carney Almroth, an associate professor of environmental science at the University of Gothenburg in Sweden. When trying to assess the environmental impact of takeaway coffee cups, most experiments have focused on plastic lids and polystyrene cups. Paper cups have long been spared scrutiny. To address this oversight, Carney Almroth and her colleagues tested the effects of paper and plastic cups on midge larvae, which are commonly used in toxicity tests. The cups were placed in temperate water or sediment and left to leach for up to four weeks. The larvae were then kept in aquariums containing the water or sediment tainted by the paper and plastic cups. Regardless of the source of the contamination, the larvae grew less in the sediment, and exposure to the tainted water also hindered their development. The ecotoxicologists didn’t perform chemical analyses to see which substances had leached from the paper cups into the water and sediment, though Carney Almroth suspects that a mix of chemicals caused the damage. But it’s hard to say more, given that it’s not known which materials are present. “This would all be much easier if companies were required to tell us what they use in their products,” she says. Coffee cups are made of a complex mixture of synthetic materials and chemicals. Manufacturers add processing aids, heat stabilizers, and other substances, many of which are known to be toxic. Even if plant-derived materials are used—such as polylactic acid, a material derived from corn, cassava, or sugarcane that’s used to coat paper cups—cup makers often add a number of other chemicals during processing. Chemical analyses can sometimes shed light on the composition of the substances present in a plastic or paper cup, but even these tests can’t always identify what’s there, says Jane Muncke, who is an environmental toxicologist by training and now managing director of the Food Packaging Forum, a Switzerland-based science communication organization. The exact substances are “unknown not only to the scientists who carry out these analyses, but also to the people who produce and sell the packaging.” During the manufacture of plastic-containing products, unintentional chemical reactions between the materials used can take place, creating new substances. Chemicals can also be harmful because of the specific combinations they are used in, Muncke adds—something known as “mixture toxicity.” It thus makes little sense to regulate the amounts of individual substances in cups, she says, because you still can’t be sure what impact they’ll have. Improving recycling practices would be a logical step in trying to keep harmful chemicals from ending up in nature, but researchers say it’s best to retire disposable paper cups altogether. It’s difficult for most recycling centers to separate the plastic coating from the cup’s paper. In the UK, for instance, a mere handful of recycling centers take paper cups.
Many coffee shops will collect them for recycling—but having to drop paper cups off takes the convenience out of a single-use product. Today, only four out of every 100 paper cups are recycled in the UK. Plus, leaching chemicals isn’t just a problem when paper cups are littered—it can begin when a cup is used. In 2019, a research group from India filled paper cups with hot water to see if plastic particles or chemicals were released. “What came as a surprise to us was the number of microplastic particles that leached into the hot water within 15 minutes,” Anuja Joseph, a research scholar at the Indian Institute of Technology in Kharagpur, wrote in an email. On average, there were 25,000 particles per 100 ml cup. The researchers also found traces of harmful chemicals and heavy metals in the water and plastic lining, respectively. “Reusable” cups aren’t necessarily much better when it comes to leaching, as they are often made of plastic; heat and wear accelerate leaching, and acidic drinks like coffee absorb chemicals more easily. The carbon footprint of reusable plastic cups is also disputable: A reusable cup has to be used between 20 and 100 times to offset its greenhouse gas emissions compared to a disposable one, according to some estimates. Blame the high amount of energy needed to make the reusable cup durable and the hot water needed to wash it. That said, a reusable plastic cup at least has the potential to last longer and is easier to recycle. For Carney Almroth, reusable plastic cups aren’t the answer; fewer raw materials should be extracted and processed into plastics, she believes. “But we also need to look at the alternatives that are put forth as we make a shift into something more sustainable to make sure that we’re not just replacing one product with another,” she says. Carney Almroth is part of a coalition of scientists contributing evidence to the negotiations for a global plastics treaty. Those talks will continue in Kenya this November. In the meantime, the search is on for safer and more sustainable solutions. Some companies have baked edible cups made of waffles or biscuits, or have used an origami-like technique to fold paper into cups. Both Carney Almroth and Muncke see the potential for companies to use established materials to shape a circular economy. Then the coffee shops could more easily replace their low-cost plastic and paper cups. Take glass, for instance, which keeps drinks warm for longer—its low thermal conductivity slows the heat in the liquid from dispersing in the cup—and it is chemically inert, meaning no leaching (even the glaze of a ceramic cup is slightly soluble and can leach out to some degree). Stainless steel, a metal commonly used for reusable water bottles, is another contender. But coffee in steel cups cools faster than it would in ceramic and glass cups because the heat is transferred to the material and then to the palm of your hand. However, the material is more robust, making it good for on-the-go drinks. Regardless of which material proves successful, moving away from disposable cups will take innovative business models and approaches, says Muncke. By this, she means companies finding a viable way to rent out and collect reusable cups, wash them appropriately, make sure they’re not contaminated, and then put them back into circulation.
“The difficult thing is changing people’s behavior and building all the infrastructure. And that costs a lot of money.” Convenience and cheapness will make disposable cups hard to displace.
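The 20-to-100-use range quoted above follows from a simple amortization argument, sketched below. The per-cup footprints are assumed placeholders chosen only to land inside that range; real values depend on cup material, washing method and the local energy mix.

```python
disposable_g = 30.0       # assumed g CO2e per disposable cup
reusable_make_g = 1000.0  # assumed g CO2e to manufacture one reusable cup
wash_g = 15.0             # assumed g CO2e per hot-water wash

# The reusable cup wins once its manufacturing footprint is amortized:
# reusable_make_g + n * wash_g < n * disposable_g
breakeven_uses = reusable_make_g / (disposable_g - wash_g)
print(f"break-even after ~{breakeven_uses:.0f} uses")  # ~67 uses, inside the 20-100 span
```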
Environmental Science
State school board votes 8-7 to restore climate change to latest science standards Climate change was restored to Utah’s supplemental state standards for science and engineering education by one vote late Thursday night. The Utah State Board of Education voted 8-7 to accept the proposed standards as written by a committee of educators, scientists and parents. The vote, which capped a nearly 14-hour board meeting, eliminated all previous amendments to the science and engineering curriculum, one of which purged climate change language. Board member Brent Strate, who introduced the motion to accept the supplemental standards as written, said the board had approved science standards in 2019 that included climate change as part of environmental science instruction. “So to me, I feel very comfortable with what came out of the writing committee. My rule, what I always said when I was on the Standards Committee, if it’s an amendment, if it isn’t better, I’m not considering it. It could be equal and we’d have a long debate about it, but it has to be absolutely better or I’m not going to vote in favor,” Strate said. Board vice chairwoman Jennie Earl spoke against the motion. “I think we’ve done a lot of work and I think we can continue to do that,” she said. “Supplemental to me is that these are additional standards. There’s not a set of standards for any of those that we will be passing in this area. This is the first time we’ve had standards in this area.” Earl, a member of the Standards and Assessment Committee that passed the amended proposed standards to the board, made an amendment to remove language that called for students to be taught how to evaluate proposed designed solutions intended to reduce the impacts of climate change. It passed by a 3-2 vote. The board briefly considered postponing final consideration of the standards to a special meeting or resuming debate the following day but neither proposal had traction with the board. Ultimately, it voted to accept the writing committee’s report sans any amendments. On Thursday morning, several community members urged the elected board to restore climate change language to the supplemental standards. At least 100 others sent board members emails expressing their concerns. Christi Leman of Provo, the mother of two teenagers, told the board that the amendments “send a message to students that the state does not acknowledge evidence widely accepted by climatologists worldwide as well as by our own Legislature, our Department of Public Safety and our Division of Fire, Forestry and State Lands.” Leman said the changes would deprive students “of a safe place to learn and discuss their living environment as well as critically think about possible solutions to environmental problems which would be detrimental to our children as citizens of Utah and of the world.” Dr. Robert Cheatham, a pediatric intensive care physician, said he had never before addressed an elected body but he came to the board meeting “to express my extreme dismay at this new standard that’s going to replace the word climate change with cataclysmic events indicating that it’s more of a natural phenomenon that’s occurring.” Cheatham said amendments to the proposed standards represent a departure from what is widely accepted science.
“I just ask that you consider the change as something that will affect our students’ ability to handle the world as it is in reality, and it’s going to become more and more of an issue for the generations facing us ahead,” he said. Rep. Gay Lynn Bennion, D-Cottonwood Heights, said she was particularly concerned about students being able to analyze and interpret data that can help them learn about the interactions between society and the climate. “Regardless of where you live, the Great Salt Lake is of international concern. This is true for all saline lakes throughout the world and this is because of higher temperatures year after year that are drying up these lakes. It is important that our students have a science background to understand these changes,” Bennion said. The board’s Standards and Assessment Committee, which is composed of elected state school board members, has met twice in recent weeks to consider dozens of amendments to the proposed science standards and sent the amended standards to the full board for its consideration. Some changes purged mentions of climate change from the document while others sought to include alternative theories about how rocks formed, such as the impact of the Great Flood described in the Book of Genesis. The latter did not receive committee approval. During a committee meeting April 25, State School Board member Natalie Cline said she speaks for “many” people who do not agree with the “climate change agenda or narrative.” Member Sarah Reale said students are aware of the impacts of climate change, given the groundswell of public attention on the depleted Great Salt Lake, and occasions when they do not get to have recess outdoors because the air quality is poor. “I don’t want to make it political. It’s facts. It’s evidence that they’re (students) seeing in front of their eyes, and I want to make sure that that’s included in our standards,” she said. The board voted to approve several other standards on Thursday: - College and Career Awareness, and Digital Literacy standards. - Supplemental standards for creative writing, professional and technical communication, human anatomy, genetics, geology, marine biology/oceanography, meteorology, and humanities.
Environmental Science
As rising seas disrupt toxic sites, study finds communities of color are most at risk As rising seas threaten to flood hundreds of toxic sites along the California coast, the risk of flood-related contamination will fall disproportionately on the state's most marginalized communities, finds a new study published today by researchers at UC Berkeley, UCLA and Climate Central. Under California's high-risk aversion scenario, which projects that sea levels could rise by more than 6 feet by the end of the century, the study identified 736 facilities at risk of coastal flooding and an additional 173 with projected groundwater encroachment. Residents living within 1 kilometer of at-risk sites were more likely than others to be people of color, to be living below the poverty line, to be unemployed or to experience another form of social disadvantage such as linguistic isolation. As part of the study, the researchers also released a new interactive online tool in English and Spanish that allows users to map toxic sites that are at risk of coastal flooding, either by county or by individual facility. Users can also overlay indicators of nearby residents' social vulnerability, including the percentage of people who are living below the poverty line, who are experiencing unemployment, or who don't have a high school diploma. "Sea level rise is like a slow-moving storm that we can anticipate and prepare for," said Rachel Morello-Frosch, a professor of public health and of environmental science, policy and management at UC Berkeley and senior author of the paper. "As California invests in community resilience to climate change, it is essential that considerations of environmental justice are at the fore." Low-income communities and communities of color already face disproportionate exposure to myriad environmental pollutants, and the threat of additional exposures from sea-level rise will only exacerbate these inequities. Compared to their neighbors, socially vulnerable residents can also find it harder to evacuate during a flood and often experience social stressors that can make them more susceptible to the health impacts of pollutant exposures. San Mateo and Alameda counties are projected to host the most at-risk hazardous sites by 2050, but by 2100, Orange County is projected to surpass both as oil and gas wells there and in Los Angeles County face rising coastal flood risks. "Again, climate change amplifies inequality," said lead author Lara Cushing, an assistant professor of environmental health sciences at the UCLA Fielding School of Public Health. "Sea level rise will present additional risks of contaminant releases to communities already living with pollution sources in their backyards." The study was conducted as part of the Toxic Tides project, which brought together a multidisciplinary research team with community advocacy organizations to understand how rising seas would impact hazardous sites, including refineries, industrial facilities, sewage treatment plants and cleanup sites. The team released a preliminary set of data and an earlier version of the online mapping tool in November 2021; the new study, which appears in the journal Environmental Science and Technology, includes additional analysis about the environmental justice implications of the findings. More information: Lara J. Cushing et al, Toxic Tides and Environmental Injustice: Social Vulnerability to Sea Level Rise and Flooding of Hazardous Sites in Coastal California, Environmental Science & Technology (2023).
DOI: 10.1021/acs.est.2c07481 Provided by University of California - Berkeley
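A hedged sketch of the 1-kilometer proximity analysis described above might look like the following, using geopandas. The file names, the pct_poverty column and the choice of a metric projection (California Albers, EPSG:3310) are all assumptions for illustration; the Toxic Tides team's actual pipeline is not reproduced here.

```python
import geopandas as gpd

# Hypothetical inputs: facility points and census-tract polygons with indicators.
sites = gpd.read_file("at_risk_sites.geojson").to_crs(epsg=3310)    # metres
tracts = gpd.read_file("census_tracts.geojson").to_crs(epsg=3310)

# Buffer each at-risk facility by 1 km and find the tracts each buffer touches.
buffers = sites.copy()
buffers["geometry"] = sites.geometry.buffer(1000)
nearby = gpd.sjoin(tracts, buffers, how="inner", predicate="intersects")

# Compare a vulnerability indicator inside versus outside the 1 km buffers.
inside = tracts.index.isin(nearby.index)
print("poverty rate near sites:", tracts.loc[inside, "pct_poverty"].mean())
print("poverty rate elsewhere: ", tracts.loc[~inside, "pct_poverty"].mean())
```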
Environmental Science
A Chapman University scientist and his colleagues have determined how the Earth responds as it heats up due to climate change. The scientists say a warming world calls for a new approach in detecting how much carbon dioxide comes out of ecosystems when the temperature changes -- which tells us how well plants and soil can alleviate damage by removing carbon pollution from the atmosphere. The study is the first to find the temperature-carbon dioxide release relationship at the landscape level. Their findings are published in the academic peer-reviewed journal Nature Ecology & Evolution. Plants that currently take up a quarter to a third of humanity's carbon emissions might not be able to maintain the rate of carbon dioxide removal, says Joshua Fisher, a climate scientist and associate professor of environmental science and policy at Chapman University's Schmid College of Science and Technology. "A big unknown in the future of the Earth is how ecosystems will respond to increasing temperature," says Fisher. "Our findings give us insight into the fate of the planet, and how we can measure those changes at large scales." Recent developments, including those by Fisher, have led to the use of satellites to monitor global photosynthetic activity and measure the concentrations of gas in plants and ground soil; but similar tools have been unable to track respiration, or the "breathing" out of carbon dioxide, across biomes and continents. Respiration continues to be indirectly estimated as the difference between photosynthesis and the overall change in carbon dioxide, and "the spot measurements are not representative of the larger landscape," Fisher says. So, he and other scientists took to the trees -- well, monitoring stations among the trees. New carbon dioxide measurements were taken by a network of dozens of monitoring stations on towers across North America. The results gave great insight into future measurements over larger swaths of land. When they compared landscape measurements from the tower stations to the spot measurements done on the ground, they found the ground measurements show an overly sensitive relationship between carbon dioxide and temperature that does not exist when looking at the larger landscape. "Ground measurements said there's a lot of CO2 emission for small changes in temperature; but the landscape measurements said there's not a lot of CO2 emission for small changes in temperature," Fisher says. The team later used the findings to update the mathematical models that predict these relationships and found that the improved models performed better. "This is a very clever study that harnessed a myriad of measurements, models, and understanding of how they synergize together," says Fisher. "Our results continue to march us forward in deeper understanding of the Earth and what it may mean if we continue to change its climate." The study was funded by the NASA Terrestrial Ecology Interdisciplinary Science and Carbon Monitoring System, the Carnegie Institution for Science's endowment, Singapore's Ministry of Education, the RUBISCO SFA, which is sponsored by the Regional and Global Model Analysis Program in the Climate and Environmental Sciences Division of the Office of Biological and Environmental Research in the U.S. Department of Energy Office of Science, and NASA.
Other members of the research team include lead author Wu Sun and Anna Michalak of Carnegie Institution for Science; Xiangzhong Luo, Yao Zhang, and Trevor Keenan of University of California Berkeley and Lawrence Berkeley National Laboratory; Yuanyuan Fang of the Bay Area Air Quality Management District; and Yoichi P. Shiga of the Universities Space Research Association.
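The residual bookkeeping described above, in which respiration is inferred from photosynthesis and the overall change in carbon dioxide rather than measured directly, reduces to a one-line identity. A minimal sketch, assuming the common sign convention that net ecosystem exchange (NEE) is positive when CO2 flows to the atmosphere:

```python
def ecosystem_respiration(gpp, nee):
    """Estimate ecosystem respiration (R_eco) as a residual.

    Uses the flux identity NEE = R_eco - GPP, so R_eco = NEE + GPP.
    gpp: gross primary productivity (umol CO2 m-2 s-1), positive = uptake
    nee: net ecosystem exchange (umol CO2 m-2 s-1), positive = release
    """
    return nee + gpp

# Example: strong photosynthesis (GPP = 20) with modest net uptake (NEE = -5)
# implies respiration of 15 umol CO2 m-2 s-1.
print(ecosystem_respiration(gpp=20.0, nee=-5.0))  # 15.0
```

Because respiration is whatever is left over, any bias in the photosynthesis estimate leaks directly into the respiration estimate, which is one reason spot and landscape approaches can disagree.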
Environmental Science
Climate-friendly, nutrition-sensitive interventions could close the global dietary nutrient gap while reducing emissions

How do we tackle world hunger, hidden hunger and climate change? It's possible to address both world hunger and hidden hunger while reducing agricultural greenhouse gas emissions at the same time. And we can do it without having to double food production or radically change our diets to keep up with a growing global population. These are the surprising findings from a new study by researchers from Deakin University's School of Life and Environmental Sciences, published in the journal Nature Food.

How could it work?

"We can address hunger—that's insufficient caloric intake—and hidden hunger—vitamin and/or mineral deficiency—for a growing global population in 2030 and still comply with the greenhouse gas emissions budget required under the Paris Agreement," says postdoctoral researcher Özge Geyik. Dr. Geyik, Dr. Michalis Hadjikakou, Senior Lecturer in Environmental Science, and Professor Brett Bryan, Alfred Deakin Professor of Global Change, Environment, and Society, used a method called linear programming to minimize greenhouse gas emissions while meeting the nutritional requirements of the world in 2030 under several intervention scenarios related to agricultural productivity, food loss and waste, and international trade. This approach led to "optimal food baskets," as Dr. Geyik puts it, containing the climate-friendly and nutritious food sources that the world needs to produce more of.

What's in those baskets?

According to the findings, the foods that provide the required nutrients with the least emissions possible are vegetables, orange or red-fleshed roots and tubers, fruits and eggs. And while increasing the global production of cereals has helped reduce world hunger, it has caused environmental damage along the way, and Dr. Geyik believes we may no longer need to prioritize the production of more cereals. "What we need, instead, is to produce larger quantities of vegetables, roots and tubers, fruits and eggs, depending on what's missing in the current food supplies. And that means we may not need to embrace radical dietary changes that may be culturally less acceptable, such as removing animal products altogether, to meet climate targets," she says.

Radical dietary change: It's not always the answer

There's a growing interest in sustainable food systems amongst researchers, United Nations agencies, international non-government organizations (NGOs) and the public. Some research and advocacy groups have suggested wholesale dietary shifts, such as flexitarian, vegetarian, or vegan diets, as the answer. Dr. Geyik argues this may not necessarily be the case, especially when it comes to addressing hunger and hidden hunger. "Radical change in human diets—that have been shaped over millennia—may be incompatible with the urgency of addressing zero hunger and climate action in parallel," she says. "Our findings offer alternative transition pathways to sustainable food systems, ones with a focus on food as an essential need and an indispensable part of cultures. They show that we need to be multitasking along the way and address agricultural productivity and food loss and waste at the same time as developing innovative policies targeted towards sustainable nutrition. 
"Policy and research efforts need to prioritize producing the foods that can provide missing nutrients with the least greenhouse gas emissions possible to complement efforts for improving agricultural productivity and reducing food loss and waste." Dr. Geyik says, "It's win-win for both the environment and the growing world population." Key findings: - We could easily comply with the Paris Agreement if food production and trade policies prioritize foods that provide nutrients with the least emissions and then combine this with agricultural productivity improvements and reducing food loss and waste - Our plates need to look different than they do today. Global production of cereals has doubled between 1990–2020. While this helped eradicate hunger, we need to shift the focus away from micronutrient-poor cereals. What we need, instead, is to produce larger quantities of fruits and vegetables - We may not need to embrace radical dietary changes that may be culturally less acceptable, such as removing animal products altogether, to meet climate targets - We don't need to double food production to keep up with a growing global population More information: Özge Geyik et al, Climate-friendly and nutrition-sensitive interventions can close the global dietary nutrient gap while reducing GHG emissions, Nature Food (2022). DOI: 10.1038/s43016-022-00648-y Journal information: Nature Food Provided by Deakin University
Environmental Science
Repelling the Attack on Environmental, Social, Corporate Governance (ESG) Management

It is no surprise that a political movement led by a corporate "manager" who made "you're fired" a central principle of his practice of management is attacking contemporary management that seeks to navigate the complex world that organizations confront in 2023. Long before he entered politics, Donald Trump gave me a vivid example I could use in management classes to demonstrate the failure of what I call macho management. While the people he fired on his television show, The Apprentice, were not really his employees, he made termination of staff appear to be a valuable tool of management. Firing people might make for interesting TV, but it is not good management practice. Effective and competent managers treat termination as a failure of human resource and operations management: Either you hired the wrong person, or you so mismanaged a talented person that the excellent employee stopped performing. ESG management has been overblown by ideologues of the left and right. As with any management practice, there are few absolutes, and ESG principles must be crafted to deal with specific situations. Scoring companies on ESG practices is a little ridiculous, but ignoring or opposing ESG management is even worse. The practice of management has advanced dramatically over the past century. Accounting, financial control systems, management information systems, just-in-time inventory control, international commerce, operations management, team management, and a host of other innovations have enabled managers to enhance productivity while dealing with an increasingly complex business environment. ESG management is simply another tool for managers seeking to deal with our rapidly changing world. The basics of ESG are straightforward. The first principle is to focus management's attention on the organization's impact on the natural environment. Virtually all human activities have negative impacts on natural ecosystems, so the goal is not to eliminate impacts but to reduce them to a minimum. Pollution is a form of waste, and if one applies the principles of total quality management and industrial ecology to production processes, a central goal is to reduce waste in order to reduce cost. In the case of pollution—or waste that impacts an organization's neighbors—cost can also include liability incurred by damaging someone else's property. These liability costs can extend to an organization's supply chain as well. Being careless about an organization's environmental impact is an indicator of inadequate management. Just as a construction project riddled with injuries and death is an indication of a poorly run operation, any operation that creates unnecessary risk from pollution indicates poor management. Under macho management, pollution-belching smokestacks are a sign of industrial might. Under this approach, charging ahead without worrying about impact is a sign of strength: "In order to make an omelet you have to break some eggs." The concept of "breakage" is baked into financial control systems and is assumed to be a routine cost of business. Under environmental sustainability management, precision, control, and care replace the sloppy habits of the early industrial era. An agricultural giant like Land O' Lakes uses drones, satellite technology, artificial intelligence, and robotics to precisely apply water, fertilizer, pesticide, and herbicide to the plants in its fields. 
This reduces costs but also reduces pollution of nearby groundwater and streams. The E in ESG is about environmental care and concern. The second principle in ESG is that organizations need to be mindful of their impact on the local community. This is not a new concept; we see it when we compare the two banks in the classic Christmas movie "It's a Wonderful Life." The Bailey Brothers Building and Loan is of and for the community, while Mr. Potter's bank is only in it for the money. Here in New York, a tone-deaf Amazon.com was unable to site its HQ2 in Long Island City when community leaders rebelled against a multi-billion-dollar subsidy for one of the world's richest companies. The third principle in ESG is about corporate diversity in operations and governance. Writing on this issue this past March, I observed that: "An organization that privileges one race, gender, religion, sexual orientation, or national origin over another reduces the pool of talent it can draw on to staff and manage the operation. We are in a brain-based economy. The high value-added parts of the economy and the greatest profits are in the organizations or parts of organizations that are creative, analytic, and innovative. There's more money in software than hardware. As products become commodities, they are subject to competitive forces that tend to limit profits. That is why IBM stopped making personal computers. A diverse board and diverse workers will provide the benefit of more brainpower and different life experiences to address organizational challenges. A less diverse organization tends to stimulate insularity and groupthink. Being awake and aware of the value of diversity is an indicator of management excellence. In a global competition for innovation, customers, and profits, a diverse team that is built on the best talent is likely to beat the team that is more homogeneous but less talented." The argument against ESG management is that these factors have nothing to do with generating revenues or reducing expenditures, that they distract companies from increasing profits, and that they therefore break the contract between shareholders and management. To some conservative politicos, they are extraneous and left-wing ideological principles. I do not deny the ideological element of ESG advocacy. It annoys me, but it's definitely real. Of course, the ideological opposition to ESG by conservatives is more than annoying. It is destructive. My argument is that regardless of the politics, ESG management is about effective management in the 21st century brain-based economy on a planet with over eight billion people. As Paul Simon once wrote: "One man's ceiling is another man's floor." New Yorkers like Paul and me live in apartments and understand crowding. And this planet has gotten crowded. The need for precision and care in management is growing because the impact of mistakes is growing. There was a time when you could dump garbage in the ocean knowing it would decompose and biodegrade. After the invention of plastics and chemicals that were durable and long-lasting for commercial use, waste no longer degraded in the environment and its disposal and treatment became more complex and costly. We benefit enormously from new chemical technologies, but their use often creates environmental issues that must be addressed. If we are going to continue to advance our economy through the development of new technologies, we must learn how to manage those technologies so they do not cause harm to people and the planet. 
In many cases, the argument seems to be less against ESG management than about using ESG factors to guide investment. I think that investors who require ESG management before they will invest are taking a shortcut that is bound to disappoint them. You can have excellent management, including factoring in ESG concerns, but if you're facing bad market conditions or pushing a terrible line of products, all the ESG practices in the world won't save you. Investments should never rely on single indicators, and ESG itself is only one element of management. I believe it is necessary, but far from sufficient. The measurement of organizations' environmental, social impact, and corporate governance practices is still in its infancy. We are at about the same place that financial accounting was in the mid-1930s. On the environmental side of the equation, we have not yet developed generally accepted environmental sustainability metrics. The same is true of measures capturing the use of corporate governance, diversity, and community impact principles. Assigning companies ESG scores and then using those scores to guide investment decisions makes no sense. But delegitimizing ESG factors is at least as bad as misusing and misunderstanding their measurement. We need to get better at understanding these issues and managing organizations in ways that reduce environmental damage, enhance host community impact, and increase organizational brainpower. Most senior managers with business and law backgrounds do not understand these issues. The graduate programs I direct at Columbia University in Sustainability Management (established in 2010) and Environmental Science and Policy (established in 2002) have now educated over three thousand sustainability professionals who do understand ESG issues. Programs at Arizona State, Yale, Bard, the New School, NYU, Harvard, American University, UC Santa Barbara's Bren School of Management, Duke, LSE, and the University of Toronto (among others) have educated thousands more. These new sustainability professionals have the training needed to turn ESG from aspirational goals to organizational deeds. We are at the start of a new era of management. But we have a lot to learn. Progressives place too much faith in our ability to manage sustainability, and conservatives fail to grasp the importance of these issues to the corporate bottom line. It is sad or perhaps comical when state legislators who know little about management and even less about science try to legislate against what they have decided is "woke" management or "woke" investment. But it is also dangerous to overestimate our ability to manage according to sustainability principles. We are learning, and we are getting better. But we have a long way to go, and a little humility is definitely called for. I am optimistic about our progress but caution against overconfidence. The attack on ESG must be repelled, and the best defense is results and improved organizational performance.
Environmental Science
Moss-covered forest ditches could provide another tool to combat climate change

According to a study by the Natural Resources Institute Finland (Luke), the University of Tampere and the University of Helsinki, ditches in forestry-drained peatlands release less methane into the atmosphere than has previously been estimated. The study showed that methane emissions are particularly low in moss-covered ditches. The proportion of such ditches among all forest ditches is increasing, as ditch network maintenance is expected to decrease when current forestry subsidies end. Some 5.9 million hectares of Finnish peatlands have been drained for forestry, accounting for roughly 17% of Finland's area. While drainage has caused carbon dioxide and nitrous oxide emissions, it has also significantly reduced methane emissions from the peat soil. In place of the soil, however, ditches have become significant sources of methane emissions, which are accounted for in the national greenhouse gas inventory as part of the land use sector.

From Tier 1 emission factors to more advanced emission estimates

The current estimate of methane emissions from ditches in Finland's forestry-drained peatlands presented in the greenhouse gas inventory is based on the Tier 1 emission factor of the Intergovernmental Panel on Climate Change (IPCC). However, the studies used as the basis for Tier 1 emission factors represent Finnish conditions poorly, as only two of the 11 study areas are located in Finland. This is why the dataset had to be expanded significantly. "We conducted chamber measurements for methane emissions and collected previous measurement data regarding different types of ditches from a total of 21 study areas in Finland," says Antti Rissanen, Academy Research Fellow at the University of Tampere. Furthermore, the Tier 1 emission factor does not account for the possibility that different types of ditches have different levels of methane emissions. "Based on previous studies, we could make the assumption that methane emissions from ditches depend on the type of ditch and especially on the type of vegetation in the ditch," says Rissanen.

Low emissions from moss-covered ditches

The study showed that moss-covered ditches generate very low methane emissions, only one eighth of those from moss-free, water-covered ditches and of the Tier 1 value. The Tier 1 emission factor therefore significantly overestimates methane emissions from moss-covered ditches. These results can probably be explained by microbial activity. "Methanotrophs, methane-consuming bacteria, live in and on mosses and consume methane before it is released into the atmosphere. It is also possible that the organic compounds excreted by mosses restrain the activity of methanogenic microbes that generate methane," says Rissanen.

Methane emissions from Finland's forest ditches are lower than previously estimated

The study also estimated the area of ditches in forestry-drained peatlands in Finland, as well as the percentages of moss-covered and moss-free ditches. "We estimated that two thirds of all ditches are moss-covered and only one third are moss-free. The high percentage of moss-covered ditches can probably be explained by the significant decrease in ditch network maintenance in recent years," says Leena Stenberg, Research Scientist at Luke. 
As a result of the low emissions and high percentage of moss-covered ditches, the study's estimate of methane emissions from ditches of forestry-drained peatlands is roughly 8,600 tons of methane per year, as much as 63% lower than in the current greenhouse gas inventory (approximately 23,200 tons). Converted into the commonly used unit, carbon dioxide equivalents (CO2eq), the new emission value is roughly 0.4 million tons of CO2eq lower than the previous one. The research group proposes that national emission factors based on these results be used in the greenhouse gas inventory, as they represent emissions from Finland's forest ditches better than the Tier 1 emission factors. The work is published in the journal Frontiers in Environmental Science. More information: Antti J. Rissanen et al, Vegetation impacts ditch methane emissions from boreal forestry-drained peatlands—Moss-free ditches have an order-of-magnitude higher emissions than moss-covered ditches, Frontiers in Environmental Science (2023). DOI: 10.3389/fenvs.2023.1121969 Provided by Natural Resources Institute Finland (Luke)
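The arithmetic behind those two figures is easy to verify. A back-of-envelope check, assuming a 100-year global warming potential of about 28 for methane (the exact factor depends on which IPCC assessment an inventory uses):

```python
# Sanity check of the quoted figures; the GWP value is an assumption (AR5, 100-year).
GWP_CH4 = 28

new_estimate_t = 8_600    # t CH4 per year, this study
inventory_t = 23_200      # t CH4 per year, current greenhouse gas inventory

reduction_pct = (1 - new_estimate_t / inventory_t) * 100
co2eq_drop_mt = (inventory_t - new_estimate_t) * GWP_CH4 / 1e6  # Mt CO2eq

print(f"{reduction_pct:.0f}% lower")           # 63% lower
print(f"~{co2eq_drop_mt:.1f} Mt CO2eq lower")  # ~0.4 Mt CO2eq
```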
Environmental Science
When It Rains, It Pours. Why?

Michela Biasutti, a researcher at the Columbia Climate School's Lamont-Doherty Earth Observatory, focuses on the variability of rain across many time scales around the world, the workings of monsoons, and the influence of climate change. She recently joined the Environmental Science and Policy program at Columbia's School of International and Public Affairs as a climatology professor. Biasutti introduces students to the workings of the atmosphere and the oceans, their complex interactions, and how they collectively and individually affect climate. Biasutti recently discussed her professional experience and research interests, and gave some career advice. The interview has been edited for length and clarity.

Could you tell me a little bit about your research?

I'm interested especially in how frequent or intense rainfall is, what determines the timing of the monsoons, when rain arrives in a certain place and when it retreats. Of course, I am also interested in climate change. What's going to happen in a warmer world? Where do we see more robust changes in frequency and in intensity, and why? We're finding some interesting results in terms of distinguishing between what happens in the core of very rainy regions as opposed to more marginal places where the rainy season might be shorter and more variable. This might have some implication for how you deal with the expected changes in rainfall, if we can trust the models that make these projections. I have also worked on understanding the source of drought in the '70s and '80s in the Sahel region of Africa. I specifically looked into understanding how much of it was due to the fact that the industrialized countries were emitting a lot of sulfate aerosols. When you burn fossil fuels, you don't only get CO2 out of it, you also get particulate matter. What was the effect of this pollution on the drought in Africa? If I'm lucky and some of our submitted proposals are funded, then I want to also learn more about the climate of the much deeper past.

What sparked your curiosity about weather and precipitation? How did you first become interested in climate?

I started out studying physics, but when I got to the end of my degree studying elementary particles, I realized that to make progress on that research front, especially as a young investigator, I would have to be part of these huge collaborative groups where each person works on something very small. I was already interested in the environment and in doing something that would have some practical application. I happened to meet the right people who steered me towards climate work, where there was still room for large contributions by small groups. Once you start, it's a very fascinating subject. At that time, in the mid-90s, we kind of thought the Kyoto Protocol was on its way, and it felt like we were just about ready to put a check mark on climate change—that it was going to get solved. So the question was more: "Now that we can predict El Niño, we can really try to have predictability at annual time scales, or maybe decadal time scales. What do we know about natural variability?" Instead, climate change became this crisis that is kind of sucking the air out of the room of most other research. It is so important to be part of the solution. But it wasn't the original motivation.

Do you have any thoughts about how to communicate the science behind climate change to skeptics?

In my own personal experience, information alone is not going to convince everyone. 
I think what's important at this point is to say, "We can build a better future, all of us, by doing the right things." We can build jobs that are competitive on a global scale, we can build the wind turbines and solar panels. We can reinvest in our infrastructure. I was invited recently to a conversation at the Italian consulate with the Italian minister for environment and energy security. That is a right-wing government that is dragging its feet. But even in that conversation, I got a sense that the stated reasons to be slow on action were, "We don't want to hurt the economy," rather than "The science is not solid." I got the sense that the discussion has shifted.

Tell me about your teaching.

Lamont professors don't typically teach, since most of our time is spent on research, but in the last few years I've started teaching in the M.A. in Climate and Society program. Before that I taught for one year in the M.S. in Sustainability Science program. In a way, I'm going through all the master's programs associated with the Climate School that address the intersection between inquiries such as "What is the natural system? How do we adapt to it? How do we limit the damage we do?" They're very similar programs at their cores.

Do you have any personal or professional advice for current, past, or future students?

This may sound very obvious, but build a network. It is what will bring you the necessary knowledge for a problem that is all about complex systems where no one person can know everything you need to know. We need to be in this for the long run, so find ways to not lose faith. I often go back to read works about the American civil rights movement and how they kept going. You have to do it not because you expect to win that battle that year. This is how I want to be in the world, and how I want to lead, as much as I can. I'm going to try to find joy in the practice. For me, it can be just the wonder of the natural world. As much as I'm worried about the climate, I'm also still very much fascinated by it, and there's a sense of discovery that will still bring you joy. The learning and the connections are the most important things for me.

Vanessa Lincoln is an associate with the Environmental Science and Policy program, and an alumna.
Environmental Science
Arsenic contaminates private drinking water wells across the western Great Basin

In the arid and drought-stricken western Great Basin, sparse surface water means rural communities often rely on private groundwater wells. Unlike water from municipal systems, water quality in private wells is unregulated, and a new study shows that more than 49,000 well users across the region may be at risk of exposure to unhealthy levels of arsenic in drinking water. Led by researchers at DRI and the University of Hawai'i Cancer Center and published February 16 in Environmental Science & Technology, the study used data from groundwater wells across the western Great Basin to build a model to predict the probability of elevated arsenic in groundwater, and the location and number of private well users at risk. According to the study, the Carson Desert basin (including the town of Fallon, Nevada), Carson Valley (Minden and Gardnerville, Nevada), and the Truckee Meadows (Reno) have the highest population of well users at risk. The new study builds on previous research showing that 22% of 174 domestic wells sampled in Northern Nevada had arsenic levels exceeding the EPA guideline. "What we are finding is that in our region, we have a high probability for elevated arsenic compared to most other regions in the country," said Daniel Saftner, M.S., a hydrogeologist at DRI and lead author of the study. "And we are seeing that geothermal and tectonic processes that are characteristic of the Great Basin contribute to the high concentrations of naturally occurring arsenic in the region's groundwater." The region's mountains are also primary sources of arsenic. "As the arsenic-rich volcanic and meta-sedimentary rocks that form the mountains erode, sediment is transported to the valleys below," says Steve Bacon, Ph.D., DRI geologist and study co-author. Water percolating through the valley floor then carries arsenic into the groundwater. Deeper, older groundwater and geothermal waters tend to have a higher arsenic concentration and can migrate upward along faults and mix with shallow groundwater. "We really wanted to better understand the unique geologic factors that contribute to high arsenic in this study," Saftner says. "It's important for us to think about the role of the environment as it pertains to human health—where we live can influence what our long-term health looks like." To train and test the predictive model, the research team used data collected through the Healthy Nevada Project, including water samples from 163 domestic wells primarily located near Reno, Carson City, and Fallon. These data were supplemented with 749 groundwater samples compiled from the USGS National Water Information System. The model uses tectonic, geothermal, geologic, and hydrologic variables to predict the probability of elevated arsenic levels across the region. Although the U.S. EPA has set an arsenic concentration guideline of 10 µg/L for public drinking water, previous research has shown a range of health effects from long-term exposure to levels above 5 µg/L. Using this concentration as the benchmark, the model and map show that much of the region's groundwater—particularly in western and central Nevada—is predicted to have more than a 50% probability of elevated arsenic levels. "Community members can use our arsenic hazard map to see what the risk is at their location, which might motivate them to test their well water," says Monica Arienzo, Ph.D., associate research professor at DRI and study co-author. 
"Then, if they have high levels of arsenic or other contaminants, they can take steps to reduce their exposure, such as installing a water treatment system." The findings from this study are potentially useful for a range of different applications. "The results can be useful for water utilities or water managers who tap similar shallow aquifers for their water supply," says Saftner, "as well as irrigation wells that source water from these aquifers." The research team plans to use their model to take a closer look at the health impacts of prolonged arsenic exposure. "Through the Healthy Nevada Project, genetic data and health records are paired with environmental data to help determine whether there are associations between the levels of arsenic in a community's groundwater and specific health outcomes," stated Joe Grzymski, Ph.D., research professor at DRI and principal investigator of the project. More information: Daniel M. Saftner et al, Predictions of Arsenic in Domestic Well Water Sourced from Alluvial Aquifers of the Western Great Basin, USA, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c07948
Environmental Science
JUNEAU, Alaska -- Thousands of tourists spill onto a boardwalk in Alaska’s capital city every day from cruise ships towering over downtown. Vendors hawk shoreside trips and rows of buses stand ready to whisk visitors away, with many headed for the area’s crown jewel: the Mendenhall Glacier. A craggy expanse of gray, white and blue, the glacier gets swarmed by sightseeing helicopters and attracts visitors by kayak, canoe and foot. So many come to see the glacier and Juneau’s other wonders that the city’s immediate concern is how to manage them all as a record number are expected this year. Some residents flee to quieter places during the summer, and a deal between the city and cruise industry will limit how many ships arrive next year. But climate change is melting the Mendenhall Glacier. It is receding so quickly that by 2050, it might no longer be visible from the visitor center it once loomed outside. That’s prompted another question Juneau is only now starting to contemplate: What happens then? “We need to be thinking about our glaciers and the ability to view glaciers as they recede,” said Alexandra Pierce, the city’s tourism manager. There also needs to be a focus on reducing environmental impacts, she said. “People come to Alaska to see what they consider to be a pristine environment and it’s our responsibility to preserve that for residents and visitors.” The glacier pours from rocky terrain between mountains into a lake dotted by stray icebergs. Its face retreated eight football fields between 2007 and 2021, according to estimates from University of Alaska Southeast researchers. Trail markers memorialize the glacier's backward march, showing where the ice once stood. Thickets of vegetation have grown in its wake. While massive chunks have broken off, most ice loss has come from the thinning due to warming temperatures, said Eran Hood, a University of Alaska Southeast professor of environmental science. The Mendenhall has now largely receded from the lake that bears its name. Scientists are trying to understand what the changes might mean for the ecosystem, including salmon habitat. There are uncertainties for tourism, too. Most people enjoy the glacier from trails across Mendenhall Lake near the visitor center. Caves of dizzying blues that drew crowds several years ago have collapsed and pools of water now stand where one could once step from the rocks onto the ice. Manoj Pillai, a cruise ship worker from India, took pictures from a popular overlook on a recent day off. “If the glacier is so beautiful now, how would it be, like, 10 or 20 years before? I just imagine that,” he said. Officials with the Tongass National Forest, under which the Mendenhall Glacier Recreation Area falls, are bracing for more visitors over the next 30 years even as they contemplate a future when the glacier slips from casual view. The agency is proposing new trails and parking areas, an additional visitor center and public use cabins at a lakeside campground. Researchers do not expect the glacier to disappear completely for at least a century. “We did talk about, ‘Is it worth the investment in the facilities if the glacier does go out of sight?’" said Tristan Fluharty, the forest’s Juneau district ranger. “Would we still get the same amount of visitation?” A thundering waterfall that is a popular place for selfies, salmon runs, black bears and trails could continue attracting tourists when the glacier is not visible from the visitor center, but “the glacier is the big draw,” he said. 
Around 700,000 people are expected to visit this year, with about 1 million projected by 2050. Other sites offer a cautionary tale. Annual visitation to the Begich, Boggs Visitor Center, southeast of Anchorage, peaked in the 1990s at around 400,000, with the Portage Glacier serving as a draw. But now, on clear days, only a sliver of the glacier remains visible from the center, which was visited by about 30,000 people last year, said Brandon Raile, a spokesperson with the Chugach National Forest, which manages the site. Officials are discussing the center's future, he said. "Where do we go with the Begich, Boggs Visitor Center?" Raile said. "How do we keep it relevant as we go forward when the original reason for it being put there is not really relevant anymore?" At the Mendenhall, rangers talk to visitors about climate change. They aim to "inspire wonder and awe but also to inspire hope and action," said Laura Buchheit, the forest's Juneau deputy district ranger. After pandemic-stunted seasons, about 1.6 million cruise passengers are expected in Juneau this year, during a season stretching from April through October. The city, nestled in a rainforest, is one stop on what are generally week-long cruises to Alaska beginning in Seattle or Vancouver, British Columbia. Tourists can leave the docks and move up the side of a mountain in minutes via a popular tram, see bald eagles perch on light posts and enjoy a vibrant Alaska Native arts community. On the busiest days, about 20,000 people, equal to two-thirds of the city's population, pour from the boats. City leaders and major cruise lines agreed to a daily five-ship limit for next year. But critics worry that won't ease congestion if the vessels keep getting bigger. Some residents would like one day a week without ships. As many as seven ships a day have arrived this year. Juneau Tours and Whale Watch is one of about two dozen companies with permits for services like transportation or guiding at the glacier. Serene Hutchinson, the company's general manager, said demand has been so high that she neared her allotment halfway through the season. Shuttle service to the glacier had to be suspended, but her business still offers limited tours that include the glacier, she said. Other bus operators are reaching their limits, and tourism officials are encouraging visitors to see other sites or get to the glacier by different means. Limits on visitation can benefit tour companies by improving the experience rather than having tourists "shoehorned" at the glacier, said Hutchinson, who doesn't worry about Juneau losing its luster as the glacier recedes. "Alaska does the work for us, right?" she said. "All we have to do is just kind of get out of the way and let people look around and smell and breathe." Pierce, Juneau's tourism manager, said discussions are just beginning around what a sustainable southeast Alaska tourism industry should look like. In Sitka, home to a slumbering volcano, the number of cruise passengers on a day earlier this summer exceeded the town's population of 8,400, overwhelming businesses, dragging down internet speeds and prompting officials to question how much tourism is too much. Juneau plans to conduct a survey that could guide future growth, such as building trails for tourism companies. 
Kerry Kirkpatrick, a Juneau resident of nearly 30 years, recalls when the Mendenhall’s face was “long across the water and high above our heads.” She called the glacier a national treasure for its accessibility and noted an irony in carbon-emitting helicopters and cruise ships chasing a melting glacier. She worries the current level of tourism isn't sustainable. As the Mendenhall recedes, plants and animals will need time to adjust, she said. So will humans. “There’s too many people on the planet wanting to do the same things,” Kirkpatrick said. "You don’t want to be the person who closes the door and says, you know, ‘I’m the last one in and you can’t come in.’ But we do have to have the ability to say, ‘No, no more.’”
Environmental Science
Undermining of institutions and lack of local policies hinder fire management in Amazonia

An article published in the International Journal of Disaster Risk Reduction discusses Amazon Rainforest wildfire governance with local community participation in the so-called tri-national border region between Madre de Dios in Peru, Acre state in North Brazil, and Pando, one of Bolivia's nine departments (subnational administrative divisions). The region is sometimes referred to as MAP. "Living in Acre, we have little or no influence on environmental management policy. We see scientists from other regions or other countries talk about the Amazon and highlight the importance of policy issues, but we who live here are left out of this governance. The fact that our study presents a local perspective on the region's vulnerabilities and capabilities is highly positive," said sociologist Gleiciane Pismel, first author of the article and a researcher at Brazil's National Disaster Surveillance and Early Warning Center (CEMADEN). To garner the perceptions of local and regional stakeholders, the researchers conducted an online survey during the COVID-19 pandemic (in 2020-21) involving 111 practitioners, policymakers, representatives of NGOs and scientists. Some 60% of the participants considered deforestation the main factor contributing to wildfires in the Amazon, followed by the use of burning in agricultural management (58%) and drought (39%). These are accurate perceptions in light of other recent studies showing an increase in forest fires associated with the advance of deforestation that endangers conservation of the Amazon Rainforest and its socio-biodiversity. For the respondents to the online survey, the main governance vulnerabilities in containing fires and their impact on the region are deficiencies in institutions and control bodies associated with staff cuts and underinvestment. Besides the weakening of institutions, instability in national and local public policies figured prominently among perceived governance failures. To a large extent, the policies and measures implemented reflected national actions that did not take local peculiarities into account. Other issues included lack of participation by local communities and socio-cultural aspects of the use of fire, especially in areas of pasture and cropland in the vicinity of environmental protection areas. "The significant risks to ecosystem services, apart from deforestation, include degradation by fire, clearcutting and fire edge effects, as we showed in our article in Science. On the other hand, few studies have analyzed governance issues relating to fire and burning. These are urgent emerging issues for the Amazon region. We assembled a multidisciplinary team of researchers in all three countries to analyze them," Liana Anderson, last author of the article and a researcher at CEMADEN, told us. The article to which Anderson referred is "The drivers and impacts of Amazon forest degradation", a cover feature of the January 2023 issue of Science showing that about 38% of the Amazon Rainforest is currently degraded owing to fire, illegal logging, edge effects, and increasingly frequent extreme drought due to climate change. 
MAP-FIRE Project

The current study is part of the Multi-Actor Adaptation Plan to cope with Forests under Increasing Risk of Extensive fires (MAP-FIRE), launched in March 2019 to focus on fire governance in the MAP region as a project of the Tropical Ecosystems and Environmental Science Lab (TREES) at the National Space Research Institute (INPE). "The project engages with actors in the frontier territory, such as NGOs, public administrators, and interdisciplinary and transdisciplinary scientists in Brazil, Peru and Bolivia, to conduct discussions of risk management. Disasters in the region don't respect political or administrative boundaries, so interdisciplinary and transdisciplinary methods are needed to create and strengthen risk mitigation actions," said Marchezini. In the study, the group analyzed perceptions of wildfire governance vulnerabilities and capabilities in the MAP region in terms of knowledge of the risks, monitoring, education and communication, and disaster prevention and responses. Vulnerabilities and capabilities were classified as economic, educational, environmental, organizational, political, legal, socio-cultural and technological. The project design was based on workshop discussions held during the pandemic with 668 participants from the three countries involved. "Three different perspectives are needed to analyze the socio-environmental issues. These are provided by the MAP-FIRE project and the article. The team is multidisciplinary, and we have researchers from all three countries, including Galia Selaya from Bolivia and Eddy Mendoza from Peru," Pismel said. Wildfires in the Amazon are a growing threat to forest ecosystem services and biodiversity. They can also become cross-border disasters, owing mainly to the adverse effects of smoke on human health, transportation and the regional economy. Soot microparticles, which are easily inhaled, contributed to a rise in hospital admissions due to respiratory problems in five states in the Brazilian Amazon between 2010 and 2020, for example. According to a March 2021 press release issued by the Oswaldo Cruz Foundation (FIOCRUZ), an arm of Brazil's Health Ministry, 174 people were admitted to hospital per day for treatment of respiratory difficulties in Pará state and 57 per day in Mato Grosso state in the period. Wildfires and burnings in the Amazon are costly to the SUS (Sistema Único de Saúde, Brazil's national health service). Public hospitals and clinics in the region spent BRL 1 billion on the treatment of patients with respiratory disorders due to exposure to smoke from forest fires in the ten-year period.

Alternatives

According to the authors, the respondents to the survey supported measures to improve wildfire governance such as organizational capacity building, community involvement, investment in socio-environmental management as permitted by the prevailing conditions in each country and the MAP region, and better staffing of organizations in both quantitative and qualitative terms. Municipal civil defense units in Brazil are staffed by one or two people, mostly political appointments without professional recognition. Funding for disaster prevention and risk management does not feature in the budgets of most municipalities. The authors also stress the importance of drafting public policies and laws that take local conditions into account and of distributing responsibilities and resources at the national, regional, and municipal levels. 
"Inhabitants of the region who are exposed to fire, or use it, aren't involved in any governance systems that can help them make decisions or become more well-informed. It's important to improve integration between institutions, as well as communication between and inside them. Environmental education is necessary, as is inclusion of topics relating to fire in school syllabuses, connecting very clearly with the reality of the people who live there," Anderson said. To fill the gap in educational material, the researchers have produced a teacher training textbook, which is available online. "Producing this educational material was the means to introduce the topic into the classroom, especially given the ongoing superabundance of fake news. In Brazil, we've witnessed losses of infrastructure and institutional capacity, alongside the undermining of knowledge by disinformation. When disinformation becomes widespread in a society, it's very hard not only to recoup trust and confidence in the institutions that produce knowledge and science but also to combat the false information so as to make room for the facts. Disinformation usually travels faster than the truth," Anderson said. More information: Gleiciane O. Pismel et al, Wildfire governance in a tri-national frontier of southwestern Amazonia: Capacities and vulnerabilities, International Journal of Disaster Risk Reduction (2023). DOI: 10.1016/j.ijdrr.2023.103529 Journal information: Science Provided by FAPESP
Environmental Science
Q&A: Environmentally Sustainable Growth This story was originally published in SIPA News. In his new book, Environmentally Sustainable Growth: A Pragmatic Approach, Professor Steven Cohen offers a positive vision of an environmentally sustainable future and outlines realistic paths toward a renewable resource-based economy. One of the longest serving faculty members at Columbia’s School of International and Public Affairs, Cohen is director of the MPA Program in Environmental Science and Policy as well as the MS in Sustainability Management program. In the Q&A below, Cohen discusses the impact of globalization on policy regulation, how the government can influence consumers’ and producers’ behavior to achieve a more environmentally beneficial economy, and more. What inspired you to write this book now? I wanted to make it clear that the transition to an environmentally sustainable economy was an important goal. We are not going to solve the climate problem, but we are going to make it less bad and hopefully we will adapt to the warmer world we unfortunately will be living in. How do we solve political polarization around the climate crisis? When trying to solve polarization we have to think about what we have in common. The environment makes it easy because we are all biological creatures. Nobody wants to be poisoned. We need to look for those areas of common ground and build on them. How do we get the Global North to care about the climate impact its policies are having in the developing world? The developing world needs to protect itself from these impacts, but they’re not going to protect themselves by hoping that somehow the developed world is going to pay for it. They need to have other solutions. Part of what has happened historically when we think about foreign aid is that it is dwarfed by remittances of people who emigrate from one country to another, generate surplus wealth, and send some of it home. That’s one way that capital formation takes place, and that has been taking place in the developing world. How do you see globalization playing a role in environmental sustainability? Overall, humanity has benefited from technologies because they affect economic life, which affects culture. Our economy is based more and more on services and creativity and less on manual labor and manufacturing. So if you think about environmental sustainability, part of the answer to this is to not assume globalization is going to go away, but to learn how to assure sustainability throughout supply chains. How can policy regulation impact businesses? One of the things that regulation does is that it provides incentives for innovation. We are now requiring large buildings in New York to be more energy efficient and to eventually get off fossil fuels. That is spurring the development of alternative energy technologies. And regulation doesn’t just impact companies — its costs and benefits are felt throughout society. The [U.S. Office of Management and Budget] has been looking at the cost and benefits of air pollution regulations for 50 years; for every dollar you put into air pollution control, you get a $15 benefit. So the creation of a cleaner environment provides benefits that are worth the costs. How do we influence the behavior of corporations and individuals to be more environmentally beneficial? The government has a role to play. The role is not to assume that you are going to turn companies into good actors. You have to require them to be good actors. 
Companies report finances honestly because they have no choice. They're honest because in order to raise capital in the public marketplace they must adhere to the financial reporting rules required by the Securities and Exchange Commission. This reporting requirement will soon include indicators of environmental risk. Both the SEC and the European Union are in the process of issuing carbon and environmental risk disclosure requirements. Now companies will need to share honest and accurate financial and environmental performance reports.

What is the government doing to get companies to be more honest in this sector?

The government requires quarterly financial reports by publicly traded companies who want to maintain access to the stock market to raise capital. In the next month or so the SEC will begin requiring a carbon and environmental risk disclosure report on a quarterly basis. This specific government reporting requirement is going to dictate what companies report and how to measure climate risk. Currently many companies issue voluntary ESG reports and the sustainability metrics used are far from uniform. The SEC rule will move us closer to uniform sustainability metrics.

What is the main takeaway you want people to get out of your book?

We are on the path to address the climate crisis and we are moving in the direction of an environmentally sustainable economy. The cost of economic development that doesn't pay attention to the planet's well-being is becoming higher, so we will begin to transition to a circular economy. I believe that eventually, instead of mining minerals from the ground, we are going to mine from our waste stream because it will be cheaper to do that than to mine from the ground. Government, companies, and communities are paying much more attention to environmental sustainability, diversity, equity and inclusion and social justice. Despite our current political problems, these large-scale cultural and social changes give us a reason to believe we can successfully transition to an environmentally sustainable economy.

This interview, by Giulia Campos MIA '24, has been condensed and edited for clarity.
Environmental Science
The World Meteorological Organization (WMO) recently presented its second report on the status of global water resources. According to this report, large parts of the world experienced drier conditions in 2022 than those recorded on average for the equivalent periods over the last 30 years. "Nearly 40 percent of the territories examined were suffering from drier than normal conditions," said Professor Robert Reinecke of Johannes Gutenberg University Mainz (JGU). "This means that the flow rate of many rivers worldwide was significantly below what would normally be expected. Added to this, the levels of moisture in the soil frequently showed the effects of the heatwaves we have experienced, while greater demand for water has resulted in the groundwater table becoming lower than in the reference period." Reinecke, who joined the JGU Institute of Geography in May 2023, has made a major contribution to the new WMO report -- in collaboration with Dr. Hannes Müller Schmied of Goethe University Frankfurt and the Senckenberg Leibniz Biodiversity and Climate Research Center Frankfurt (SBiK-F) as well as the Global Runoff Database Center (GRDC) in Koblenz. Together they supplied simulation data based on hydrological modeling, participated in the development of the corresponding methodology, and provided scientific validation of the report's key statements. With the WMO acting as coordinating body, the report results from the expertise provided by 11 international modeling groups. The State of Global Water Resources 2022 report was published on October 12, 2023.

Scientifically validated findings on the global water situation

The first State of Global Water Resources Report, for 2021, was presented in late November 2022 at the WMO headquarters in Geneva. The report is to appear annually and provide an overview of the status of the Earth's water resources. The effects of climatic fluctuations and changes can often also be seen in what happens to our water: Heatwaves coupled with droughts can make wildfires more likely, and these can then spread more rapidly due to the lack of soil moisture, to give only one example. "The WMO report is thus also designed to provide politicians and the industry with knowledge so as to identify regions that are at risk of experiencing water emergencies or are already in crisis," added Reinecke. Among the data shown in the 2022 report is information on the discharge rate of rivers, the levels of groundwater, soil moisture, and evaporation. However, assembling the underlying data is itself problematic, as there are currently insufficient global statistics available. "Thus, we need to undertake simulation modeling," explained Reinecke, a specialist in modeling techniques. There is a particular lack of data on the situation regarding groundwater. Even Germany cannot provide complete figures on the related circumstances. However, there is no doubt that the dry conditions in 2022 had considerable impact in Germany, too. Just as in the case of the River Po in Italy, the water levels of the Rhine fell dramatically over longer periods, with the associated consequences for river traffic. France suffered from insufficient precipitation, resulting in difficulties when it came to providing the cooling required by nuclear power plants. South America experienced severe drought conditions while, despite increasing precipitation, groundwater levels in the important Murray-Darling Basin in Australia continued to drop below normal. 
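At its simplest, the hydrological simulation Reinecke refers to is water-balance bookkeeping: precipitation fills a soil store, while evapotranspiration and runoff drain it. A toy single-bucket sketch (all numbers invented) shows the idea; global models chain huge grids of such cells with far richer physics and routing between them.

```python
# Toy single-bucket water balance; capacities and forcings are invented.
def bucket_step(storage, precip, pet, capacity=150.0):
    """One daily step, all fluxes in mm: add rain, evaporate at the potential
    rate while water lasts, and spill anything above capacity as runoff."""
    storage += precip
    et = min(pet, storage)               # actual evapotranspiration
    storage -= et
    runoff = max(0.0, storage - capacity)
    storage -= runoff
    return storage, et, runoff

storage = 100.0
for day, (p, pet) in enumerate([(0, 4), (12, 3), (0, 5), (60, 2)], start=1):
    storage, et, runoff = bucket_step(storage, p, pet)
    print(f"day {day}: storage={storage:.0f} mm, ET={et:.0f} mm, runoff={runoff:.0f} mm")
```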
New Earth System Modeling group at Mainz University

Robert Reinecke was appointed to a junior professorship at JGU's Institute of Geography in May 2023. Here he will establish an Earth System Modeling group. After studying computer science at TU Darmstadt, being awarded a doctorate by Goethe University Frankfurt, and undertaking research in California, he was employed at the International Center for Water Resources and Global Change (ICWRGC). He subsequently undertook research as a postdoc at the Institute of Environmental Science and Geography of the University of Potsdam. His research focuses on global groundwater in the context of global hydrological modeling. He also investigates surface water/groundwater interaction, the human impact on groundwater resources, and the impact of climate change on the hydrological cycle.
Environmental Science
The future of disease tracking is going down the drain — literally. Flushed with success over detecting coronavirus in wastewater, and even specific variants of SARS-CoV-2, the virus that causes COVID-19, researchers are now eyeing our collective poop to monitor a wide variety of health threats. Before the pandemic, wastewater surveillance was a smaller field, primarily focused on testing for drugs or mapping microbial ecosystems. But these researchers were tracking specific health threats in specific places — opioids in parts of Arizona, polio in Israel — and hadn’t quite realized the potential for national or global public health. Then COVID-19 hit. The pandemic triggered an “incredible acceleration” of wastewater science, says Adam Gushgari, an environmental engineer who before 2020 worked on testing wastewater for opioids. He now develops a range of wastewater surveillance projects for Eurofins Scientific, a global laboratory testing and research company headquartered in Luxembourg. A subfield that was once a few handfuls of specialists has grown into more than enough scientists to pack a stadium, he says. And they come from a wide variety of fields — environmental science, analytical chemistry, microbiology, epidemiology and more — all collaborating to track the coronavirus, interpret the data and communicate results to the public. With other methods of monitoring COVID-19 on the decline, wastewater surveillance has become one of health experts’ primary sources for spotting new surges. Hundreds of wastewater treatment plants across the United States are now part of COVID-19 testing programs, sending their data to the National Wastewater Surveillance System, or NWSS, a monitoring program launched in fall 2020 by the U.S. Centers for Disease Control and Prevention. Hundreds more such testing programs have launched globally, as tracked by the COVIDPoops19 dashboard run by researchers at the University of California, Merced. In the last year, wastewater scientists have started to consider what else could be tracked through this new infrastructure. They’re looking at seasonal diseases like the flu, recently emerging diseases like bird flu and mpox, formerly called monkeypox, as well as drug-resistant pathogens like the fungus Candida auris. The scientists are even considering how to identify entirely new threats. Wastewater surveillance will have health impacts “far broader than COVID,” predicts Amy Kirby, a health scientist at the CDC who leads NWSS. But there are challenges getting from promise to possible. So far, such sewage surveillance has been mostly a proof of concept, confirming data from other tracking systems. Experts are still determining how data from our poop can actually inform policy; that’s true even for COVID-19, now the poster child for this monitoring. And they face public officials wary of its value and questions over whether, now that COVID-19 health emergencies have ended, the pipeline of funding will be cut off. This monitoring will hopefully become “one of the technologies that really evolves post-pandemic to be here to stay,” says Mariana Matus, cofounder of Biobot Analytics, a company based in Cambridge, Mass., that has tested sewage for the CDC and many other health agencies. But for that to happen, the technology needs continued buy-in from governments, research institutions and the public, Matus and other scientists say. 
How wastewater testing works Wastewater-based epidemiology has a long history, tracing back at least to physician John Snow’s 1850s observations that cholera outbreaks in London were connected to contaminated water. In the 1920s and ’30s, scientists began to take samples from sewage and study them in the lab, learning to isolate specific pathogens that cause disease. These early researchers focused on diseases that spread through contaminated water, such as polio and typhoid. Today, automated machines typically retrieve sewage samples. The machines used to collect waste beneath maintenance hole covers are “like R2-D2 in terms of size” or smaller, says Erin Driver, an environmental engineer at Arizona State University in Tempe who works on collection methods. Driver can plug this machine, or a larger version used for sampling at wastewater treatment plants, into a water pipe and program it to pull a small amount of sewage into an empty bottle at regular intervals, say, once an hour for 24 hours. She and colleagues are developing smaller versions of the automated sampler that could be better suited for more targeted sampling. What happens in the lab to that bottle of waste depends on what scientists are testing for. To test for opioids and other chemicals, scientists might filter large particles out of the sample with a vacuum system, extract the specific chemicals that they want to test, then run the results through a spectrometer, an instrument that measures chemical concentrations by analyzing the light the chemicals give off. To determine levels of SARS-CoV-2 or another virus, a scientist might separate liquid waste from solid waste with a centrifuge, isolate viral genetic material, and then test the results with a PCR machine, similar to testing someone’s nose swab. Or, if scientists want to know which SARS-CoV-2 variants are present, they can put the material through a machine that identifies a variety of genetic sequences. Would the coronavirus even show up in waste? In the panicked early days of the pandemic, an urgent basic question loomed. “Will this even work?” remembers Marlene Wolfe, an environmental microbiologist at Emory University in Atlanta. While polio is spread through fecal matter, there were early hints that the coronavirus mostly spreads through the air; scientists initially weren’t even sure that it would show up in sewage. On the same day in 2020 that the San Francisco Bay Area went on lockdown, Wolfe and colleagues at Stanford University, where she was based at the time, got a grant to find out. The team was soon spending hours driving around the Bay Area to collect sewage samples, “navigating lockdown rules” and negotiating special permissions to use lab space, she says. “We were anxiously waiting to see if our first samples would show a positive result for SARS-CoV-2,” Wolfe says. Not only did the sewage samples test positive, Wolfe and her colleagues found that coronavirus levels in the Bay Area’s wastewater followed the same trends as reported cases, the team reported in December 2020 in Environmental Science & Technology. When case counts went up, more virus appeared in the sewage, and vice versa. Early projects in other parts of the country showed similar results. More than three years later, data on reported cases have become much less reliable.
Fewer people are seeking out lab-based PCR tests in favor of easier-to-access at-home tests — with results often not reported. Wastewater trends have become the best proxy to provide early warnings of potential new COVID-19 surges, such as the increased spread this summer, to health officials and the public alike. Opening the tracking floodgates In summer 2022, wastewater tracking got a new chance to prove itself. Mpox was rapidly spreading globally, including in the United States. But tests were limited, and the disease, which was spreading primarily through intimate contact between men, quickly drew social stigma, leading some people to hesitate in seeking medical care. Within a few weeks of the start of the U.S. outbreak, Wolfe and her colleagues, as well as research teams at Biobot and other companies, had developed tests to identify mpox in sewage. Just as scientists had seen with COVID-19, mpox trends in wastewater matched trends in official case numbers. In California, wastewater results even suggested that the disease may have spread farther than data from doctors’ offices suggested, Wolfe and collaborators reported in February in the New England Journal of Medicine. Like COVID-19, mpox doesn’t transmit through the water, but sewage testing still picked up the virus. The early results from that summer outbreak convinced some health officials that wastewater technology could be used for many diseases, no matter how they spread, Matus says. Scientists are starting to find more and more infectious diseases that can be tracked in sewage. “Honestly, everything that we’ve tried so far has worked,” says Wolfe, who is now a principal investigator of WastewaterSCAN, a national sewage testing project led by researchers at Stanford and Emory. The project team currently tests samples for six different viruses and is working on other tests that it can send out to the more than 150 sites in its monitoring network. Through an informal literature review of pathogens important for public health, scientists at Biobot found that previous research had identified 76 out of 80 of them in wastewater, stool or urine, suggesting that those pathogens could be monitored through sewage. The list ranges from the chicken pox virus to the microbes that cause sexually transmitted diseases like chlamydia to the tickborne bacteria that cause Lyme disease. Finding focus With this much opportunity, the question on many researchers’ minds is not, “What can we test for?” but “What should we test for?” In January, a report put out by the National Academies of Sciences, Engineering and Medicine came up with three criteria. The pathogen should threaten public health. It should be detectable in wastewater. And it should generate data that public health agencies can use to protect their communities. Given all the threats and hints of what can be found in wastewater, the first two criteria don’t narrow the field too much. So for now, researchers are taking cues from state and local public health officials on which pathogens to prioritize. Biobot is working on tests for common diseases like the flu, RSV, hepatitis C and gonorrhea. And the CDC has its eye on some of the same common pathogens, as well as strategies for tracking antimicrobial resistance, a threat that has increased during the pandemic as health systems have been under strain. Even if they choose the perfect targets, though, researchers also have to figure out how to generate useful data. For now, that’s a sticking point. 
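Stepping back to the collection mechanics described in the previous section, the time-weighted composite sampling that Driver describes (pull a small aliquot at a fixed interval and pool everything into one bottle) is simple enough to sketch in code. Everything below is illustrative: the volumes, interval and function names are hypothetical, not taken from any real sampler's firmware.

```python
import time

ALIQUOT_ML = 50      # hypothetical volume drawn per pull
INTERVAL_S = 3600    # one pull per hour, as in the example above
PULLS = 24           # spread over a 24-hour window

def draw_aliquot(ml: float) -> None:
    """Stand-in for actuating the pump on a real sampler."""
    print(f"drawing {ml} mL of sewage into the composite bottle")

def run_composite_sample() -> float:
    """Collect a 24-hour time-weighted composite sample and
    return the total volume pooled in the bottle."""
    total_ml = 0.0
    for pull in range(PULLS):
        draw_aliquot(ALIQUOT_ML)
        total_ml += ALIQUOT_ML
        if pull < PULLS - 1:
            time.sleep(INTERVAL_S)  # wait for the next scheduled pull
    return total_ml

if __name__ == "__main__":
    print(f"composite volume: {run_composite_sample()} mL")
```

The point of compositing is that a single bottle then represents the whole day's flow rather than one moment's.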
How to use the data Tracking pathogens is one thing. But determining how the results correspond to actual numbers of sick people is another, even in the case of COVID-19, where researchers now have years of detailed data. As a result, many public health officials aren’t yet ready to make policy decisions based on poop data. In New York City over the last three years, for example, the local government has poured more than $1 million into testing for COVID-19, mpox and polio in sewage from the city’s water treatment plants. But the city’s health department hasn’t been using the resulting data to inform local COVID-19 safety measures, so it’s unclear what’s being done with the data. Health officials are used to one swab per person, says Rachel Poretsky, a microbiologist at the University of Illinois Chicago. She also heads wastewater monitoring for the city of Chicago and the state of Illinois. Public health training relies on identifying individual sick people and tracing how they became ill. But in wastewater surveillance, one data point could represent thousands of sick people — and the data come from the environment, rather than from hospitals and health clinics. What to do next when positive results turn up isn’t as obvious. Numbers collected from the health care system always represent patients, so a spike indicates a surge in cases. In the case of sewage data, however, environmental factors like weather, local industries and the coming and going of tourists also can create “weird outliers” that resist easy interpretation, Poretsky says. For instance, a massive rainstorm might dilute samples, or chemical runoff from a factory might interfere with a research team’s analytical methods. Data interpretation only gets more complicated when scientists begin testing wastewater for an increasing number of health threats. Every pathogen’s data need to be interpreted differently. With coronavirus data, for example, wastewater tests consistently come back positive, so interpreting the data is all about looking for trends: Are viral concentrations going up or down? How does the amount of virus present compare with the past? A spike in a particular location might signal a surge in the community that hasn’t yet been picked up by the health care system. The community might respond by boosting health resources, such as opening vaccine clinics, handing out free masks and at-home tests, or adding staff to local hospital emergency departments. Mpox, on the other hand, has infected far fewer people, and positive tests have been rare after last summer’s outbreaks ended. Now, researchers are simply watching to see whether the virus is present or absent in a given sewershed. “It’s more about having an early warning,” Matus says. If a sewershed suddenly tests positive for mpox after negative results for the last few months, health officials might alert local doctors and community organizations to look out for anyone with symptoms, aiming to identify any cases and prevent a potential outbreak. Another complicated pathogen is C. auris, a fungus that has developed resistance to common drugs. It can spread rapidly in health care settings — and be detected in sewage. Researchers from Utah and Nevada reported in February in Emerging Infectious Diseases that it was possible to track C. auris in the sewage from areas experiencing outbreaks. 
If hospitals or health officials could identify the presence of this fungus early, that information could guide public health actions to curb outbreaks, says Alessandro Rossi, a microbiologist at the Utah Public Health Laboratory in Salt Lake City. But interpreting the warnings isn’t as clear-cut for C. auris as for viruses. The fungus can grow in sewage after it leaves health care facilities, Rossi says. The pathogen has “the potential to replicate, form biofilms and colonize a sewershed.” In other words, C. auris can create its own data interference, potentially making wastewater results seem worse than they really are. Moving wastewater into the future Most current testing programs are reactive. By looking at health threats one at a time using specific PCR tests, the programs mostly confirm that pathogens we already are worrying about are getting people sick. But some scientists, like Wim Meijer, envision a future in which wastewater monitoring wades into the unknown and alerts us to unusual disease outbreaks. The microbiologist, of University College Dublin, heads Ireland’s wastewater surveillance program. Ideally, in this ahead-of-the-curve future, after detecting something alarming in sewage, his team could closely collaborate with health officials to study the pathogen and, if necessary, start combating the threat. One idea for turning the tech proactive is to prepare for new health threats that we can see coming. For example, Meijer and his colleagues are interested in screening Ireland’s sewage for the H5N1 bird flu, but they are not yet doing this testing. Another approach takes advantage of genetic testing technology to look at everything in our waste. Kartik Chandran, an environmental engineer at Columbia University who has mapped sewers’ microbial ecosystems with this technique, describes it as “trying to shine the light more broadly” rather than looking where the light is already shining brightest. Such an approach might identify new pathogens before sick people start going to the doctor’s office, potentially leading to an earlier public health response. But with health officials still unsure of how best to use wastewater data, much more basic research is needed first. “People think wastewater surveillance is the answer to everything, and clearly that’s not true,” says Kirby, of the CDC, reflecting concerns from the state and local officials that she collaborates with at NWSS. Before diving ahead into proactive surveillance, Kirby and her colleagues are working to set up basic wastewater standards and protocols for health agencies. Priorities include evaluating how sewage trends correlate to cases for different pathogens and developing standards for how to use the data. The wastewater surveillance field also needs to keep growing if the goal is to monitor and contribute to global health, with more sites contributing data and more scientists to analyze it. All of this work requires sustained funding. The CDC’s program so far has been funded by COVID-era legislation and will run out of money in 2025. While wastewater surveillance is more cost-effective than other types of testing, it still requires a lot of resources. Washington’s state health department, for example, paid Biobot more than $500,000 for a one-year sewage testing contract, while the CDC has paid the company more than $23 million since 2020 for its work with NWSS. For the last few years, wastewater surveillance has been a giant, messy group project.
Scientists have collaborated across fields and locations, across private and public institutions, through Zoom calls and through poop samples shipped on ice. They’ve shown that waste might hold the key to a new way of tracking our collective health. A lot of unanswered questions remain, and it could be some time before your local sewer can tell you exactly what disease risks you might be facing. But COVID-19 pushed thousands of experts to look into their toilets and start asking those questions. “Now, everyone’s a believer,” says Driver, of ASU. “Everyone’s doing the work.”
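For readers who want to see what the trend-watching described earlier ("Are viral concentrations going up or down?") looks like in practice, here is a minimal sketch. The data and thresholds are hypothetical, and real programs use far more careful statistics, including corrections for the dilution and outlier problems Poretsky describes, but the core comparison is the same: a smoothed recent value against a smoothed earlier one.

```python
from statistics import mean

def flag_surge(concentrations: list[float],
               window: int = 3, ratio: float = 1.5) -> bool:
    """Flag a possible surge when the smoothed latest values sit well
    above the smoothed values one window earlier. Thresholds are arbitrary."""
    if len(concentrations) < 2 * window:
        return False  # not enough history to compare against
    recent = mean(concentrations[-window:])
    earlier = mean(concentrations[-2 * window:-window])
    return earlier > 0 and recent / earlier >= ratio

# Hypothetical weekly SARS-CoV-2 levels (gene copies per liter of sewage)
weekly = [120, 110, 130, 125, 180, 260, 340]
print(flag_surge(weekly))  # True: the smoothed signal roughly doubled
```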
Environmental Science
Bridge between hydrophobicity and hydrophilicity of flax fiber offers breakthrough in multipurpose oil-water separation The large number of oily wastewater discharges and oil spills poses a severe threat to the environment and human health. In response to this challenge, a number of functional materials have been developed and applied in oil-water separation as oil barriers or oil sorbents. These materials fall into two main categories: artificial and natural. Natural materials such as green bio-materials are generally low-cost, abundant and biodegradable, are regarded as promising alternatives for oil-water separation, and have attracted increasing attention. Many kinds of biomass materials, such as cotton fabrics, plant fibers and kapok, have been used for oil-water separation. To further improve the oil-water separation performance of biomass materials, many of them have been coated with a functional surface layer of special wettability. However, such modified fibers are typically either hydrophobic or hydrophilic, without the ability to switch between the two states (switchable wettability). This limitation may hinder their practical application in oil-water separation. Functional flax fibers with switchable surface wettability are thus desired. In this study, researchers from the University of Calgary, University of Regina, Concordia University, Canadian Light Source and McElhanney Inc. aimed to develop a functional flax fiber with switchable wettability for multipurpose oil-water separation. The flax fiber was coated with ZnO-hexadecyltrimethoxysilane (HDTMS) nanocomposites through a plasma-grafted poly(acrylic acid) (PAA) layer, which acted as the binding agent. The as-prepared PAA-ZnO-HDTMS flax fiber was initially hydrophobic and could be switched to hydrophilic through UV irradiation. Its hydrophobicity could be easily recovered by storing the fiber in a dark environment for several days without UV irradiation. The study, entitled "Functional flax fiber with UV-induced switchable wettability for multipurpose oil-water separation," is published in Frontiers of Environmental Science & Engineering. To optimize the performance of the PAA-ZnO-HDTMS flax fiber, the effects of ZnO and HDTMS concentrations on its switchable wettability were investigated. The developed PAA-ZnO-HDTMS flax fiber was comprehensively characterized through contact-angle measurement, SEM imaging, and synchrotron-based FTIR and X-ray analyses. The optimized PAA-ZnO-HDTMS flax fiber initially had a large water contact angle (~130°) in air and an extremely small oil contact angle (~0°) underwater. After UV treatment, the water contact angle decreased to 30°, while the underwater oil contact angle increased to more than 150°. The mechanism of this UV-induced switchable wettability was also investigated. The researchers concluded that the ZnO-HDTMS nanocomposites immobilized on the flax fiber surface endowed the as-prepared PAA-ZnO-HDTMS flax fiber with its UV-induced switchable wettability. During the modification process, the silanol groups of HDTMS bonded with hydroxyl groups on the surfaces of the flax fiber and the ZnO nanoparticles. As a result, the alkyl groups of HDTMS were exposed on the surface of the fresh PAA-ZnO-HDTMS flax fiber, making it hydrophobic. Because nano-ZnO is a photo-responsive semiconducting material, electron-hole pairs could be generated on its surface during UV irradiation.
These holes could interact with the lattice oxygen of nano-ZnO to produce oxygen vacancies, which could then adsorb the surrounding water in the atmosphere to generate hydroxyl groups. These hydroxyl groups changed the surface property of the modified flax fiber from hydrophobicity to hydrophilicity. When the PAA-ZnO-HDTMS flax fibers were stored in a dark environment, ambient oxygen could replace the hydroxyl groups, reconverting the flax fiber surface from hydrophilic to hydrophobic. Based on this UV-induced switchable wettability, the developed PAA-ZnO-HDTMS flax fiber was applied to remove oil from immiscible oil-water mixtures and oil-in-water emulsions with great reusability over multiple cycles. Thus, the developed flax fiber could be further fabricated into oil barriers or oil sorbents for oil-water separation, which could be an environmentally friendly alternative in oil spill response and oily wastewater treatment. More information: Xiujuan Chen et al, Functional flax fiber with UV-induced switchable wettability for multipurpose oil-water separation, Frontiers of Environmental Science & Engineering (2022). DOI: 10.1007/s11783-022-1588-6 Provided by Higher Education Press
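One piece of background the article takes for granted (a standard surface-science convention, not something specific to this study): a surface counts as hydrophobic when its water contact angle exceeds 90° and hydrophilic below that. The reported values fall clearly on either side of the threshold:

```latex
\underbrace{\theta_w \approx 130^{\circ} > 90^{\circ}}_{\text{fresh fiber: hydrophobic}}
\quad\xrightarrow{\ \text{UV}\ }\quad
\underbrace{\theta_w \approx 30^{\circ} < 90^{\circ}}_{\text{after irradiation: hydrophilic}}
```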
Environmental Science
When it comes to the United States phasing out PFAS, the “forever chemicals” are true to their nickname in more ways than one. It’s not going to be straightforward or swift to eliminate these substances from countless industries, even though they have been potentially linked to myriad health issues. Found in products like food packaging, clothes and firefighting foam, PFAS have contaminated drinking water sources nationwide since becoming commercially available in the middle of the last century, building up in the environment where they won’t break down for a very long time. A recent study concluded that rainwater, surface water and ground soil across the globe are extensively contaminated with these chemicals to a point that cannot be reversed without expensive, advanced technological intervention. “This stuff is toxic at incredibly low levels and it’s persistent — it stays there for hundreds of years in the groundwater, thousands of years,” said Graham Peaslee, a Notre Dame professor and researcher who’s tested many products for PFAS in his lab. “And that means the next generations will be drinking it, and that’s not the kind of legacy we want to leave our kids.” It’s a familiar story that has played out before, from DDT to PCBs. A hazardous chemical is widely used, its adverse health and environmental effects are revealed far after the fact, scientists and other concerned parties ring the alarm, and the substance in question finally garners federal attention, sometimes in the form of improved regulation or, more rarely, a full-stop ban. We’re well within the third act of that script when it comes to PFAS, with many researchers and consumers calling on industries and institutions to phase these chemicals out of their products, manufacturing processes and general use, and instead pursue safer alternatives that serve similar functions. The Environmental Protection Agency recently issued two updated interim drinking water health advisories for PFOA and PFOS — two legacy, or “long chain,” and well-studied PFAS that have been phased out of manufacturing in the U.S. but are still used in other parts of the world, and products or materials that contain them can be imported. The agency also issued advisories for two newer, “short chain” PFAS known as PFBS and “GenX chemicals” that were developed to replace the legacy substances yet are still problematic from a health and environmental standpoint. Those EPA advisories don’t carry the force of law, PFAS are largely unregulated and nothing is stopping manufacturers from using the chemicals in their supply chains, which are often murky to begin with. Companies face limited pressure — at least at the federal level — to get the chemicals out. Multiple states, though, have taken their own legislative steps toward phasing PFAS out or outright banning them in certain products. Beyond the regulatory world, researchers are leading the way with a vision of what it means to address PFAS contamination at its source. Some companies are also voluntarily taking steps to help make that happen. It’s realistically going to take several more decades, Peaslee said, before we can truly get a handle on PFAS. But that doesn’t mean that efforts to stop further contamination by getting it out of existing manufacturing practices and products will be fruitless. What are PFAS, and why are they considered hazardous? The term “PFAS” stands for per- and polyfluoroalkyl substances.
It refers to a family of thousands of different chemicals that have a wide range of commercial and industrial uses. These substances are particularly good at repelling things — their dual hydrophobic and hydrophilic properties help them resist water, plus oils and stains. These qualities help make products waterproof, stain-proof or non-stick, in addition to their use as industrial lubricants. PFAS have been detected in goods ranging from cosmetics to period underwear to anti-fogging cloths and sprays for glasses, among many others. A 2020 study identified them across 200 different use categories. Only a handful of those thousands of chemicals have been well-studied to determine their impacts on human health. Many experts argue for approaching PFAS as a class of chemicals — as in assuming that less studied members of the chemical family may have health and environmental impacts akin to those that have been better researched, and making decisions around their use accordingly. Existing evidence suggests that high levels of exposure to PFAS – among those that have been better studied – may lead to increased cholesterol levels, decreased vaccine responses in children, higher risk of preeclampsia in pregnant people and increased risk of kidney and testicular cancer, and other outcomes, according to the Agency for Toxic Substances and Disease Registry. In other words, limited research so far suggests that these chemicals can affect multiple systems in the body, said Courtney Carignan, an environmental epidemiologist and assistant professor at Michigan State University. “It seems that the property that makes them useful — that they’re very persistent and they have this one part of them that really likes water and the other part that does not — also seems to be what makes them problematic in the body,” Carignan said. Legacy PFAS like PFOA and PFOS are known to take years to leave the body, whereas the shorter-chain ones more often in use today are expelled over a timeframe of months. For consumers, labels can be confusing or misleading — a product may boast its “PFOA-free” status, for example, but that’s just one chemical within the PFAS family. Both legacy and shorter-chain types persist in the environment and can have human health impacts regardless of how long your body takes to eliminate them, which is why many experts maintain that there’s no world in which continuing their use is justified. “I’ve never met the good PFAS, and there are no such things,” Peaslee said. “They are all long-lived, they all bioaccumulate, a good number of them are shown to be toxic and the rest we just haven’t measured yet.” How do PFAS get into our bodies? Humans can be exposed to PFAS via ingestion, such as by drinking contaminated water or eating fish in which these chemicals have bioaccumulated. Inhalation is another route, and it can happen via indoor air — for example, if the furniture or carpeting in your home or office has been treated with PFAS to prevent stains — or outdoor air, particularly if you live close to a factory that emits PFAS through its stacks. When it comes to major sources of PFAS contamination in the U.S., “the biggest culprit to date” has been firefighting foam, also known as AFFF, Peaslee said. As of 2021, the Department of Defense was investigating nearly 700 military installations where this foam was used extensively, often during training operations, where it had ample opportunity to permeate the environment.
Multiple institutions have made the switch to PFAS-free firefighting foam in recent years, or are at least in the process of doing so. Congress has ordered the Department of Defense, for example, to switch to PFAS-free firefighting foam by October 2024. But Peaslee noted that the transition isn’t quite that simple — for one thing, countless gallons of the older, fluorinated foam are still on the shelves at fire stations nationwide, and each container could contaminate hundreds of millions of gallons of water. Safely disposing of it is a massive task. The turnout gear that firefighters wear when they respond to fires is also often treated with PFAS in order to help it resist moisture and heat, and many are concerned that wearing and handling it could put them at additional risk. An independent committee facilitated by the National Fire Protection Association has recently drafted new proposed safety standards for that gear, which are open to public comment. Though exposure through consumer products is a reasonable concern, there are two even larger facets of the problem, said Shari Franjevic, who leads the GreenScreen For Safer Chemicals program at the nonprofit Clean Production Action. One is how that product came to exist in the first place – people who might work at a plant where PFAS are produced or heavily used are typically among the most exposed to the hazardous chemicals. The other is where it will end up once it’s discarded, which is a problem for those who live nearby and are exposed through contaminated drinking water. Once a product that contains PFAS is thrown away, it can contaminate the environment in the form of leachate that eventually passes through our wastewater treatment systems, which were not designed to remove those chemicals, Carignan said. “I can wrap my hotdog or hamburger in this packaging, and the grease will never come through it,” Peaslee said, explaining the cycle. “That’s good, except that when we throw that wrapper away, 100% of that PFAS will come off in a landfill in 60 days, and then we’re all drinking it.” Getting PFAS out of products Plenty of products contain PFAS on purpose in order to perform a specific function. But to Franjevic and the GreenScreen program, there’s a distinction between intentionally added PFAS and those that most likely resulted from cross-contamination during the manufacturing process. She argues that “turning off the tap on PFAS” means prioritizing getting the chemicals out of products into which they’ve historically been added on purpose. GreenScreen helps companies by examining whether chemicals in their products have the potential to harm human health, like PFAS, and works on how to either swap them out with safer alternatives or reduce exposure if their use is absolutely essential. This comprehensive, hazard-first approach helps prevent manufacturers from going down the well-trod path of using substitutes that still come with a slew of their own health and environmental concerns. In the PFAS world, many researchers point to those shorter-chain chemicals currently still in use that were considered solid replacements for legacy PFAS as an example of that phenomenon. Meanwhile, a plastic part that’s used in a broader product might not contain PFAS by design, but could still have detectable amounts of the forever chemicals when tested. That could be because the manufacturer uses a PFAS-containing release agent that helps each part pop out of its mold faster to speed up the production process, Franjevic said.
Supply chains are often long, and there’s plenty of room for cross-contamination. In her view, it’s a first-things-first type of situation: Give companies a realistic pathway toward getting intentionally added PFAS out of their products, and then address impurities. “To notch down impurities now to really, really low thresholds puts almost an unfair burden [on manufacturers], and it’s not prioritizing where the biggest impact is,” she said. “And so we’re trying to be pragmatic about, ‘How do we really create the change we need to see in the world?’” Several states have passed legislation aimed at getting toxic chemicals out of consumer products, including Washington. After establishing what’s hazardous and what’s a viable alternative, the state can take steps to restrict the use of a chemical of concern or mandate that consumers be notified if a product contains it, explained Rae Eaton, a chemist in the Hazardous Waste and Toxics Reduction Program at the Washington State Department of Ecology. Eaton works on a program that evaluates short-term food packaging — think takeout clamshell containers, bowls that hold hot soup or paper sandwich wrappers. PFAS are used in some of those materials to keep food from sticking to or soaking through its container before that packaging is discarded. “We’re using chemicals that can last for hundreds of years, sometimes for products that get used for 45 minutes, and then they go in the trash or they go in your compost,” Eaton said. Eaton noted that some compostable or recyclable food packaging contains PFAS, which is not good news for the industrial compost sites they’re designed for. She and her colleagues have released two reports on takeout-style packaging that analyzed a range of existing products and the purposes they serve, then detailed which alternative materials could be feasibly used in place of PFAS. It’s not a complete analysis of every alternative on the market, she said, but it does include a range of accessible options that are already in use. Some of those alternatives are wax- or clay-coated materials, materials that use polylactic acid (PLA), a biodegradable polymer that can break down under commercial composting conditions, or even reusable packaging. Companies can use her team’s analysis as a resource on how to feasibly move away from PFAS-containing products and toward safer, more sustainable options. PFAS will be banned in nine types of food packaging in Washington by September 2024. Eaton said her team is now researching alternatives for longer-term food packaging, including microwaveable popcorn bags, baking paper and pet food bags, and actively soliciting input from businesses that make them, particularly if they already don’t use PFAS. What can governments and individuals do? In 1987, the Montreal Protocol aimed to phase out hazardous substances — including CFCs, or chlorofluorocarbons — that were known at the time to be depleting the ozone layer in Earth’s atmosphere. Today, that international agreement is largely considered a success — as of 2019, nations phased out 98 percent of ozone-depleting substances, and the hole in the ozone layer that prompted international cooperation was getting smaller, according to the UN Environment Program. But there’s no comparable international agreement or imperative on PFAS. Some environmentally minded companies and governments have led the charge on working to ban or phase out some of these chemicals.
But it’s less clear how long it will take others to catch up – and change will depend on decision-makers committing to the effort. “There’s a combination of challenges that we have to overcome, [including] technical challenges to try and find replacements that work, but also the vested economic interests that we have to tackle,” said Ian Cousins, a professor in the department of environmental science at Stockholm University. He’s a leading proponent of a framework that depends on defining when and where the use of PFAS is actually essential. Plenty of companies are already interested in and working toward making a proactive pivot away from PFAS. But the U.S. regulatory system largely lacks teeth on this issue, and it’s not clear that federal officials will mandate that American companies stop using PFAS in their products and supply chains anytime soon. For now, when it comes to companies that aren’t taking initiative, a little consumer pressure can go a long way, Franjevic said. She encouraged concerned consumers to contact companies they care about and ask if their products contain PFAS or any other harmful chemicals, like phthalates. Corporations tend to track those types of requests, and when they get to a certain number, she added, they may take action. “If they get enough people asking, they will do the work,” Franjevic said. “It’ll get on their radar. So ask.”
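One number from earlier in this article that benefits from a back-of-the-envelope check is Peaslee's point that a single container of fluorinated foam could contaminate hundreds of millions of gallons of water. The inputs below are illustrative assumptions (a 5-gallon pail of foam concentrate at roughly 1% PFAS by volume, diluted to the EPA's 2016 lifetime health advisory of 70 parts per trillion), not figures from the article:

```python
# All inputs are illustrative assumptions, not figures from the article.
PAIL_GAL = 5           # hypothetical container of foam concentrate
PFAS_FRACTION = 0.01   # assume ~1% PFAS content by volume
ADVISORY = 70e-12      # EPA's 2016 lifetime advisory: 70 parts per trillion

pfas_gal = PAIL_GAL * PFAS_FRACTION
diluted_gal = pfas_gal / ADVISORY
print(f"~{diluted_gal:.1e} gallons")  # ~7.1e+08, i.e. hundreds of millions
```

Even with generous rounding, the order of magnitude lands where Peaslee says it does.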
Environmental Science
International Space Station may be more polluted than most American homes Concentrations of toxic contaminants found in dust on the International Space Station (ISS) surpass those found in floor dust in many U.S. households, a new study has found. Levels of organic pollutants in dust samples from ISS air filters were higher than the median values found in U.S. and Western European homes, according to the study, published on Tuesday in Environmental Science and Technology Letters. “Our findings have implications for future space stations and habitats, where it may be possible to exclude many contaminant sources by careful material choices in the early stages of design and construction,” co-author Stuart Harrad, a professor of environmental chemistry at the University of Birmingham, said in a statement. Contaminants identified in this so-called “space dust” included polybrominated diphenyl ethers (PBDEs), hexabromocyclododecane (HBCDD), “novel” brominated flame retardants (BFRs), organophosphate esters (OPEs), polycyclic aromatic hydrocarbons (PAH), per- and polyfluoroalkyl substances (PFAS), and polychlorinated biphenyls (PCBs). Certain commercially formulated PBDEs are classified as persistent organic pollutants under the United Nations Environment Program’s Stockholm convention, as are PCBs, some types of PFAS and HBCDD, the authors noted. Meanwhile, some PAH have been classified as human carcinogens, and some OPEs are under consideration for restriction by the European Chemicals Agency. BFRs and OPEs are used in many countries to meet fire safety regulations in applications like electrical equipment, building insulation, furniture fabrics and foams, the researchers explained. PAH are emitted in combustion processes associated with hydrocarbon fuels, while PCBs were used in building and window sealants and in electrical equipment. PFAS, also known as “forever chemicals,” are key ingredients in certain kinds of firefighting foams, as well as in many consumer products, such as non-stick pans, waterproof apparel and cosmetics. The PBDEs in the ISS dust samples could come from inorganic fire retardants, which are used to prevent fabrics and webbing from igniting, the authors hypothesized. The ISS houses “a unique indoor environment inhabited by humans for over 20 years since its launch in November 1998,” the scientists noted. The vulnerability of spacecraft to fire means that “very careful attention is paid to the flammability of ISS contents,” they explained. The air inside the ISS is constantly recirculated, with eight to 10 changes per hour, according to the study. While the system does eliminate carbon dioxide and gaseous trace contaminants, it is unknown to what degree it can remove chemicals like flame retardants. In addition to fire retardants, the researchers also identified the presence of “off-the-shelf” items on board — such as cameras, MP3 players, tablets, medical devices and clothing — as possible sources of many of the chemicals they detected. Although levels of these contaminants exceeded those of many U.S. and Western European households, Harrad stressed that the concentrations of these chemicals “were generally within the range found on Earth.” Nonetheless, Harrad and his colleagues expressed hope that their findings could be an asset as policymakers plan for a future that increasingly reaches beyond the bounds of Earth.
“The results do have implications for future space stations and habitats, where it may be possible to exclude many contaminant sources by careful material choices in the early stages of design and construction,” they concluded.
Environmental Science
Gas stoves in California homes are leaking cancer-causing benzene, researchers found in a new study published on Thursday, though they say more research is needed to understand how many homes have leaks. In the study, published in Environmental Science and Technology, researchers also estimated that over 4 tons of benzene per year are being leaked into the atmosphere from outdoor pipes that deliver the gas to buildings around California — the equivalent of the benzene emissions from nearly 60,000 vehicles. And those emissions are unaccounted for by the state. The researchers collected samples of gas from 159 homes in different regions of California and measured what types of gases were being emitted into homes when stoves were off. They found that all of the samples they tested contained hazardous air pollutants, like benzene, toluene, ethylbenzene and xylene (BTEX), all of which can have adverse health effects in humans with chronic exposure or acute exposure in larger amounts. Of most concern to the researchers was benzene, a known carcinogen that can lead to leukemia and other cancers and blood disorders, according to the National Cancer Institute. The finding could have major implications for indoor and outdoor air quality in California, which has the second-highest level of residential natural gas use in the United States. “What our science shows is that people in California are exposed to potentially hazardous levels of benzene from the gas that is piped into their homes,” said Drew Michanowicz, a study co-author and senior scientist at PSE Healthy Energy, an energy research and policy institute. “We hope that policymakers will consider this data when they are making policy to ensure current and future policies are health-protective in light of this new research.” Homes in the Greater Los Angeles, North San Fernando Valley and Santa Clarita Valley areas had the highest benzene-in-gas levels. Leaks from stoves in these regions could emit enough benzene to significantly exceed the limit determined to be safe by the California Office of Environmental Health Hazard Assessment. This finding in particular didn't surprise residents and health care workers in the region who spoke to The Associated Press about the study. That's because many of them experienced the largest-known natural gas leak in the nation in Aliso Canyon in 2015. Back then, 100,000 tons of methane and other gases, including benzene, leaked from a failed well operated by Southern California Gas Co. It took nearly four months to get the leak under control, and the exposure resulted in headaches, nausea and nosebleeds. Dr. Jeffrey Nordella was a physician at an urgent care in the region during this time and remembers being puzzled by the variety of symptoms patients were experiencing. “I didn't have much to offer them,” he said, other than helping them try to detox from the exposures. That was an acute exposure to a large amount of benzene, which is different from chronic exposure to smaller amounts, but “remember what the World Health Organization said: there's no safe level of benzene,” he said. Kyoko Hibino was one of the residents exposed to toxic air pollution as a result of the Aliso Canyon gas leak. After the leak, she started having a persistent cough and nosebleeds and eventually was diagnosed with breast cancer, which has also been linked to benzene exposure.
Her cats also started having nosebleeds, and one recently passed away from leukemia. “I'd say let's take this study really seriously and understand how bad (benzene exposure) is,” she said.
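As a rough consistency check on the figures at the top of this article (the 4 tons per year and the 60,000-vehicle equivalence are from the study; the per-vehicle division is mine and assumes metric tonnes), the implied benzene emission per vehicle works out to a few tens of grams a year:

```python
# Quantities quoted in the article; the division is just a plausibility check.
TONNES_PER_YEAR = 4      # benzene leaked statewide from outdoor gas pipes
VEHICLE_EQUIV = 60_000   # "nearly 60,000 vehicles"

grams_per_vehicle = TONNES_PER_YEAR * 1_000_000 / VEHICLE_EQUIV
print(f"~{grams_per_vehicle:.0f} g of benzene per vehicle per year")  # ~67 g
```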
Environmental Science
Cop27 may have been dismissed as a circus in some quarters, but a group of scientists and performers is staging a real circus in Ireland to inspire people to help tackle the climate crisis. The eclectic mix of engineers, conservation experts, clowns, jugglers and acrobats will perform this weekend in what is billed as Europe’s first circus science and environment festival. Circus Science by the Sea aims to harness entertainment and storytelling for climate awareness and action – and possibly to sprinkle some joy amid gloom over Cop27, the UN conference in Egypt. The festival will take place on Saturday and Sunday in the town of Westport and on Achill Island in County Mayo, scenic parts of Ireland’s Atlantic coast that face environmental challenges. “We don’t want people to feel powerless and miserable and that it’s all too late,” said Dea Birkett, the festival’s director and a founder of Circus250, an Achill-based community interest company. “We want people to laugh, to gasp and be full of hope. We want the joy of circus to be an engine for change.” The inaugural festival is part of Science Week, an annual event funded by the state agency Science Foundation Ireland. The shows, discussions and workshops are free, as are the puns. “Ireland’s most elevated scientists – science on stilts – taking interactive science to new heights,” says a Mayo county council publicity blurb. The festival would reflect the surrounding landscape, said Birkett. “In Achill we’re on the edge of the world. The world’s plastic ends up on our beaches. Climate change affects rural communities like ours. We notice the seasons, agriculture, the water level.” The festival has gathered shows that routinely tour Ireland and the UK. One features Angelica Santander as a clown named Juanita who appears buried under plastic that she accumulated through everyday use over the past year. Santander mimics swimming and dancing through it before inviting the audience to help repurpose and reimagine the plastic. A fruit-juggling and plate-spinning show weaves in information about an orange’s air miles. In a show called “nature’s secret circus” an environmental scientist and circus tutors teach children acrobat shapes and movements that reflect the landscape. Another act, StrongWomen Science, features Maria Corcoran, an environmental scientist, and Aoife Raleigh, an engineer. They promote scientific inquiry while juggling liquid, eating fire, balancing chairs on chins and performing acrobatics with hula hoops. Circus tricks and problem-solving logic engage a similar part of the brain, said Raleigh, who learned circus skills while working as an engineer in Belfast. “Experimentation, creativity and embracing failure are the key ideas in both. Even though the result is very different, the methods to get there are often the same.” To learn a new trick one must think analytically, said Raleigh. “I try the movement and, if it doesn’t work, I have to figure out what the problem might be, make adjustments and try it again. This process continues tens or even thousands of times until you perfect the technique.” Circus can visually represent scientific principles, said Corcoran, who joined a juggling society while studying environmental science at university. “Circus is about pushing the boundaries of what is possible with the body and objects.
Science is about pushing the boundaries of our knowledge and understanding of how the world works.” In addition to climate awareness, both performers hope to inspire girls to study science, technology, engineering and maths. “We need to inform young girls about the uncredited achievements of female scientists in the past and ensure that women never get left behind again,” said Corcoran. A seal rescue charity and coastal conservation group will launch a recruitment drive during the festival. Instead of feeling guilty or helpless about using plastic or driving cars, people who attend the festival should come away enthused and inspired, said Birkett. “Everybody gets circus. It portrays difficult and challenging issues in the most accessible way. Climate change, plastic pollution, the state of the ocean – they won’t go away, and neither will circus.”
Environmental Science
Study finds sulfate pollution impacts Texas gulf coast air Sitting on the beach, taking in the breeze, you might think the sea air is better for you than its inland equivalent. But researchers at the University of Houston have found that the air along the Gulf of Mexico coast in Texas can be more polluted than inland air because of the highly processed, acidic chemical components of its particulate matter, the microscopic solid or liquid particles suspended in the air. Shan Zhou, research assistant professor of atmospheric chemistry, led the new study published in Environmental Science & Technology. "We found that ocean air was hazier and more polluted than the land breeze. The next question we had was why is it not clean? We concluded the microscopic particles known as particulate matter or aerosols from the Gulf of Mexico contain high concentrations of sulfate, which originates from anthropogenic (human-generated) shipping emissions. The emissions likely pump a lot of chemicals over the gulf and with a strong sea breeze, it brings that polluted air to land," said Zhou, a faculty member in UH's College of Natural Sciences and Mathematics who is the first and corresponding author of the study. In addition to shipping emissions, the team points to chemical processing as an additional cause of particulate matter pollution. They report that meteorological conditions, including high sunlight intensity, high temperature and enhanced air humidity, provided a favorable environment for chemical reactions that formed secondary aerosols, which can be harmful to your lungs and heart, according to the EPA. The research team used high-tech instrumentation that determines the chemical composition of the air in real time to conduct their study. "I supervised the team of graduate students and researchers who deployed the aerosol mass spectrometer to our field site in Corpus Christi," said co-corresponding author Rob Griffin, engineering professor at Roger Williams University. "The mass spectrometer tells us the mass of tiny particles in the atmosphere and gives us an indication of their constituents." The team spent several weeks collecting atmospheric data from Corpus Christi and San Antonio. The Corpus Christi metropolitan area in particular is understudied in the atmospheric chemistry literature, compared to the Houston-Galveston and Dallas-Fort Worth regions. Further, only a handful of studies have investigated the composition and properties of particulate matter that originates in the Gulf of Mexico and travels into the region. A perfect combination Sulfate was the most abundant particulate component observed in the data from the Texas Gulf Coast air. Although this is similar to the atmospheric aerosol conditions of other marine locations globally, what set apart the Texas Gulf Coast was that its pollutant concentrations were much larger, by a factor of 3 to 70. The authors write that this observation points to a strong anthropogenic influence on sulfur sources over the Gulf of Mexico. The Gulf is one of the busiest maritime transport regions, with 11 of the 15 busiest water ports in the U.S. located along its shores, one of them being the Port of Corpus Christi. Large commercial vessels typically burn fuel oil, and this can produce sulfur oxides. The team found anthropogenic emissions over the Gulf of Mexico explain 78% of the total sulfate in the air from the Gulf.
Humidity, combined with pollutant chemicals, created the perfect conditions for the conversion of sulfur dioxide to sulfates; the latter are particularly effective in degrading visibility by scattering light before it reaches an observer. Water in this case acts as a solvent and catalyst that promotes the chemical reactions and produces sulfate pollution rapidly. "The main difference I noticed in the coastal air pollution versus continental air pollution is that the coast pollution was very acidic," Zhou said. "Acidic means it's worse for your health compared to non-acidic particles." Looking ahead, Zhou aims to further investigate air pollutants over the sea. "Almost the entire state of Texas is potentially under oceanic flow throughout the year. How far can this sulfur pollution come inland and how frequently can this happen?" She hopes this study helps other researchers in their understanding of Gulf coast pollutants across the rest of the state. More information: Shan Zhou et al, Marine Submicron Aerosols from the Gulf of Mexico: Polluted and Acidic with Rapid Production of Sulfate and Organosulfates, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c05469 Journal information: Environmental Science & Technology Provided by University of Houston
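The role the researchers assign to water (solvent and catalyst for turning sulfur dioxide into sulfate) follows the standard aqueous-phase pathway from atmospheric chemistry textbooks. The scheme below is that generic picture, with hydrogen peroxide shown as one common oxidant; it is not a reaction list taken from the paper itself:

```latex
\mathrm{SO_2(g)} \;\xrightarrow{\;\mathrm{H_2O}\;}\; \mathrm{SO_2\cdot H_2O} \;\rightleftharpoons\; \mathrm{HSO_3^-} + \mathrm{H^+}
```

```latex
\mathrm{HSO_3^-} + \mathrm{H_2O_2} \;\longrightarrow\; \mathrm{SO_4^{2-}} + \mathrm{H^+} + \mathrm{H_2O}
```

The protons released along the way are also consistent with the strong acidity Zhou's team measured in the coastal particles.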
Environmental Science
Researchers engineer nanoparticles using ion irradiation to advance clean energy, fuel conversion MIT researchers and colleagues have demonstrated a way to precisely control the size, composition, and other properties of nanoparticles key to the reactions involved in a variety of clean energy and environmental technologies. They did so by leveraging ion irradiation, a technique in which beams of charged particles bombard a material. They went on to show that nanoparticles created this way have superior performance over their conventionally made counterparts. "The materials we have worked on could advance several technologies, from fuel cells to generate CO2-free electricity to the production of clean hydrogen feedstocks for the chemical industry [through electrolysis cells]," says Bilge Yildiz, leader of the work and a professor in MIT's Department of Nuclear Science and Engineering and Department of Materials Science and Engineering. Critical catalyst Fuel and electrolysis cells both involve electrochemical reactions through three principal parts: two electrodes (a cathode and anode) separated by an electrolyte. The difference between the two cells is that the reactions involved run in reverse. The electrodes are coated with catalysts, or materials that make the reactions involved go faster. But a critical catalyst made of metal-oxide materials has been limited by challenges including low durability. "The metal catalyst particles coarsen at high temperatures, and you lose surface area and activity as a result," says Yildiz, who is also affiliated with the Materials Research Laboratory and is an author of a paper on the work published in the journal Energy & Environmental Science. Enter metal exsolution, which involves precipitating metal nanoparticles out of a host oxide onto the surface of the electrode. The particles embed themselves into the electrode, "and that anchoring makes them more stable," says Yildiz. As a result, exsolution has "led to remarkable progress in clean energy conversion and energy-efficient computing devices," the researchers write in their paper. However, controlling the precise properties of the resulting nanoparticles has been difficult. "We know that exsolution can give us stable and active nanoparticles, but the challenging part is really to control it. The novelty of this work is that we've found a tool—ion irradiation—that can give us that control," says Jiayue Wang, first author of the paper. Wang, who conducted the work while earning his MIT Ph.D. in the Department of Nuclear Science and Engineering, is now a postdoctoral scholar at Stanford. Sossina Haile is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern University. Says Haile, who was not involved in the current work, "Metallic nanoparticles serve as catalysts in a whole host of reactions, including the important reaction of splitting water to generate hydrogen for energy storage. In this work, Yildiz and colleagues have created an ingenious method for controlling the way that nanoparticles form." Haile continues, "the community has shown that exsolution results in structurally stable nanoparticles, but the process is not easy to control, so one doesn't necessarily get the optimal number and size of particles. Using ion irradiation, this group was able to precisely control the features of the nanoparticles, resulting in excellent catalytic activity for water splitting." 
What they did The researchers found that aiming a beam of ions at the electrode while simultaneously exsolving metal nanoparticles onto the electrode's surface allowed them to control several properties of the resulting nanoparticles. "Through ion-matter interactions, we have successfully engineered the size, composition, density, and location of the exsolved nanoparticles," the team writes in Energy & Environmental Science. For example, they could make the particles much smaller—down to two billionths of a meter in diameter—than those made using conventional thermal exsolution methods alone. Further, they were able to change the composition of the nanoparticles by irradiating with specific elements. They demonstrated this with a beam of nickel ions that implanted nickel into the exsolved metal nanoparticle. As a result, they demonstrated a direct and convenient way to engineer the composition of exsolved nanoparticles. "We want to have multi-element nanoparticles, or alloys, because they usually have higher catalytic activity," Yildiz says. "With our approach the exsolution target does not have to be dependent on the substrate oxide itself." Irradiation opens the door to many more compositions. "We can pretty much choose any oxide and any ion that we can irradiate with and exsolve that," says Yildiz. The team also found that ion irradiation forms defects in the electrode itself. And these defects provide additional nucleation sites, or places for the exsolved nanoparticles to grow from, increasing the density of the resulting nanoparticles. Irradiation could also allow extreme spatial control over the nanoparticles. "Because you can focus the ion beam, you can imagine that you could 'write' with it to form specific nanostructures," says Wang. "We did a preliminary demonstration [of that], but we believe it has potential to realize well-controlled micro- and nano-structures." The team also showed that the nanoparticles they created with ion irradiation had superior catalytic activity over those created by conventional thermal exsolution alone. More information: Jiayue Wang et al, Ion irradiation to control size, composition and dispersion of metal nanoparticle exsolution, Energy & Environmental Science (2023). DOI: 10.1039/D3EE02448B
Environmental Science
Fungal pathogens that cause dieback in grape, avocado, citrus, nut and other crops have found a new host and are infecting conifer trees, causing Pine Ghost Canker in urban forest areas of Southern California. The canker can be deadly to trees.

Scientists from the University of California, Davis, first spotted evidence that the pathogens had moved to pines during a routine examination of trees in Orange County in 2018. Over four years, they found that more than 30 mature pines had been infected in an area of nearly 100 acres, according to a report in the journal Plant Disease.

Akif Eskalen, a professor of Cooperative Extension in the Department of Plant Pathology at UC Davis, suspects drought and other stress conditions brought on by climate change weakened the trees, making them more susceptible to new threats. "We have been seeing this on pine trees for the last several years," he said. "Our common crop pathogens are finding new hosts."

Pine Ghost Canker, caused by the fungal pathogens Neofusicoccum mediterraneum and Neofusicoccum parvum, usually infects the lower part of a tree's canopy, killing branches before moving on to the trunk. This dieback can in some cases be deadly.

Points of entry

The pathogens infect a tree by entering through wounds caused either by insects, such as red-haired pine bark beetles, or by pruning, meaning trees in managed or landscaped areas could be at risk. Another route is via tiny natural openings known as lenticels that the fungi can make their way through, said Marcelo Bustamante, a Ph.D. candidate in Eskalen's lab who is first author on the paper.

Spores from the fungi can disperse, and the more prevalent they are, the greater the chance of transmission. Rain, irrigation water and humidity from fog can create the right conditions for the spores to spread, he said.

"The detection of these pathogens in urban forests raises concerns of potential spillover events to other forest and agricultural hosts in Southern California," Bustamante and others wrote in the report.

Dead branches can indicate a canker. Detecting the fungi is not an emergency, but "people should keep an eye on their plants when they see abnormalities," Eskalen said. Cankers are localized areas on stems, branches and tree trunks that are usually dead, discolored and sunken. On bark, the spores can look like strings of discolored dots.

- Keep your trees healthy: proper irrigation and maintenance will keep trees strong.
- Prune dead branches to reduce sources of infection.
- Avoid unnecessary pruning; perform structural pruning only.

Karina Elfar, Molly Arreguin, Carissa Chiang, Samuel Wells and Karen Alarcon from the Department of Plant Pathology contributed to the paper, as did experts from the Disneyland Resort Horticulture Department, the State University of New York's College of Environmental Science and Forestry, UC Irvine and UCLA.
Environmental Science
Global warming, air pollution and energy insecurity are three of the biggest problems facing the world today. The primary solution to all three is to transition nearly all energy for transportation, buildings and industry to electricity, then to provide that electricity from clean, renewable sources (wind, solar, geothermal and hydroelectricity) combined with storage, while simultaneously addressing non-energy emissions such as biomass burning, halogens, and methane and nitrous oxide from agriculture.

Given that 7.4 million people die and billions more become ill each year from air pollution, and that avoiding more than 1.5 degrees Celsius (2.7 Fahrenheit) of global warming above pre-industrial levels would require eliminating 80% of all emissions by 2030 and 100% by 2035-2050, the problem is overwhelming, and the solution must be implemented rapidly and effectively. The world cannot afford to spend time on solutions that do not work well or at all.

Many organizations, including the International Energy Agency and fossil-fuel companies such as Exxon-Mobil, have argued that carbon capture storage or use (CCSU) is needed to help solve the climate problem. Carbon capture is the extraction of carbon dioxide (CO2) from an exhaust stream, such as from a coal-fired power plant, a fossil gas-fired boiler, an ethanol refinery or a cement factory.

CCSU differs slightly from synthetic direct air carbon capture and storage or use (SDACCSU), the process by which CO2 is extracted by equipment from the air instead of from an exhaust stream. A third type of carbon capture is natural direct air capture, the process by which trees and other vegetation extract both CO2 and water vapor from the air to grow, expelling oxygen. Natural carbon capture faces little objection. The other types, though, face objection because they require equipment and energy while permitting the continuation of fossil fuel mining, combustion and infrastructure, as well as bioenergy land use, combustion and infrastructure.

After the CO2 is captured during CCSU and SDACCSU, it is piped either to a location where it is used for industry or stored underground. Today, 73% of captured CO2 is used for enhanced oil recovery, where the CO2 binds with oil, making it less dense so that the oil floats to the surface faster. During that process, about 40% of the captured CO2 is released back to the air. The remaining CO2 is either used for other industrial applications (such as electro-fuels that displace gasoline, diesel or jet fuel) or stored underground.

The Biden administration has poured billions of dollars into carbon capture and direct air capture through the Infrastructure Investment and Jobs Act and the Inflation Reduction Act, with the hope that it will help solve the climate problem. Universities worldwide are also investing billions of dollars in researching carbon capture.

The Inflation Reduction Act incentivizes CO2 capture from all possible sources. In particular, it has incentivized proposals to capture CO2 from the fermentation process of up to 34 ethanol refineries in the upper Midwest United States and pipe the CO2 to an underground storage facility in North Dakota. The CO2 from fermentation is extremely pure, so no energy is needed to separate it from the rest of the exhaust, unlike with carbon capture from a coal plant or with direct air capture. However, energy is still needed to compress the CO2 for transport through pipes.

Despite the incentive provided by the U.S.
government and the proposals submitted to build such carbon capture and pipe infrastructure for ethanol, no study had evaluated whether the infrastructure would even reduce CO2, or how it would affect consumer cost, air pollution or land requirements. In a recent study published in Environmental Science and Technology, I carried out such an evaluation.

In this study, I first evaluated the CO2 emission savings and cost of a proposal, submitted by Summit Carbon Solutions, to add carbon capture equipment to each of 34 refineries in Iowa, Nebraska, South Dakota, Minnesota and North Dakota, then to build 2,000 miles of pipes connecting the refineries. The ethanol from the refineries is currently blended with gasoline for use in flex-fuel vehicles, which run on either gasoline or ethanol-gasoline blends.

I then compared this "ethanol plan" with using the same money to purchase wind farms to power battery-electric vehicles. To do this, I compared the use of a 2023 Ford F-150 four-wheel drive, eight-cylinder flex-fuel vehicle running on a blend of E85 (85% ethanol and 15% gasoline) with a 2023 Ford F-150 four-wheel drive extended-range battery-electric vehicle.

Results suggest that, compared with using ethanol with carbon capture and pipes to power F-150 flex-fuel vehicles (the ethanol plan), this wind plan (using wind turbines to power F-150 battery-electric vehicles) may reduce 2.4 to four times the CO2 and may save drivers in these five states $40 billion to $66 billion (USD 2023) over 30 years, even though each battery-electric vehicle initially costs $21,700 more than each flex-fuel vehicle (as it does today). The wind plan may also require only 1/400,000 of the land footprint and 1/10 to 1/20 the spacing area, and it may decrease air pollution, compared with the ethanol plan.

The large CO2 and cost savings of the wind plan result from the fact that the F-150 battery-electric vehicle uses only about one-fourth the energy of the flex-fuel vehicle to go the same distance. Despite the $21,700 higher up-front cost of the electric vehicle, the fuel cost savings, projected over 30 years, are so large that they save consumers billions of dollars. In addition, the wind plan eliminates almost all CO2, whereas the ethanol plan still results in significant CO2 emissions. Even building wind farms to replace coal-fired electricity generation (instead of powering battery-electric vehicles) may avoid 1.5 to 2.5 times the CO2 of the ethanol plan.

Ethanol with carbon capture thus appears to be an opportunity cost that may damage climate and air quality, occupy land and saddle consumers with high fuel costs for decades. Instead of incentivizing a CO2 reduction, the Inflation Reduction Act and the Infrastructure Investment and Jobs Act, through their funding of carbon capture, actually incentivize net increases in CO2, air pollution, land use and consumer costs.
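The per-vehicle cost argument above is simple enough to check with back-of-envelope arithmetic. The sketch below is only an illustration, not a reproduction of the study's model: the $21,700 price gap and the roughly four-to-one energy ratio come from the article, while the annual mileage and the per-mile E85 cost are assumed values, and the per-mile electricity cost is crudely obtained by scaling the fuel cost down by the energy ratio.

```python
# Illustrative back-of-envelope check of the per-vehicle cost argument.
# Only PRICE_GAP and the ~4x energy ratio come from the article; the
# mileage and fuel-price figures are assumptions for illustration.
MILES_PER_YEAR = 12_000                    # assumed annual mileage
YEARS = 30                                 # horizon used in the study
PRICE_GAP = 21_700                         # extra up-front BEV cost (USD, from the article)

E85_COST_PER_MILE = 0.20                   # assumed flex-fuel cost (USD/mile)
EV_COST_PER_MILE = E85_COST_PER_MILE / 4   # BEV uses ~1/4 the energy per mile

flex_fuel_total = E85_COST_PER_MILE * MILES_PER_YEAR * YEARS
ev_total = PRICE_GAP + EV_COST_PER_MILE * MILES_PER_YEAR * YEARS

print(f"Flex-fuel fuel cost over {YEARS} years: ${flex_fuel_total:,.0f}")            # $72,000
print(f"BEV price gap plus charging cost:       ${ev_total:,.0f}")                   # $39,700
print(f"Per-vehicle savings with the BEV:       ${flex_fuel_total - ev_total:,.0f}") # $32,300
```

Even under these deliberately rough assumptions, three decades of fuel savings overwhelm the up-front price gap, which is the core of the study's consumer-cost claim.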
The study concludes that it is far better to close the ethanol refineries and stop selling flex-fuel vehicles, and instead to sell more battery-electric vehicles powered by wind (or solar) electricity. As such, incentivizing carbon capture is an opportunity cost.

The same result applies to all other carbon capture and direct air capture applications, as other studies have also illustrated. The reason is that all carbon capture and non-natural direct air capture require energy and equipment and never reduce air pollution, mining or infrastructure. Even using renewable energy to power CCSU or SDACCSU prevents that renewable energy from replacing a fossil or bioenergy source of combustion, thereby preventing the renewables from reducing more CO2 in addition to reducing air pollution, mining and infrastructure, which CCSU and SDACCSU never do.

Mark Z. Jacobson is a professor of civil and environmental engineering at Stanford University and the author of the book "No Miracles Needed: How Today's Technology Can Save Our Climate and Clean Our Air" and the recent study, "Should Transportation Be Transitioned to Ethanol with Carbon Capture and Pipelines or Electricity? A Case Study."
Environmental Science