KYBURZ, Calif. -- On a steep mountainside where walls of flames torched the forest on their way toward Lake Tahoe in 2021, blackened trees stand in silhouette against a gray sky.

“If you can find a live tree, point to it,” Hugh Safford, an environmental science and policy researcher at the University of California, Davis, said while touring damage from the Caldor Fire, one of the past decade's many massive blazes.

Dead pines, firs and cedars stretch as far as the eye can see. Fire burned so hot that soil was still barren in places more than a year later. Granite boulders were charred and flaked by the inferno. Long, narrow indentations marked the graves of fallen logs that vanished in smoke.

Damage in this area of Eldorado National Forest could be permanent — part of a troubling pattern that threatens a defining characteristic of the Sierra Nevada range John Muir once called a “waving sea of evergreens.”

Forest like this is disappearing as increasingly intense fires alter landscapes around the planet, threatening wildlife, jeopardizing efforts to capture climate-warming carbon and harming water supplies, according to scientific studies.

A combination of factors is to blame in the U.S. West: a century of firefighting, elimination of Indigenous burning, logging of large fire-resistant trees, and other management practices that allowed small trees, undergrowth and deadwood to choke forests. Drought has killed hundreds of millions of conifers or made them susceptible to disease and pests, and more likely to go up in flames. And a changing climate has brought more intense, larger and less predictable fires.

“What it’s coming down to is jungles of fuels in forest lands,” Safford said. “You get a big head of steam going behind the fire there, it can burn forever and ever and ever.”

Despite relatively mild wildfire seasons the past two years, California saw 12 of its 20 largest wildfires — including the top eight — and 13 of its most destructive in the previous five years. Record rain and snowfall this year mostly ended a three-year drought, but explosive vegetation growth could feed future fires.

California has lost more than 1,760 square miles (4,560 square kilometers) — nearly 7% — of its tree cover since 1985, a recent study found. While forest cover increased in the 1990s, it declined rapidly after 2000 because of larger and more frequent fires, according to the study in the American Geophysical Union journal AGU Advances.

A study of the southern Sierra Nevada — home to Yosemite, Sequoia and Kings Canyon national parks — found nearly a third of conifer forest had transitioned to other vegetation as a result of fire, drought or bark beetles in the past decade.

“We're losing them at a rate that is something that we can’t sustain,” said Brandon Collins, co-author of that report in the journal Ecological Applications and an adjunct forestry professor at the University of California, Berkeley. “If you play it out (over) the next 20 to 30 years at the same rate, it would be gone.”

Some environmentalists, like Chad Hanson of the John Muir Project, sponsored by the nonprofit Earth Island Institute, say there's a “myth of catastrophic wildfire” promoted to support logging efforts — and Hanson has often sued to block plans to remove dead trees or thin forests. He said seedlings are rising from the ashes in high-severity patches of fire and the dead wood provides habitat for imperiled spotted owls, Pacific fishers and rare woodpeckers.
His research found forests always had dense patches of trees and some severe fires, Hanson said, contending that the increasingly large ones result from weather and climate change, made worse by logging practices. “If everything people are hearing was true there would be a lot more reason for concern,” he said. “But the public is being gaslighted.”

Others, however, are concerned that failure to properly manage forests can result in intense fire that harms wildlife habitat, the ability to store climate-warming carbon in trees and the quality of Sierra snowmelt, which provides about 60% of the water for farms and cities. Burn scars are more prone to flooding and erosion, and runoff becomes tainted with ash and sediment.

“Areas where mixed conifer burned at high severity, those are all areas that are vulnerable to total forest loss,” said Christy Brigham, chief of resources management and science at Sequoia & Kings Canyon National Parks. “We have no idea what that means for wildlife habitat, for water cycling, for carbon storage. And that’s not even getting into the things we love about forests.”

After wildfires in 2020 and 2021 wiped out up to about a fifth of all giant sequoias — once considered almost fireproof — the National Park Service last week embarked on a controversial project to help the mighty trees recover with its largest planting of seedlings in a single grove.

CHANGING FOREST LANDSCAPE

Many researchers say the canopy of the Sierra Nevada has changed dramatically since heavy Gold Rush logging. Before the mid-1800s, fire sparked by lightning or set by Indigenous people burned millions of acres a year. It kept undergrowth in check, allowing low-intensity flames to creep along the forest floor and remove smaller trees competing with big ones.

“The inviting openness of the Sierra woods is one of their most distinguishing characteristics,” John Muir said, describing how a horse rider could easily pass through the trees.

But after settlers drove out Native Americans and logged forests, fighting fires became the mission, to protect the valuable trees — and, increasingly, homes built deeper into wildlands. In 1935, the U.S. Forest Service established a policy to knock down any fire by 10 a.m. the next morning.

That has allowed forests to become four to seven times more densely wooded than they once were, Safford said. While many larger, fire-resilient trees like ponderosa and Jeffrey pines were logged for lumber, smaller trees that are not so fire resistant have thrived. They compete for water, and their low branches allow fire to climb into the canopy of taller trees, fueling devastating crown fires.

“John Muir would not recognize any of this,” Safford said, gesturing at a stand of tightly packed dead trees during the tour last October. “He wouldn’t even know where he was.”

A TINDERBOX TAKES OFF

The Caldor Fire, which destroyed 1,000 structures while burning across the Sierra Crest and into the Tahoe basin, torched forest that hadn't seen flames in over a century, Safford said. Years of drought fueled by a warmer climate had made it a tinderbox.

Swaths of Eldorado National Forest burned so intensely that mature pines went up in flames and their seeds were killed. Unlike species such as giant sequoias and lodgepole pines, whose cones release seeds in response to fire, the dominant pines of the Sierra can't reproduce if their seeds burn. Manzanita and mountain whitethorn — chaparral typical at lower elevations in California — take root in the ashes and can come to dominate the forest.
Studies have found that repeated fires or other disruptions can provoke such shifts in ecosystems. A March study of 334 Western wildfires found that increasing fire severity and drier conditions after fire made the dominant conifer species less likely to regenerate, and it concluded the problem is apt to worsen with climate change.

Along U.S. Highway 50, where the Caldor Fire had continued burning out of control toward Lake Tahoe, Safford parked his SUV and scrambled up a rocky knoll to point out a slope barren of trees. Forest there burned in 1981 and was replaced with chaparral. The Caldor blaze, allegedly caused by a reckless father and son, is likely to reinforce that condition, Safford said. Whether the severely burned area recovers will depend largely on whether another fire tears through in coming years, he said.

TOOLS FOR TREATING FORESTS

To tackle the problem of huge wildfires, the federal government, which owns nearly 60% of California’s 51,560 square miles (about 134,000 square kilometers) of forest, agreed with the state in 2020 to jointly reduce fuels on 1,560 square miles (4,040 square kilometers) a year by 2025. While that is only a fraction of the land needing treatment, it's considered a promising development after years of inaction, though not without controversy.

Fire scientists advocate more deliberate burning at low-to-moderate severity to clear vegetation that makes forests susceptible to big fires. But the Forest Service has historically been risk averse, said Safford, the agency's regional ecologist for two decades before retiring in 2021. Rather than chance that a fire could blow up, officials have generally snuffed flames before they could deliver the benefits of lower-intensity fire.

Weeks before the Caldor Fire, the Forest Service had been monitoring a lightning fire south of Lake Tahoe while dealing with more pressing ones. But when the small fire took off, causing millions of dollars in damage, politicians blasted the agency for not doing more. Officials quickly said they would no longer let some naturally ignited fires burn that season.

With more than $4 billion in funding from the Bipartisan Infrastructure Law and the Inflation Reduction Act, the Forest Service plans to ramp up forest thinning in places where the wildfire threat to communities and infrastructure is most immediate. That will include cutting smaller trees, as well as setting intentional fires to clear accumulated forest litter.

BATTLE LINES OVER THINNING

Last fall, when Safford led two graduate students up a rutted fire road through charred forest, they came upon a patch of life where large pines and cedars towered overhead and seedlings sprouted. A “nirvana” is what Safford called it. Smaller fire-intolerant trees had been harvested and other vegetation removed before the fire. The space between the trees allowed the fire to creep along the ground, only charring some trunks.

A coalition of Sierra-based conservation groups wrote congressional leaders in 2021 urging more federal funding for fire resilience. Their letter cited “broad consensus among fire scientists, land managers, firefighters” to increase thinning and prescribed fire. Susan Britting, executive director of one of the groups, Sierra Forest Legacy, acknowledged that any cutting triggers skepticism because loggers historically took the largest, most marketable trees. But she said thinning trees up to a certain diameter is acceptable, though she prefers prescribed burning.
“In my experience, things like logging, tree removal, even reforestation, those things happen,” Britting said. “The prescribed fire that needs to happen ... just gets delayed and punted and not prioritized.”

The goal of prescribed burns is illustrated by a large green island on a fire severity map of the nearly 350-square-mile (906-square-kilometer) Caldor blaze; a sketch of how such maps are derived appears at the end of this article. The green area, representing low fire severity, corresponded to where a fire was set among older trees in 2019. The chance of a deliberate burn escaping its perimeter — as happened last year with the largest fire in New Mexico's history — remains a big challenge to the strategy.

While managed fire and prescribed burns are widely supported by scientists and environmental groups, thinning is controversial and often faces court challenges. In a 2020 letter to Congress opposing logging, the John Muir Project's Hanson and more than 200 climate and forest scientists said some thinning could reduce fire intensity, but that those operations often take larger trees to make the work economically worthwhile.

Safford — now chief scientist at Vibrant Planet, an environmental public benefit corporation — acknowledged larger trees have been logged in the past but said that's not now envisioned in thinning projects aimed at making forests healthier.

Even with chainsaws, we won't be able to cut our way out of the problem, he said. Two-thirds of the rugged Sierra is inaccessible or off-limits to logging, so fire will have to do much of the work. But there's a backlash against fire as a management tool. Homeowners are anxious that prescribed fires will jump their perimeters and destroy houses. Similar fears lead fire agencies to tame moderate fires that could clear forest floors.

“It’s the classic wicked problem where any solution you derive has huge implications for other sides of society and the way people want things to be,” Safford said. “So I’m afraid what’s going to happen is at some point we’ll burn all of our forests.”
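Severity maps like the one described above are typically derived from pre- and post-fire satellite imagery using the differenced Normalized Burn Ratio (dNBR). The following is a minimal Python sketch of that calculation; the class thresholds are illustrative values in the spirit of commonly used burn-severity schemes, not the ones behind the Caldor map.

# Sketch of a burn-severity classification for one pixel.
# NBR = (NIR - SWIR) / (NIR + SWIR); dNBR is pre-fire NBR minus
# post-fire NBR, conventionally scaled by 1000. Thresholds are
# illustrative; real mapping efforts calibrate them per fire.
def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

def severity_class(dnbr_scaled):
    if dnbr_scaled < 100:
        return "unburned/low"
    if dnbr_scaled < 270:
        return "low"
    if dnbr_scaled < 440:
        return "moderate"
    return "high"

# Example pixel: healthy conifer reflectance before the fire,
# charred surface after (hypothetical band values).
pre = nbr(nir=0.45, swir=0.15)    # 0.50
post = nbr(nir=0.20, swir=0.30)   # -0.20
print(severity_class((pre - post) * 1000))  # -> "high"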
Environmental Science
Germicidal UV Lights May Produce Indoor Air Pollutants

In the ongoing battle against infectious diseases like Covid-19 and the flu, measures like mask-wearing and isolation have taken center stage. But there’s another crucial tool in the arsenal: germicidal ultraviolet (UV) lights, which help reduce airborne pathogens. However, new research from the Massachusetts Institute of Technology (MIT) suggests that these UV lights, while effective at killing germs, may also generate potentially harmful compounds in indoor spaces. The study underscores the importance of using these lights in combination with proper ventilation.

Germicidal UV Lights: A Double-Edged Sword

Conventional UV sources, used for disinfection purposes, have long been known for their harmful effects on human eyes and skin. Newer UV lights that emit at a wavelength of 222 nanometers, however, are considered safe for use around people. But according to MIT researchers, the story doesn’t end there. The study, published in the journal Environmental Science & Technology, reveals that these new UV lights can trigger chemical reactions that create unwanted compounds in indoor environments. While the researchers don’t advocate avoiding these UV lights altogether, they do emphasize the need to match UV light strength to the specific indoor situation and to ensure proper ventilation.

Research Team and Their Unexpected Focus

The MIT study, led by recent postdoc Victoria Barber alongside doctoral student Matthew Goss and Professor Jesse Kroll, involved collaboration with experts from MIT, Aerodyne Research, and Harvard University. Kroll’s team normally focuses on outdoor air pollution, but the Covid-19 pandemic led them to explore indoor air quality.

Indoor spaces typically experience little photochemical reactivity, unlike the outdoors, where sunlight is constantly present. However, devices that use chemical methods or UV light to clean indoor air can introduce a sudden influx of oxidation reactions indoors, with cascading effects, according to Kroll. The initial interaction of UV light with oxygen in the air forms ozone, which itself poses health risks. The ozone then sets the stage for further oxidation reactions: UV light can react with ozone to produce OH radicals, which are potent oxidizers. Barber notes that when these oxidants interact with the volatile organic compounds present in most indoor environments, they generate oxidized volatile organic compounds that can be more harmful to human health than their unoxidized counterparts. The process also gives rise to secondary organic aerosols, which can be harmful to breathe. These compounds are a particular problem indoors, where people spend much of their time and where lower ventilation rates can let them accumulate to higher levels.

Testing and Key Findings on Germicidal UV Lights

Given their extensive experience studying such processes in outdoor air, the MIT team was well equipped to directly observe these pollution-forming processes indoors. They conducted a series of experiments, exposing clean air to UV lights in a controlled environment and then introducing one organic compound at a time to observe the effects on the compounds produced. While further research is needed to determine how these findings apply to real indoor settings, the formation of secondary products was evident.
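The buildup-versus-ventilation dynamic the researchers describe can be pictured with a simple well-mixed box model: byproducts form at some production rate and are removed by ventilation and surface deposition. The sketch below is illustrative only; the production rate and loss constants are hypothetical placeholders, not values from the MIT study.

def indoor_concentration(p_ugm3_h, ach, k_dep_per_h, hours, dt=0.01):
    """Integrate dC/dt = P - (ACH + k_dep) * C with forward Euler.
    P is a hypothetical byproduct production rate (ug/m3 per hour);
    ACH is the ventilation rate in air changes per hour."""
    c = 0.0
    for _ in range(int(hours / dt)):
        c += (p_ugm3_h - (ach + k_dep_per_h) * c) * dt
    return c

# Same hypothetical UV-driven source, two ventilation rates:
for ach in (0.5, 3.0):
    c = indoor_concentration(p_ugm3_h=10.0, ach=ach, k_dep_per_h=0.2, hours=24)
    print(f"ACH={ach}: ~{c:.1f} ug/m3 after 24 h")

The steady-state level is roughly the production rate divided by the total loss rate, which is why the same UV source can yield several-fold lower byproduct levels in a well-ventilated room.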
Not a Substitute for Ventilation

The devices using the new UV wavelength, known as KrCl excimer lamps, are still relatively rare and costly, used primarily in hospitals, restaurants and commercial settings rather than homes. Despite some suggestions that these devices might replace the need for ventilation, the MIT study asserts otherwise. The findings indicate that the UV lights should not replace ventilation but should instead complement it. Kroll suggests a balanced approach, in which the health benefits of UV light for pathogen deactivation are achieved without a significant buildup of harmful compounds, thanks to effective ventilation.

The Road Ahead

The results of the MIT study are based on highly controlled laboratory experiments using air in a contained environment. While the findings provide valuable insights into the chemistry occurring under UV radiation, the next step in the research is to conduct measurements in real-world indoor spaces.

Final Thoughts on Germicidal UV Lights

As Dustin Poppendieck, a research scientist at the National Institute of Standards and Technology, points out, these 222-nanometer devices are being deployed in various indoor spaces without a full understanding of their potential benefits and harms. This study forms the foundation for a comprehensive evaluation of the health impacts associated with these devices — an evaluation that should be completed before relying on the technology to prevent future pandemics.
Environmental Science
For the first time, researchers from the University of Toronto, Indiana University, and the University of Notre Dame have discovered harmful PFAS chemicals in Canadian fast-food packaging. These chemicals, known as per- and polyfluoroalkyl substances, were found in water- and grease-repellent paper alternatives to plastic.

The study, published in the journal Environmental Science & Technology Letters, reveals that food packaging can expose people to PFAS — chemicals associated with severe health impacts such as elevated cancer risk and harm to the immune system — through contamination of the food we consume. Additionally, when disposed of, packaging introduces PFAS into the environment, where these persistent substances never degrade. In response to the health and ecological hazards, 11 U.S. states have prohibited PFAS in the majority of food packaging, and two leading restaurant chains have pledged to eliminate PFAS from their operations by 2025.

“As Canada restricts single-use plastics in food-service ware, our research shows that what we like to think of as the better alternatives, such as paper wrappers and compostable bowls, are not so safe and ‘green’ after all. In fact, they may harm our health and the environment—from our air to our drinking water—by providing a direct route to PFAS exposure,” says Miriam Diamond, professor in the Department of Earth Sciences and School of the Environment at the University of Toronto and study co-author.

For the study, the researchers collected 42 paper-based wrappers and bowls from fast-food restaurants in Toronto and tested them for total fluorine, an indicator of PFAS. They then completed a detailed analysis of eight of those samples with high levels of total fluorine. Fibre-based moulded bowls, which are marketed as “compostable,” had PFAS levels three to 10 times higher than doughnut and pastry bags. PFAS are added to these bowls and bags as a water and grease repellent.

PFAS are a complex group of about 9,000 manufactured chemicals, few of which have been studied for their toxicity. A PFAS that is known to be toxic — 6:2 FTOH (6:2 fluorotelomer alcohol) — was the most abundant compound detected in these samples. Other PFAS commonly found in all the Canadian fast-food packaging tested can transform into this compound, adding to a consumer’s exposure to it. The researchers also detected several PFAS for the first time in food packaging, showing how difficult it is to track the presence of this large family of compounds.

Critically, the researchers found that the concentration of PFAS declined by up to 85 per cent after the products were stored for two years, contradicting claims that polymeric PFAS — a type composed of larger molecules — do not degrade and escape from products. The release of PFAS from food packaging into indoor air presents another opportunity for human exposure to these chemicals.

“The use of PFAS in food packaging is a regrettable substitution of trading one harmful option—single-use plastics—for another. We need to strengthen regulations and push for the use of fibre-based food packaging that doesn’t contain PFAS,” says Diamond.

Reference: “Per- and Polyfluoroalkyl Substances in Canadian Fast Food Packaging” by Heather Schwartz-Narbonne, Chunjie Xia, Anna Shalin, Heather D. Whitehead, Diwen Yang, Graham F. Peaslee, Zhanyun Wang, Yan Wu, Hui Peng, Arlene Blum, Marta Venier and Miriam L. Diamond, 28 March 2023, Environmental Science & Technology Letters.
DOI: 10.1021/acs.estlett.2c00926

The study was funded by Environment and Climate Change Canada, the Great Lakes Protection Initiative, the Natural Sciences and Engineering Research Council of Canada, the Green Science Policy Institute, and the European Union under the Horizon 2020 Research and Innovation Programme.
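As a quick worked example of what that reported storage decline implies: assuming simple first-order loss (an illustrative assumption, not a kinetic model claimed by the study), an 85 per cent drop over two years corresponds to the rate constant and half-life computed below.

import math

fraction_remaining = 0.15      # up to 85% decline over storage
storage_years = 2.0
k = -math.log(fraction_remaining) / storage_years   # per year
half_life = math.log(2) / k
print(f"k ~ {k:.2f}/year, half-life ~ {half_life:.2f} years")
# -> k ~ 0.95/year, half-life ~ 0.73 years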
Environmental Science
Pottery becomes water treatment device for Navajo Nation

Large parts of the Navajo Nation in the Southwest lack access to clean, drinkable water, a problem that has been growing in many parts of the U.S. in recent years. A research team led by engineers at The University of Texas at Austin is working to change that. The team has developed a new water filtration solution for members of the Navajo Nation, lining clay pots with pine tree resin collected from the Navajo Nation and incorporating tiny, silver-based particles that purify water to make it drinkable.

"Making water filtration technology cheap doesn't solve all the problems, and making it effective doesn't solve everything either," said Navid Saleh, a professor in the Fariborz Maseeh Department of Civil, Architectural and Environmental Engineering and one of the leaders on the project. "You have to think about the people you are making it for."

And that's what the researchers did. They worked closely with a third-generation potter from Arizona — Deanna Tso, who is also a co-author on the paper — to create a device that is simple for users. All they have to do is pour water through the clay pots, and the coated pottery removes bacteria and generates clean, drinkable water.

The Navajo Nation has a history of mistrust of outsiders, the researchers say, and that makes it less likely that people there would adopt a new technology made entirely by others. Using pottery, working with the community, and relying on local materials were important to the effectiveness of this project. The research appears in a new paper in the journal Environmental Science & Technology.

"Navajo pottery is at the heart of this innovation because we hoped it would bridge a trust gap," said Lewis Stetson Rowles III, now a faculty member at Georgia Southern University's Department of Civil Engineering and Construction after earning a Ph.D. from UT in 2021. "Pottery is sacred there, and using their materials and their techniques could help them get more comfortable with embracing new solutions."

Using silver particles for water filtration is not the main innovation; others have used the approach in the past. The key is controlling the release of the nanoparticles, which otherwise shortens the usable life of the filters. In addition, silver particles can react with chemicals in untreated water, such as chloride and sulfide, forming a "poison layer" that can reduce the disinfection efficacy of the silver on the clay lining. The researchers used materials abundant in the community's environment, including pine tree resin, to mitigate the uncontrolled release of silver particles during purification. The materials and construction for the pots cost less than $10, making for a potentially low-cost solution.

"This is just the beginning of trying to solve a local problem for a specific group of people," Saleh said. "But the technical breakthrough we've made can be used all over the world to help other communities."

The next step for the researchers is to grow the technology and find other materials and techniques that help communities use resources abundant in their regions to create fresh, drinkable water. The researchers are not seeking to commercialize the research, but they are eager to share it with potential partners.

More information: Lewis S.
Rowles et al, Integrating Navajo Pottery Techniques To Improve Silver Nanoparticle-Enabled Ceramic Water Filters for Disinfection, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.3c03462
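The design trade-off described above — silver disinfects, but uncontrolled release shortens filter life — can be illustrated with a toy mass-balance calculation. All numbers below are hypothetical placeholders, not measurements from the study.

def filter_life_days(silver_mg, release_ug_per_liter, liters_per_day):
    """Days until the embedded silver is exhausted at a steady release rate."""
    daily_loss_mg = release_ug_per_liter * liters_per_day / 1000.0
    return silver_mg / daily_loss_mg

# Hypothetical comparison: a resin coating that cuts the release rate
# five-fold extends service life five-fold for the same silver loading.
uncoated = filter_life_days(silver_mg=100, release_ug_per_liter=50, liters_per_day=20)
coated = filter_life_days(silver_mg=100, release_ug_per_liter=10, liters_per_day=20)
print(f"uncoated: ~{uncoated:.0f} days, resin-coated: ~{coated:.0f} days")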
Environmental Science
California aims to tap beavers, once viewed as a nuisance, to help with water issues and wildfires

For years, beavers have been treated as an annoyance for chewing down trees and shrubs and damming up streams, leading to flooding in neighborhoods and farms. But the animal is increasingly being seen as nature's helper in the midst of climate change.

California recently changed its tune and is embracing the animals, which can create lush habitats that lure species back into now-urban areas, enhance groundwater supplies and buffer against the threat of wildfires. A new policy that went into effect last month encourages landowners and agencies dealing with beaver damage to seek solutions such as putting flow devices in streams or protective wrap on trees before seeking permission from the state to kill the animals. The state is also running pilot projects to relocate beavers to places where they can be more beneficial. The aim is to preserve more beavers, along with their nature-friendly behaviors.

“There's been this major paradigm shift throughout the West where people have really transitioned from viewing beavers strictly as a nuisance species, and recognizing them for the ecological benefits that they have,” said Valerie Cook, beaver restoration program manager for California's Department of Fish and Wildlife. The program was funded by Gov. Gavin Newsom's administration last year. The push follows similar efforts in other Western states, including Washington, which has a pilot beaver relocation program, Cook said.

It marks a new chapter in Californians' lengthy history with the animals, which experts say used to be everywhere but — after years of trapping, attempts at reintroduction, and then removal under depredation permits — are found in much smaller numbers than they once were, largely in the Central Valley and the northern part of the state. It is unknown how many beavers live in California, but landowners seek hundreds of permits each year that have typically allowed them to kill the animals. According to the state's Department of Fish and Wildlife, the beaver population in North America used to range between 100 million and 200 million but now totals between 10 million and 15 million.

Kate Lundquist, director of the WATER Institute at the Occidental Arts & Ecology Center, said she expects California's changes will lead to fewer beavers killed in the state and a growth in wetland spaces. She believes the past three years of drought and devastating wildfires contributed to the state's shift on beavers. “There has been increased motivation to identify and fund the implementation of nature-based climate smart solutions,” she said. “Beaver restoration is just that.”

Beavers live in family units and quickly build dams on streams, creating ponds. The pools help slow the flow of water, replenishing groundwater supplies, and can also stall the spread of wildfires — a critical issue for a state plagued by fires in recent years, said Emily Fairfax, professor of environmental science and management at California State University, Channel Islands. “You talk to anyone who has lived near beaver ponds. They’ll tell you: These things don’t burn,” said Fairfax, who has researched beavers and the ponds they build.

The animals are not a protected species but help create habitat that is critical for others, such as the coho salmon, which is listed under the Endangered Species Act.
Young salmon grow and thrive in beaver ponds before heading to the ocean, which gives them a better shot at survival, said Tom Wheeler, executive director of the Environmental Protection Information Center, which has long pushed for California to resolve problems with beavers without killing them. Officials at the California Farm Bureau said they were studying the change and have not yet taken a position on it.

California will continue to issue depredation permits as needed, but the state wants people to try other solutions before resorting to killing the animals, officials said. Those could include wrapping trees with wire mesh or using flow devices on streams to control beaver pond levels and prevent flooding. In some cases, it may involve relocating beavers to places that want them.

Vicky Monroe, statewide conflict programs coordinator for California’s Department of Fish and Wildlife, said her office has long received requests from groups that want beavers, but until recently the state didn’t have a mechanism to legally move them. California has planned two pilot relocation projects, including one to bring beavers back to the Tule River. Kenneth McDarment, a councilmember for the Tule River Indian Tribe, said the tribe started seeking ways to reintroduce beavers nearly a decade ago because of drought and hopes to see them relocated later this year. “We’re going to give these beavers a chance to do what they do naturally in a place where they’re wanted,” he said.

The state is also hoping to educate people about the benefits of beavers. Rusty Cohn, a 69-year-old retired auto parts businessman, said he knew little about the animals before he spotted chewed trees on a walk through the Northern California city of Napa, in a region better known for winemaking than for the critters. He later observed beavers building a dam on a trickling stream, converting the area into a lush pond for heron, mink and other species, and became a fan. “It was like a little magical place with an incredible amount of wildlife,” Cohn said. That was eight years ago, he said, adding that beaver sightings in that spot are becoming rarer amid increased development, but he can still find them on streams throughout Napa.
Environmental Science
Research | Open Access | Published: 11 October 2022

Jill L. Reiter, Igor Zakharevich, Cathy Proctor, Jun Ying, Robin Mesnage, Michael Antoniou & Paul D. Winchester

Environmental Health, volume 21, Article number: 95 (2022)

Abstract

Background: Prenatal glyphosate (GLY) exposure is associated with adverse reproductive outcomes in animal studies. Little is known about the effects of GLY exposure during pregnancy in the human population. This study aims to establish baseline urine GLY levels in a high-risk and racially diverse pregnancy cohort and to assess the relationship between prenatal GLY exposure and fetal development and birth outcomes.

Methods: Random first trimester urine specimens were collected from high-risk pregnant women between 2013 and 2016 as part of the Indiana Pregnancy Environmental Exposures Study (PEES). Demographic and clinical data were abstracted from mother and infant medical records. Urine glyphosate levels were measured as a proxy for GLY exposure and quantified using liquid chromatography-tandem mass spectrometry. Primary outcome variables included gestation-adjusted birth weight percentile (BWT%ile) and neonatal intensive care unit (NICU) admission. Relationships between primary outcome variables and GLY exposure were assessed using univariate and multivariate linear and logistic regression models.

Results: Urine GLY levels above the limit of detection (0.1 ng/mL) were found in 186 of 187 (99%) pregnant women. Further analyses were limited to 155 pregnant women with singleton live births. The mean age of participants was 29 years, and the majority were non-Hispanic white (70%) or non-Hispanic Black (21%). The mean (± SD) urine GLY level was 3.33 ± 1.67 ng/mL. Newborn BWT%iles were negatively related to GLY (adjusted slope ± SE = -0.032 ± 0.014, p = 0.023). Infants born to women living outside of Indiana’s large central metropolitan area were more likely to have a lower BWT%ile associated with the mother’s first trimester GLY levels (slope ± SE = -0.064 ± 0.024, p = 0.007). The adjusted odds ratio for NICU admission and maternal GLY levels was 1.16 (95% CI: 0.90, 1.67, p = 0.233).

Conclusion: GLY was found in 99% of pregnant women in this Midwestern cohort. Higher maternal GLY levels in the first trimester were associated with lower BWT%iles and higher NICU admission risk. The results warrant further investigation of the effects of GLY exposure in human pregnancies in larger population studies.

Background

Glyphosate (GLY, N-(phosphonomethyl) glycine) is the most widely used broad-spectrum herbicide in agricultural, commercial, and residential areas in the United States. GLY is the active ingredient in commercial formulations of glyphosate-based herbicides (GBHs) that are commonly used on corn, wheat, soy and cotton fields to manage invasive weeds and grasses. Roundup was the first GBH to enter the market in the 1970s; however, over 750 GBHs currently exist for use in the US and worldwide. The development of genetically engineered GLY-tolerant crops and preharvest desiccant use of GLY in whole grain crops (e.g. oats, wheat) has increased GBH agricultural use 35-fold since the 1970s, from 7.8 million pounds to 276 million pounds in 2014 [1]. GLY is also widely used commercially and residentially in areas such as gardens, public parks, school grounds, playing fields, and along roads and railway tracks.
Consequently, GLY exposure is frequent in both humans and animals.

As an herbicide, the primary mode of GLY action is inhibiting 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) of the shikimate pathway, which is involved in the biosynthesis of aromatic amino acids in plants. Inhibition of EPSPS leads to a shortage of essential aromatic amino acids, resulting in plant death [2]. The absence of the shikimate pathway in vertebrates and the rapid elimination of GLY in mammals led to the assumption that humans and other mammals should not experience significant toxicity from GLY exposure [3].

GLY, an organophosphorus compound, has been extensively studied, and many potentially toxic effects have been reported, including inhibition of acetylcholinesterase activity in animal models [4, 5]. Additionally, low doses of GLY and GBHs increase oxidative stress and inhibit mitochondrial bioenergetics in various animal models [6].

In utero exposure to GLY has been linked to birth defects, fetal loss, and decreased reproductive function in animals such as chickens, frogs, and other species, including mammals [7,8,9]. In mice, prenatal GLY exposure in the acceptable daily intake dose range resulted in lower sperm counts and decreased testosterone in male offspring [10]. In addition, GBH exposure in pregnant rats induced adverse effects in second-generation offspring, including a higher incidence of small for gestational age (SGA) fetuses and delayed growth of F2 pups [11]. Moreover, GBHs administered to rats during pregnancy and through adulthood caused significant androgen effects on genital development in both male and female offspring, and exposure to Roundup was associated with delayed onset of puberty and higher testosterone levels and free thyroxine index in females [11]. Another emerging line of evidence suggests that some of the toxic effects of prenatal GLY exposure are caused by its changing the composition of the gut microbiota in juvenile offspring [12].

GLY and GBHs also give rise to epigenetic changes, particularly alterations in DNA methylation patterns, that can be passed on to future generations. For example, transgenerational epigenetic effects were demonstrated in rats [13, 14]. In these studies, only pregnant F0 females were administered 25 mg per kg body weight per day (mg/kg bw/d) GLY intraperitoneally from day 8 to day 14 of gestation, and GLY-naïve offspring were bred on to the F3 generation. Although F0 animals showed no adverse effects from GLY treatment, there was a dramatic increase in pathologies in the F2 and, crucially, the F3 transgenerational offspring, such as obesity and disease of the kidney, ovary, and prostate, as well as higher death rates of late-stage pregnant F2 females or their pups at, or immediately after, birth. These pathologies were correlated with differential DNA methylation regions in sperm, some of which were associated with genes previously shown to be involved in the observed pathologies [13, 14]. Although these studies provided compelling evidence for GLY-induced transgenerational effects, the implications for human health remain unknown because of the relatively high GLY doses and non-physiological route of administration used.

Existing observational studies in humans are limited and frequently lack direct measurements of GLY; as a result, evidence is mixed as to whether current levels of occupational and environmental GLY exposure represent a risk to human development and reproduction.
For example, the Ontario Farm Family Health Study (OFFHS) found a significant association between preconception exposure to GBHs and increased risk of spontaneous abortion [15]. In addition, ambient GLY exposure within 2000 m of a pregnant woman’s residence in California increased the risk of autism spectrum disorder (ASD), as well as the risk of ASD with intellectual disability, in the offspring [16]. On the other hand, the Agricultural Health Study did not find an association between maternal GLY exposure and birth weight, although birth weight was not corrected for gestation in that study [17].

Our earlier prospective Indiana birth cohort study found that 93% of low-risk pregnant women had detectable urine GLY levels and that higher urine GLY levels were associated with shortened pregnancy length. No association was found with fetal growth as referenced by BWT%ile. A sub-analysis based upon self-reported data suggested that food and beverage consumption may be a potential exposure pathway, although GLY was not detected in residential tap water samples [18]. A second US cohort of low-risk pregnant women, The Infant Development and the Environment Study (TIDES), also demonstrated that higher urine GLY levels were associated with shortened gestation [19]. A nested case-control study of pregnant women in Puerto Rico found that preterm birth (a dichotomous measure of shortened gestation) was significantly correlated with both urine GLY and its presumptive metabolite aminomethylphosphonic acid (AMPA) measured at 26 weeks gestation [20]. Thus, these new findings provide evidence that, in selected populations, GLY may shorten the length of pregnancy.

In the U.S., 8% of the pregnant population have high-risk pregnancies, that is, pregnancy complications that put the health or life of a woman or her fetus at risk. Such high-risk pregnancy complications may involve medical and obstetrical issues like preexisting diabetes, chronic high blood pressure, preeclampsia, preterm labor, and other complex medical conditions that affect pregnancy. Complications may also involve unexpected problems during pregnancy, such as early labor, bleeding, or high blood pressure, or babies who may have birth defects or growth problems. Some complications are interrelated: hypertension, for example, increases the woman’s risk of stroke or heart attack, often decreases fetal growth (intrauterine growth restriction), increases the risk of placental abruption, and contributes to early labor and preterm birth. The extent to which GLY exposure may impact pregnancies and contribute to adverse outcomes is not well understood.

As our previous low-risk cohort found an association between GLY and shortened pregnancy, we sought to conduct a larger study of more diverse women with higher-risk pregnancies living across the state of Indiana.
We collected urine from pregnant women attending a university-based Maternal-Fetal Medicine Specialty Obstetrics Clinic that specializes in the care of mothers and fetuses during high-risk pregnancies, with the chief aims to: (1) establish the prevalence of detectable urine GLY in high-risk pregnancies, (2) correlate urine GLY levels with maternal characteristics and co-morbidities in high-risk pregnancies, and (3) investigate correlations between first trimester maternal GLY levels and fetal growth, indicated by birth weight percentiles and risk of NICU admission.

Methods

Study participants

A subset of women in early pregnancy was identified from our Indiana Pregnancy Environmental Exposures Study (PEES) cohort, which consists of 853 high-risk pregnancies from 822 women with over 3000 urine samples. Inclusion criteria for the PEES cohort required a living conceptus at the time of sample collection in women who were at least 18 years of age. Discarded random urine samples were collected during the course of prenatal care from the Maternal-Fetal Medicine Specialty Obstetrics Clinic at Indiana University Hospital between 2013 and 2017. This clinic offers specialty obstetrics care for a wide variety of maternal or fetal conditions considered to be high-risk, including diabetes, recurrent pregnancy loss, prior stillbirth, prior preterm birth, cervical insufficiency, chronic or gestational hypertension, cardiac or other diseases, substance abuse, fetal malformations, fetal growth problems, and alloimmunization. The racial/ethnic makeup of the PEES cohort is 77% White, 19% Black, 2% Asian, and 1% Hispanic. In addition, women from 68 of Indiana’s 92 counties are represented, with over half the subjects living outside Marion County (Indianapolis). All PEES study samples were transported to the processing laboratory at the end of the clinic day, assigned a unique study ID, and stored frozen at -20 °C; long-term storage was at -80 °C and maintained in the Indiana Clinical and Translational Sciences Institute (CTSI) Specimen Storage Facility (SSF). The research protocols were approved by the Indiana University Institutional Review Board with waivers of informed consent and authorization for the use of protected health information, as the study involved no more than minimal risk of loss of privacy to the subject and no participant contact was attempted.

For the present study, all high-risk pregnancies with a first trimester (< 14 weeks) urine sample (≥ 1 mL) that had not undergone a freeze-thaw cycle were initially selected for urine GLY analysis (n = 215). One batch of 28 samples did not pass the analytical laboratory’s quality control criteria and was not analyzed further. Statistical analysis of birth outcomes (BWT%ile and NICU admission) was conducted on 155 first trimester samples, which also excluded 32 pregnancies with multiple gestations, fetal loss, or unverified deliveries. The study population consisted of 155 pregnancies resulting in 155 singleton live births from 150 women.

Abstraction of medical records

Study data were abstracted for each pregnancy from electronic medical records following newborn delivery, and coded information was stored in a secure research electronic data capture (REDCap) database. All medical records were collected without prior knowledge of urine GLY results. Each maternal record was reviewed for pre-pregnancy factors, pregnancy risk factors, gestational length, and fetal growth indicators as well as neonatal outcomes.
Gestational length was calculated in days based on last menstrual period and obstetrical adjustment by first ultrasound. Pre-pregnancy factors included maternal age, parity, race/ethnicity, education, employment, insurance, marital status, county and state of residence, pre-existing health conditions, and substance use. Maternal county and state of residence were used to construct metro classifications for each participant based upon the 2013 National Center for Health Statistics (NCHS) Urban-Rural Classification Scheme for Counties [21]. In Indiana, only Marion County is defined as a large central metro area: a county population of 1 million or more with a high population density per square mile and a low percentage of exurban/rural population.

Pregnancy risk factors collected for this study were medical diagnoses obtained following perinatal screenings for healthcare purposes that were documented in electronic medical records. Study data included ongoing maternal diseases, hypertensive disorders of pregnancy (chronic hypertension, pregnancy-induced hypertension, pre-eclampsia, and eclampsia), diabetes (pre-existing and gestational), substance use (drug, alcohol, tobacco, and caffeine use), stress, body mass index (BMI) at delivery, duration of pregnancy, and route of delivery. For purposes of this study, the American College of Obstetrics and Gynecology (ACOG) definitions for hypertensive disorders of pregnancy and diabetes were used. Maternal stress was defined as high stress during pregnancy that was self-reported during obstetrical evaluation, as noted in the patient’s electronic medical record. Neonatal factors included race/ethnicity, sex, fetal growth indicators such as birth weight and gestational age, NICU transitional care and admission, as well as neonatal diagnoses such as congenital anomalies, neonatal abstinence, respiratory distress, and prematurity. To standardize fetal growth across gestations, gestational age and birth weight were used to calculate a BWT%ile for each liveborn infant using the Fenton 2013 growth curves for each sex [22].

Analytical method

All urine samples were de-identified and coded prior to shipping to the University of California San Francisco Clinical Toxicology and Environmental Biomonitoring Laboratory for GLY analysis. Urine GLY was measured by standard addition using an Agilent 1260 LC (Agilent Technologies Inc., Santa Clara, CA) coupled to an AB Sciex 5500 triple quadrupole MS (SCIEX, Redwood City, CA). Isocratic elution chromatography was performed using an Obelisc-N mixed-mode column (2.1 × 100 mm, 5 μm) maintained at 40 °C. A 25 µL sample was injected into the column, and GLY was eluted using a mobile phase of 1% formic acid in bisphenol-A-free water at a flow rate of 1 mL/min with a total run time of 6 min.

Mass spectral analysis was performed using an electrospray ionization source operated in negative mode. The parameters used for ionization included curtain gas, 20 psi; collision gas, 9 psi; ion spray voltage, -4500 V; temperature, 700 °C; and ion source gas, 60 psi. GLY was monitored using two transitions: 168.1 → 62.9 m/z (quantifier) and 168.1 → 81.0 m/z (qualifier). We used 2-13C,15N-GLY as an internal standard, monitored using the 169.4 → 63.0 m/z transition. Quantitative analysis of GLY was done by the isotope dilution method.

Each batch of samples was injected in duplicate. Procedural quality control (QC) materials and procedural blanks were run along with the samples at the start, middle, and end of each run.
Two QC materials were used, at low and high concentrations. To accept the results of a batch run, QC material measurements had to be within 20% of their target values. GLY identification from total ion chromatograms was evaluated using AB Sciex Analyst v2.1 software, while quantification was processed using AB Sciex MultiQuant v2.02 software.

All measured urine samples were corrected for specific gravity in our analysis to account for differences in urine dilution. The established limits of quantification (LOQ) and detection (LOD) for GLY in urine were 0.5 and 0.1 ng/mL, respectively [23, 24].

Statistical analyses

Primary clinical outcome measures were the numerical variable of birth weight adjusted for gestation, BWT%ile, and the binary variable of NICU admission during newborn hospitalization. The continuous measure of exposure, urine GLY (ng/mL), was considered the major independent variable of interest. Socio-demographics, substance use, and other pregnancy- and delivery-related risk factors were categorized and used as controlling covariates and moderators in the analysis. The primary biostatistical methods were a linear regression model to assess the relationship between BWT%ile and GLY and a logistic regression model to assess the relationship between NICU admission risk and GLY. Both adjusted and unadjusted models were used. The unadjusted or univariate models used GLY as the only independent variable, while the adjusted or multivariate models included other controlling covariates along with GLY as independent variables. To assess whether the relationships of the dependent variables to the GLY level were moderated by a controlling covariate, we used linear and logistic regression models with GLY, the controlling covariate or moderator, and their interaction as independent variables. A moderator was considered significant if the p-value of the interaction term was less than 0.05. In addition, slopes of GLY in subgroups stratified by the moderator were estimated from the linear or logistic regression model and used to assess the relationships within the subgroups. Finally, one-way fixed effect models (one-way ANOVA models) were used to assess the associations of GLY with the categorical controlling covariates. Statistical models were computed using SAS 9.4 software (SAS, Cary, NC). P values less than 0.05 were considered statistically significant.

Results

All pregnancies with a first-trimester (< 14 weeks) urine sample (≥ 1 mL) that had not undergone a freeze-thaw cycle in PEES were initially selected for this study (n = 215). One batch of 28 samples did not meet laboratory quality control criteria. Of the remaining 187 samples, 186 (99.5%) had GLY levels > LOD. To determine the relationship between first trimester GLY urine measurements and outcomes of singleton newborns, 32 pregnancies were excluded (20 with fetal losses, 9 with multiple gestations, 2 unverified deliveries, and 1 with GLY < LOD). Thus, the subset study cohort included 155 pregnancies with singleton liveborn infants. The study population included pregnant women from 32 Indiana counties and one adjacent Illinois county; 57% of the participants lived in Indiana’s large central metropolitan area. The mean maternal age was 29 years (range 18–45 years). The mean pregnancy length was 37.9 weeks (mean ± SD, 265.2 ± 12.7 days).
Characteristics of the study participants and their newborns, along with the maternal first trimester urine GLY levels, are summarized in Table 1.

Table 1: Mother and infant characteristics and first trimester glyphosate (GLY)

Associations between maternal characteristics and GLY levels

The mean (± SD) GLY level was 3.33 ± 1.67 ng/mL (range 1.02–10.31 ng/mL) for the eligible study population (n = 155). Comparatively, the mean (± SD) GLY level for the excluded individuals (n = 32) was 2.86 ± 1.41 ng/mL (range 0.10–6.89 ng/mL). No statistically significant differences were found between the two groups based upon both parametric and non-parametric tests (Table 2). In a sensitivity analysis we also added the single participant whose GLY was < LOD, using the LOD as the GLY level. The findings were not changed, and thus the single participant with a GLY level < LOD was excluded from the final 155 cases.

Table 2: Summary of urine glyphosate (GLY) levels based upon eligibility criteria

No differences in GLY levels were found based on maternal age, race, or residence in a large central metro area; however, GLY appears to be associated with maternal education. In particular, participants with less than a high school degree had significantly higher urine GLY levels compared with groups with a high school degree or higher (mean ± SE, 5.16 ± 0.61 vs. 3.07 ± 0.26 ng/mL, p = 0.003). Though higher maternal GLY levels were found in less-educated women, there were only seven women in the less-than-high-school group, two of whom had urinary GLY levels of 8.77 ng/mL and 10.31 ng/mL, the highest among all participants. Statistical significance was not found after these two individuals were removed from the analysis (p = 0.146).

Higher GLY levels were also found in pregnant women who used tobacco during pregnancy (p = 0.051), while significantly lower GLY levels were found in participants who consumed caffeine during pregnancy (p = 0.018). Although not statistically significant, higher GLY levels were found in participants whose newborns were treated for neonatal abstinence syndrome (p = 0.067); however, there were no differences in GLY levels between participants who did or did not use opioids, cannabis, or polysubstances (multiple drug use). Paradoxically, significantly lower GLY levels were found in pregnant women with diabetes (p = 0.03), while there were no differences in GLY levels based on delivery BMI.

Relationship between maternal urine GLY and newborn birth weight

To investigate whether maternal GLY levels in the first trimester of pregnancy were related to fetal growth, we used linear regression models to assess the relationship between GLY and newborn birth weight adjusted for gestation (BWT%ile). The mean (± SD) BWT%ile for the singleton newborns was 47.7 ± 30.1. We found that BWT%ile was negatively related to GLY (slope ± SE = -0.041 ± 0.014, p = 0.004) (Fig. 1). This negative relationship remained significant after controlling for social demographic and geographic characteristics (maternal age, race, education, employment, marriage, residence, and infant sex), health characteristics (maternal hypertension, diabetes, and delivery BMI), and behavior characteristics (tobacco, alcohol, caffeine, opioid/THC or illicit polysubstance use) in the adjusted model (slope ± SE = -0.032 ± 0.014, p = 0.023) (Table 3).
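For readers who want to see the shape of these analyses, here is a minimal sketch of the adjusted linear model and the moderation test described above, written with pandas and statsmodels; the column names and the input file are hypothetical stand-ins, since the PEES data are not reproduced here (the study itself used SAS 9.4).

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical abstracted cohort table (one row per mother-infant pair).
df = pd.read_csv("pees_cohort.csv")

# Adjusted linear model: BWT%ile vs. urine GLY, controlling for a
# subset of the covariates named in the text (abbreviated for brevity).
bwt_model = smf.ols(
    "bwt_pctile ~ gly + maternal_age + C(race) + C(tobacco_use) + delivery_bmi",
    data=df,
).fit()
print(bwt_model.params["gly"], bwt_model.pvalues["gly"])

# Moderation test: a GLY-by-residence interaction term; the moderator
# is flagged as significant if the interaction's p-value is < 0.05.
mod_model = smf.ols("bwt_pctile ~ gly * C(metro_residence)", data=df).fit()
print(mod_model.pvalues)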
Tobacco use, alcohol use, opioid/THC or polysubstance use, maternal hypertension, residence, and infant sex were found to be significant moderators when each was interacted individually with the relationship between GLY and BWT%ile. In particular, when comparing the relationship between GLY and BWT%ile in subgroups, there was a stronger negative relationship in participants using tobacco (slope ± SE = -0.046 ± 0.019, p = 0.016) compared with those who did not smoke (slope ± SE = -0.027 ± 0.022, p = 0.208). Similar findings were observed in participants with opioid/THC or illicit polysubstance use compared with those with no use of these substances. On the other hand, the negative relationship between GLY and BWT%ile was weaker for participants who used alcohol or had hypertension. While we did not observe significant differences in GLY levels between participants living in or outside of Indiana’s large central metropolitan area (Table 1), we found that women living outside of Indiana’s large central metropolitan area were more likely to have a negative relationship between BWT%ile and GLY (Table 3).

We also noted that BWT%ile in this cohort was significantly lower in non-Hispanic Black than in non-Hispanic white pregnancies (25.7 vs. 53.8, p < 0.001) and that the negative relationship between GLY and BWT%ile was strongest for male infants. However, this study cohort was too small to further investigate the relationship between first trimester urine GLY and BWT%ile by maternal race and infant sex.

Additional analysis was performed to further assess the relationships among the dependent variable (BWT%ile), the independent variable (GLY), and the maternal and infant characteristics listed in Table 1 (i.e., alcohol use, tobacco use, substance use, hypertensive disorders of pregnancy, diabetes, infant sex, etc.) (see Additional File 1, Supplemental Table 1).

In a sensitivity analysis we adjusted GLY for gestational age using a regression method and repeated the statistical analysis using this adjusted GLY instead of the unadjusted GLY. The findings were the same as those reported in our results, so the results using adjusted GLY are not reported. In another sensitivity analysis we also adjusted GLY for gestational age at the time of urine collection (4–13 weeks); this analysis did not change the findings (data not shown).

Table 3: Relationship between first trimester urine glyphosate (GLY) and gestation-adjusted birth weight (BWT%ile)

Fig. 1: Infant birth weight percentile vs. first trimester urine glyphosate level. Scatter plot of maternal urine glyphosate measures in the first trimester of pregnancy and newborn birth weight adjusted for gestation. Each point represents one mother-infant pair (n = 155). The red line indicates the slope calculated from the linear regression model. Each one-unit increase in GLY resulted in a 4.1% drop in birth weight percentile.

Relationship between maternal urine GLY and newborn admission to intensive care

We used logistic regression models to investigate whether maternal GLY levels in the first trimester of pregnancy were related to the risk of their newborn being admitted to the NICU. Sixty-nine of 155 infants (44.5%) from this high-risk obstetrical cohort were admitted to the NICU after birth. The odds ratio (OR) of NICU admission corresponding to a one-unit increase of GLY was 1.21 (95% CI: 0.99, 1.38, p = 0.069) (Table 4).
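As a quick note on interpreting that figure: in a logistic regression the OR per unit of exposure is the exponentiated model coefficient, so the published value can be back-calculated and extrapolated to larger exposure differences (an illustration only, not an analysis from the paper).

import math

or_per_unit = 1.21                 # published OR per 1 ng/mL GLY
beta = math.log(or_per_unit)       # implied logit coefficient, ~0.191
print(f"beta ~ {beta:.3f}")
# Under this model, a 3 ng/mL difference in GLY would multiply the
# odds of NICU admission by:
print(f"OR over 3 units ~ {math.exp(3 * beta):.2f}")   # ~1.77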
When the adjusted model was used, the positive relationship was also not significant (p = 0.233). However, significant relationships were observed in subset analyses among the significant moderators of alcohol use, opioid/THC or polysubstance use, maternal hypertension, residence, and infant sex. Significant positive relationships between NICU admission and GLY were found in participants with no use of alcohol or opioid/THC or polysubstances, no hypertension, and those living in Indiana's large central metropolitan area (Table 4: Relationship between first trimester glyphosate (GLY) and neonatal intensive care unit (NICU) admission). Additional analysis was performed to further assess the relationships among the dependent variable (NICU admission), the independent variable (GLY), and the maternal and infant characteristics listed in Table 1 (i.e., alcohol use, tobacco use, substance use, hypertensive disorders of pregnancy, diabetes, infant sex, etc.) (see Additional File 1, Supplemental Table 1).

Discussion

The major finding of this study is that GLY was detectable in the urine of 99% (186/187) of midwestern US women with high-risk pregnancies in their first trimester. A mean urine GLY level of 3.3 ng/mL was found in the pregnancies resulting in a singleton live birth (n = 155). These findings are in close agreement with our previous study, which found a mean urine GLY level of 3.4 ng/mL in 93% (66/71) of Indiana women with low-risk pregnancies [18]. Participants in our earlier study were predominantly Caucasian (94%) and lived in central Indiana. Therefore, the high incidence of GLY detection reported herein is not likely related to the high-risk nature of the pregnancies, but rather to the widespread environmental use of GLY.

Unexpectedly, this study found lower urine GLY levels in women with any form of diabetes (type I, type II, or gestational) (p = 0.03) and in women who consumed caffeine (p = 0.018) during pregnancy (Table 2). The reason for this finding is unclear, but possible explanations include dietary changes in pregnant women who are being treated for diabetes and the physiological consequences of polyuria, which is seen more often with diabetes and caffeine consumption. This finding is in contrast to the association of maternal GLY with diabetes and caffeine in our previous low-risk cohort, which included fewer participants and only six women with diabetes; therefore, further study is needed to clarify the true association. Whereas GLY was correlated with shortened pregnancy length in our previous investigation of low-risk pregnancies [18], that finding was not replicated in this study, most likely because of the many co-morbidities of high-risk pregnancies that lead to an obstetrical decision to deliver the baby at an earlier gestation time point.

To our knowledge, there are no animal or human studies which correlate prenatal glyphosate exposure with fetal growth or BWT%ile. An analysis from the Agricultural Health Study, using self-report as a measure of GLY exposure in pregnancy, did not find a statistically significant association with birth weight; however, birth weight percentile was not measured, and most participants were applicators with multiple pesticide exposures [17]. Our previous study measuring birth weight percentiles in low-risk pregnancies did not find a statistically significant association with GLY. Though we did find a trend toward lower birth weight percentiles, our data suggested that a larger sample size might be warranted [18].
An animal study found that perinatal exposure to GLY did not alter the weight of the exposed animals compared with control groups; however, the exposed rodents were not weighed until postnatal day 90 (PND90), which is considered adulthood, and that study did not examine birth weights or birth weight percentiles [25]. Another animal study found weaning weights to be significantly lower in rodents exposed to glyphosate in utero, and further showed increased rates of adult obesity in F2 and F3 descendants of the GLY-exposed lineage [26].

Our finding that higher urine GLY levels in pregnancy were associated with lower BWT%iles is compatible with a rodent study in which smaller birth weights were observed, although litter effects were not accounted for [27]. The GLY exposure of our cohort of pregnant women in the first trimester approximates the time of exposure of rats showing marked fetal epigenetic (DNA methylation) alterations with subsequent transgenerational effects [13, 14]. Thus, the first trimester GLY exposure we observed may have long-term health consequences [44].

A consensus group expressed worldwide concern, directed at scientists, physicians, and regulatory officials, about the unanticipated risks to human health and the environment arising from the increasing use of GBHs. They suggested that scientifically up-to-date human studies based on biomonitoring should be prioritized by US government regulatory agencies [28]. When a new chemical is approved for release into the environment such that it is found in every sector (rain, dust, water, air, food, and beverages) and in 94–99% of pregnant women, it is vital that extensive safety measures be undertaken. Indirect causes of toxicity, such as those linking GLY with an altered microbiome and perhaps with adverse outcomes such as autism spectrum disorder, are examples of potential harm that have not been considered in regulatory agency decisions [29]. It has recently been shown that GLY and a GBH can inhibit the shikimate pathway of bacteria present in the rat gut microbiome, with possible health consequences [30]. Furthermore, bioinformatics scrutiny of the human microbiome database, particularly that pertaining to the gut, revealed many bacterial
Environmental Science
Heat-loving marine bacteria can help detoxify asbestos Asbestos materials were once widely used in homes, buildings, automobile brakes and many other built materials due to their strength and resistance to heat and fire, as well as to their low electrical conductivity. Unfortunately, asbestos exposure through inhalation of small fiber particles has been shown to be highly carcinogenic. Now, for the first time, researchers from the University of Pennsylvania have shown that extremophilic bacteria from high temperature marine environments can be used to reduce asbestos' toxicity. The research is published in Applied and Environmental Microbiology. Much of their research has focused on use of the thermophilic bacterium Deferrisoma palaeochoriense to remove iron from asbestos minerals through anaerobic respiration of that iron. "Iron has been identified as a major component driving the toxicity of asbestos minerals and its removal from asbestos minerals has been shown to decrease their toxic properties," said Ileana Pérez-Rodríguez, Ph.D., Assistant Professor of Earth and Environmental Science at the University of Pennsylvania. D. palaeochoriense has also been shown to mediate transfer of electrical charge within the iron contained in asbestos, without changing its mineral structure. Doing so might enhance asbestos' electrical conductivity, said Pérez-Rodríguez. Based on this observation, the bacterium could be used to treat asbestos' toxicity through iron removal. Alternatively, the new properties of electrical conductivity could enable reuse of treated asbestos for that purpose. As with iron, the fibrous silicate structures of asbestos are also carcinogenic. Removal of silicon and magnesium from asbestos has been shown to disrupt its fibrous structure. The investigators tested the ability of the thermophilic bacterium Thermovibrio ammonificans to remove these elements from asbestos minerals by accumulating silicon in its biomass in a process known as biosilicification. T. ammonificans accumulated silicon in its biomass when in the presence of "serpentine" asbestos, which has curly fibers, but not while growing in the presence of "amphibole" asbestos, which has straight fibers, said Pérez-Rodríguez. This difference, along with the varying amounts and types of elements released during microbe-mineral interactions with different types of asbestos "highlights the difficulty of approaching asbestos treatments as a one-size-fits-all solution, given the unique chemical compositions and crystal structures associated with each asbestos mineral," Pérez-Rodríguez said. Overall, these experiments promoted the removal of iron, silicon and/or magnesium for the detoxification of asbestos in a superior manner as compared to other biologically mediated detoxification of asbestos, such as via fungi, said Pérez-Rodríguez. However, further analysis will be required to optimize asbestos treatments to determine the most practical methods for the detoxification and/or reuse of asbestos as secondary raw materials. More information: Jessica K. Choi et al, Microbe-Mineral Interactions between Asbestos and Thermophilic Chemolithoautotrophic Anaerobes, Applied and Environmental Microbiology (2023). DOI: 10.1128/aem.02048-22 journals.asm.org/doi/10.1128/aem.02048-22 Journal information: Applied and Environmental Microbiology Provided by American Society for Microbiology
Environmental Science
Electric roads would pave the way for smaller car batteries, shows modeling study

If an electric car charges while driving, the size of the battery can be reduced by up to 70 percent, and the load on the power grid can be spread out over the day. Charging on the move suits most people, but not everyone. This is shown by a new study from Chalmers University of Technology, Sweden, in which researchers for the first time combine the so-called electric road system with real-life driving patterns of Swedish drivers. The Swedish government has proposed a ban on new petrol and diesel cars from 2030 to reduce carbon dioxide emissions. The same trends are seen across Europe, as demonstrated by the rapidly increasing sales of electric vehicles. As this development progresses, challenges are also increasing, including the uneven load on the power grid and where to charge the electric vehicles.

Different ways of charging vehicles on the move

Several countries, including Sweden, Denmark, and Germany, are testing whether an electric road system (ERS) can be used to electrify road networks. An ERS charges moving vehicles with either loops in or next to the road, or with wires suspended above vehicles, similar to trams and trains. All variants mean that vehicles do not need to be parked to charge, and there is less need for large batteries to store energy and overcome "range anxiety," a driver's fear or concern about the distance their electric vehicle can travel before the battery needs to be charged. Now researchers from Chalmers have used data from over 400 passenger cars to study real driving patterns on different parts of Swedish national and European roads. They have used the data to calculate, among other things, the battery size needed to complete all journeys given possible charging options (stationary versus ERS), charging patterns, and total costs including infrastructure and batteries.

Smaller battery results in lower costs

The results show that a combination of electric roads on 25 percent of the busiest national and European roads and home charging would be optimal. The batteries, which account for a large part of the cost of an electric car, can become significantly smaller, at best only one-third of the current size. "We see that it is possible to reduce the required range of batteries by more than two thirds if you combine charging in this way. This would reduce the need for raw materials for batteries, and an electric car could also become cheaper for the consumer," says Sten Karlsson, who, together with research colleagues Wasim Shoman and Sonia Yeh, is behind the study "Benefits of an Electric Road System for Battery Electric Vehicles." Other positive effects are that peaks in electricity consumption would be reduced if car drivers did not rely entirely on home charging but supplemented it with electric road charging. "After all, many people charge their cars after work and during the night, which puts a lot of strain on the power grid. By instead charging more evenly throughout the day, peak load would be significantly reduced."

Limited benefit in sparsely populated areas

But different groups of motorists also have different conditions for benefiting from the combination of stationary charging and ERS. "There are big differences between groups, depending on driving patterns and proximity to electric roads.
Even in the optimal case, some would manage with only electric road charging, while others would not be able to use the opportunity at all. For example, we see that those who live in the countryside would need almost 20 percent greater range on their batteries compared to those who live in a city center," says Wasim Shoman. The study also shows that small batteries do not automatically lead to charging through ERS. "Just because you can charge does not mean the consumer actually wants to do it at every given opportunity. The business model, therefore, becomes extremely important because benefits and costs may become unevenly distributed. And there are no decisions yet on what the business model should look like," says Sten Karlsson. The study is published in the journal Environmental Science & Technology. More information: Johannes Morfeldt et al, If Electric Cars Are Good for Reducing Emissions, They Could Be Even Better with Electric Roads, Environmental Science & Technology (2022). DOI: 10.1021/acs.est.2c00018
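The article does not publish the Chalmers model itself, but the core battery-sizing idea it describes can be sketched as a toy calculation: size the battery for the worst trip, after crediting the energy picked up on electrified road segments. The consumption figures, ERS charging rate, and trips below are invented assumptions, not the study's method or data.

```python
# Hypothetical sketch: size a battery so every logged trip can be completed,
# given that kilometres driven on an electric road (ERS) are charged on the move.
# Consumption, ERS power, and trip data are illustrative assumptions only.

CONSUMPTION_KWH_PER_KM = 0.18   # assumed average consumption
ERS_CHARGE_KWH_PER_KM = 0.30    # assumed net energy picked up per km on ERS

def required_battery_kwh(trips):
    """trips: list of (total_km, ers_km) tuples between full home charges."""
    worst_case = 0.0
    for total_km, ers_km in trips:
        demand = total_km * CONSUMPTION_KWH_PER_KM
        supplied_on_road = ers_km * ERS_CHARGE_KWH_PER_KM
        worst_case = max(worst_case, demand - supplied_on_road)
    return max(worst_case, 0.0)

# Example: three days of driving, with and without ERS coverage
trips = [(60, 0), (250, 180), (400, 300)]
print("No ERS:   %.1f kWh" % required_battery_kwh([(t, 0) for t, _ in trips]))
print("With ERS: %.1f kWh" % required_battery_kwh(trips))
```

In this made-up example the long trips are largely covered by electrified roads, so the required battery shrinks by well over two-thirds, which mirrors the scale of reduction the researchers report.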
Environmental Science
By Matt McGrath, Environment correspondent

Image caption: PFAS have been found in rain in Tibet

New research shows that rainwater in most locations on Earth contains levels of chemicals that "greatly exceed" safety levels. These synthetic substances, called PFAS, are used in non-stick pans, fire-fighting foam and water-repellent clothes. Dubbed 'forever chemicals', they persist for years in the environment. Such is their prevalence now that scientists say there is no safe space on Earth to avoid them. The researchers from Stockholm University say it is "vitally important" that the use of these substances is rapidly restricted. Scientists fear PFAS may pose health risks including cancer, though research has so far been inconclusive. They have been growing increasingly concerned about the proliferation of PFAS in recent years. PFAS stands for poly- and perfluoroalkyl substances. There are around 4,500 of these fluorine-based compounds and they are found in almost every dwelling on Earth in hundreds of everyday products including food packaging, non-stick cookware, rain gear, adhesives, paper and paints.

Image caption: Fire fighting foams often contain PFAS chemicals

Safety concerns about the presence of these long-lasting substances in drinking water have also been raised. Earlier this year a BBC investigation found PFAS in water samples in England at levels that exceeded European safety levels, but did not exceed the current safety level in England and Wales. This new study, which looks at four specific chemicals in the class, suggests that levels of one PFAS in rainwater around the globe often "greatly exceed" US drinking water advisory levels. Soil around the world is similarly contaminated, evidence suggests. The study's findings lead the authors to conclude that a planetary boundary has been crossed: there simply is no safe space on Earth to avoid these substances. "We argue here that we're not within this safe operating space anymore, because we now have these chemicals everywhere, and these safety advisories, we can't achieve them anymore," said Prof Ian Cousins, the lead author from Stockholm University. "I'm not saying that we're all going to die of these effects. But we're in a place now where you can't live anywhere on the planet, and be sure that the environment is safe." While this is undoubtedly cause for concern, there are some provisos. Many of these safety levels are advisory, meaning they are not legally enforceable. Other scientists take the view that action on these chemicals should wait until the health risks are more clearly proven. Much research has been carried out on the health risks posed by PFAS, and scientists say that exposure to high levels may be associated with an increased risk of some cancers, fertility issues and developmental delays in children. However, such associations don't prove cause and effect, and other studies have found no connection between PFAS and disease.

Image caption: Scientists drilling for ice samples in Antarctica to see how far PFAS have travelled

But for those who have spent years working closely with PFAS, the evidence in the new research paper underlines the need for a precautionary approach. "In this background rain, the levels are higher than those environmental quality criteria already.
So that means that over time, we are going to get a statistically significant impact of those chemicals on human health," said Prof Crispin Halsall from the University of Lancaster, who was not involved with the Swedish study. "And how that will manifest itself? I'm not sure, but it's going to come out over time, because we're exceeding those concentrations which are going to cause some harm, because of exposure to humans in their drinking water." Removing the chemicals in the study from drinking water at treatment plants is possible, if expensive.

Image caption: Rainwater all over the planet exceeds US safety guidelines, say scientists

But getting below the US advisory levels is extremely challenging, according to the authors. As scientists have gained more knowledge about PFAS over the past 20 years, the safety advisories have been continuously lowered. The same has happened with regard to the presence of these chemicals in soil, and that too is causing problems. In the Netherlands in 2018, the infrastructure ministry set new limits on concentrations of PFAS in soil and dredging material. But this caused 70% of building projects involving soil removal or using excavated material to be halted. After protests, the government relaxed the guidelines. According to the new study, this type of relaxation of safety levels is likely to happen with water contamination as well. "If you applied those guidelines everywhere, you wouldn't be able to build anywhere," said Prof Ian Cousins. "I think they'll do the same thing with the US drinking water advisories, because they're not practical to apply. It's not because there's anything wrong with the risk assessment. It's just because you can't apply those things. It's just impossible, from an economic viewpoint, to apply any of those guidelines."

Image caption: A Netherlands construction site - many projects in the country had to stop because of restrictions on PFAS

The key challenge with these chemicals is their persistence, rather than their toxicity, say the study authors. While some harmful PFAS were phased out by manufacturers two decades ago, they persist in water, air and soil. One way PFAS cycle through the environment is in the form of tiny particles carried in sea spray into the air and then back to land. This inability to break down in the environment means that PFAS are now found even in remote areas of the Antarctic, as reported by Prof Halsall recently. While there are moves at European level to restrict the uses of these chemicals and to find more benign replacements, there are also hopes that industry will quickly move away from using PFAS. "We do need persistent chemicals and substances; we want our products to last a long time while we use them," said Prof Cousins. "And while there are conservative voices in industry, there are progressive actors too. I'm very optimistic when I see these progressive industries working together." The research has been published in the journal Environmental Science & Technology.
Environmental Science
Humans have increased the concentration of potentially toxic mercury in the atmosphere sevenfold since the beginning of the modern era around 1500 C.E., according to new research from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). The research team, led by Elsie M. Sunderland, the Fred Kavli Professor of Environmental Chemistry and Professor of Earth and Planetary Sciences, developed a new method to accurately estimate how much mercury is emitted annually from volcanos, the largest single natural emitter of mercury. The team used that estimate — along with a computer model — to reconstruct pre-anthropogenic atmospheric mercury levels. The researchers estimated that before humans started pumping mercury into the atmosphere, it contained on average about 580 megagrams of mercury. However, in 2015, independent research that looked at all available atmospheric measurements estimated the atmospheric mercury reservoir was about 4,000 Mg — nearly 7 times larger than the natural condition estimated in this study. Human emissions of mercury from coal-fired power plants, waste-incineration, industry and mining make up the difference. "Methylmercury is a potent neurotoxicant that bioaccumulates in fish and other organisms — including us,” said Sunderland, senior author of the paper. “Understanding the natural mercury cycle driven by volcanic emissions sets a baseline goal for policies aimed at reducing mercury emissions and allows us to understand the full impact of human activities on the environment.” The research is published in Geophysical Research Letters. The challenge with measuring mercury in the atmosphere is that there’s not very much of it, despite its outsized impact on human health. In a cubic meter of air, there may be only a nanogram of mercury, making it virtually impossible to detect via satellite. Instead, the researchers needed to use another chemical emitted in tandem with mercury as a proxy. In this case, the team used sulfur dioxide, a major component of volcanic emissions. “The nice thing about sulfur dioxide is that it’s really easy to see using satellites,” said Benjamin Geyman, a PhD student in Environmental Science & Engineering at SEAS and first author of the paper. “Using sulfur dioxide as a proxy for mercury allows us to understand where and when volcanic mercury emissions are occurring.” Using a compilation of mercury to sulfur dioxide ratios measured in volcanic gas plumes, the researchers reverse engineered how much mercury could be attributed to volcanic eruptions. Then, using the GEOS-Chem atmospheric model, they modeled how mercury from volcanic eruptions moved across the globe. The team found that while mercury mixes into the atmosphere and can travel long distances from its injection site, volcanic emissions are directly responsible for only a few percent of ground level concentrations in most areas on the planet. However, there are areas— such as in South America, the Mediterranean and the Ring of Fire in the Pacific — where levels of volcanic emissions of mercury make it harder to track human emissions. “In Boston, we can do our local monitoring and we don’t have to think about whether it was a big volcano year or a small volcano year,” said Geyman. “But in a place like Hawaii, you’ve got a big source of natural mercury that is highly variable over time. 
This map helps us understand where volcanos are important and where they aren't, which is really useful for understanding the impact of humans on long-term mercury trends in fish, in the air and in the ocean. It's important to be able to correct for natural variability in the volcanic influence in places where we think that influence may not be negligible." The research was co-authored by Colin Thackray and Daniel J. Jacob, the Vasco McCoy Family Professor of Atmospheric Chemistry and Environmental Engineering. It was supported by the National Science Foundation under grants 2210173 and 2108452. Journal information: Geophysical Research Letters
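The proxy approach described above can be sketched in a few lines: each volcano's satellite-derived sulfur dioxide emission is scaled by a measured Hg/SO2 ratio from plume sampling. The figures below are illustrative placeholders rather than values from the study, and the real analysis additionally used the GEOS-Chem model to transport the resulting emissions around the globe.

```python
# Hypothetical sketch of the Hg/SO2 proxy: scale each volcano's satellite-derived
# SO2 emission by a measured Hg:SO2 mass ratio to estimate its mercury emission.
# All numbers are illustrative placeholders, not values from the study.

volcano_so2_emissions_Mg = {      # annual SO2 emissions, megagrams (Mg)
    "volcano_a": 1_200_000,
    "volcano_b": 350_000,
}
hg_to_so2_mass_ratio = {          # plume measurements of Hg/SO2 (unitless mass ratio)
    "volcano_a": 8e-6,
    "volcano_b": 2.5e-5,
}

hg_emissions_Mg = {
    name: so2 * hg_to_so2_mass_ratio[name]
    for name, so2 in volcano_so2_emissions_Mg.items()
}
total_hg = sum(hg_emissions_Mg.values())
print(hg_emissions_Mg, "total Hg (Mg/yr):", round(total_hg, 2))
```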
Environmental Science
From Annapolis to the Arctic: Research schooner begins journey to examine the effects of climate change In the sunlight of a May afternoon on the Chesapeake Bay, a 72-foot schooner called the Marie Tharp floated above a shipwreck from long ago. The boat's instruments were hard at work, mapping the ruins of the sunken wooden steam freighter, the New Jersey, using sonar equipment affixed to its hull. But that effort Thursday afternoon was just a test, and the Chesapeake Bay a proving ground, for mapping the previously uncharted. On Monday, the crew of the Marie Tharp will set sail from Annapolis for Greenland to begin mapping the seabed around the massive North Atlantic island and analyze the environmental impacts of glacial melting caused by a warming climate. The ship is named after Tharp, a woman whose work charting the ocean floor, and revealing its peaks and valleys, laid the groundwork for the theory of continental drift. However, she wasn't allowed aboard research ships due to her sex. Tharp died in 2006 at the age of 86. From Annapolis, the ship will sail north to the Chesapeake and Delaware Canal, and go out through Delaware Bay to reach the Atlantic Ocean. In early June, the ship is scheduled to reach St. John's, Newfoundland. Next, the crew must traverse the Labrador Sea—known for its punishing conditions mingling fog, gale-force winds and ice—to reach Nuuk, Greenland. From there, the crew will explore a series of fjords, which are estuaries formed by glaciers, along the coast of Greenland and even further north in the Canadian Arctic, near Devon and Ellesmere islands. During their travels, they will gather seabed and water samples to assess the health of bodies of water in the wake of glacial retreat. Additionally, they will be charting what's below. It will be the second Arctic voyage for the Marie Tharp, a steel-hulled sailboat constructed in 2000 that serves as home base for the nonprofit Ocean Research Project. This type of voyage is nothing new for the Annapolis-based organization's founder—Matt Rutherford. Rutherford is best known for being the first to complete a nonstop solo sail circumnavigating North and South America. But after his 2011 trip, during which he "caught more plastic trash than I caught fish," Rutherford founded the Ocean Research Project, and—joined by scientist Nicole Trenholm of the University of Maryland Center for Environmental Science—decided to use his sailing prowess to map the eastern side of the North Atlantic Garbage Patch, a collection of microplastic and other trash concentrated by ocean currents. In 2015, the nonprofit's attention turned to climate change research, and it teamed up with NASA's Ocean Melting Greenland Program to conduct research in the coldest northern reaches of the planet. The idea was simple, but groundbreaking. Aboard his small sailboat, Rutherford could conduct research more cheaply and efficiently than with a large icebreaker, which would burn far more fuel along the way. But it takes tremendous skill—and a "sailor's intuition"—to avoid icebergs and endure storms and fog without the safety net that a heftier vessel provides, Rutherford said. It quickly became clear that the nonprofit was outgrowing its aging sailboat, which—at 42 feet—could only hold about four people at once. Enter the Marie Tharp, a Bruce Roberts Voyager 650 that hardly had been sailed at all when it was donated to the nonprofit. Outfitting the boat, which had fallen into disrepair, for dangerous voyages north required significant work. 
The coronavirus pandemic found Rutherford living in a boat yard on his new ship, a layer of sawdust coating his bed and his belongings as he completed the repairs. After the Tharp's first trip to Greenland last year, there were a few kinks to work out, Rutherford said. For one thing, the boat's multi-beam sonar system was attached to a pole in front of the boat, frequently endangered by ice formations and the elements. But now it is stowed on the bottom of the boat, protected by metal shark fins that will work to divert potentially damaging ice chunks. During the trip, the Marie Tharp and her passengers (seven to nine people, including a rotating cast of scientists with Arctic research to complete) will visit what Trenholm called the "dirtiest fjord in all of Greenland," located near the southwestern town of Paamiut. Water from melting glaciers has filled that fjord with excess sediment, which carries nutrients capable of upending the ecosystem's balance—the same phenomenon playing out in an estuary closer to home: the Chesapeake Bay. The water quality data collected by the Tharp and her crew will offer a critical snapshot of the current conditions stemming from climate change in the Arctic, assisting efforts to predict the future. And the seabed samples will tell the story of the fjord's history before glacier melt, in the same way rings on a trunk chart the life of a tree. The Ocean Research Project's latest trip has attracted the attention of consumer advocate and environmentalist Ralph Nader, now 89 years old, whose great-nephew Adnaan Stumo is a crew member. Nader, who ran four times for president, said he was struck by Rutherford's effort to make climate research cheaper and greener, calling it a bright spot amid otherwise distressing subject matter. "There's so much grim reality in this climate violence, climate catastrophe," Nader said. "And he's quite a unique person." Stumo, an experienced sailor who has completed trips across the Atlantic and Pacific oceans, said he learned about the Ocean Research Project while listening to Rutherford—one of his sailing heroes—speak on a podcast. When Rutherford mentioned that he was seeking sailors for his next voyage, Stumo reached out. And soon enough, he was set to join Rutherford and make his very first trip to the Arctic. This year, the crew is all volunteer. After a critical funding source dried up, none of the crew members will receive a salary during the five-month voyage, though all the research activities are paid for. That news did not deter Stumo. "I thought about it for a minute," he said. "But at the end of the day, if I wanted to get rich, I would be doing something else." The crew is mostly women, which is a first for sailor and shipwright Allie Gretzinger. "Usually, I'm the only female or there's maybe one other," said Gretzinger, who also will be making her first voyage to the Arctic aboard the Marie Tharp. Over the past several weeks, Gretzinger and Stumo have been among those helping Rutherford prepare the boat. For Stumo, that has included installing a wind generator while "swinging high in the air trying not to drop tools" and a Costco haul for the ages, featuring massive amounts of dry and canned goods, from pasta to tuna. The bill? More than $3,500, he said. "The girls behind the counter were kind of wide-eyed, like: 'Are you having a party?'" Stumo said. But a party it won't be. Stumo and the crew will battle the elements, navigating narrow fjords and dodging submerged icebergs the size of school buses.
Of course, that is all part of the adventure. But Stumo said he is more energized by the knowledge that he will be contributing to climate change research conducted in a "unique window" in time, as glaciers packed with geological clues recede into the land. In addition, he is sailing with the memory of his sister Samya Stumo, who perished at 24 years old in the 2019 Ethiopian Airlines crash that took the lives of all 149 passengers and eight crew members. At the time, Samya Stumo was heading to Kenya for her first project with a health systems development organization, trying to fulfill health care needs for vulnerable communities. "She would have had a 50-plus-year career and touched so many lives," Stumo said. "So, I am trying to do my small part and keep her in mind." © 2023 The Baltimore Sun. Distributed by Tribune Content Agency, LLC.
Environmental Science
New study shows social media content opens new frontiers for sustainability science researchers

With more than half of the world's population active on social media networks, user-generated data has proved to be fertile ground for social scientists who study attitudes about the environment and sustainability. But several challenges threaten the success of what's known as social media data science. The primary concern, according to a new study from an international research team, is limited access to data resulting from restrictive terms of service, shutdown of platforms, data manipulation, censorship and regulations. The study, published online March 17 in the journal One Earth, is the first known to evaluate the scope of environmental social media research and its potential to transform sustainability science. The 17-member research team analyzed 415 studies, published between 2011 and 2021, that examined social media content related to the environment. "Ideas about climate change and our environment are increasingly coming from social media," said Derek Van Berkel, assistant professor at the University of Michigan's School for Environment and Sustainability and one of the study's three lead authors. "Online communities like Reddit, or simply news stories shared by your friends on Facebook, have become digital landscapes where many ideas are shaped and formed." Understanding how those ideas are shaped aids science communicators in honing environmental messaging and prompts them to fill gaps where information is lacking or misrepresented. Despite the potential public benefits of social media data science, the authors argue, current business models of social media platforms have generated a vicious cycle in which user data is treated as a private asset that can be purchased or sold for profit. This has raised public concern and mistrust of social media companies, leading to a greater demand for more regulation. The study supports the idea of replacing this vicious cycle with a "virtuous cycle." "A virtuous cycle requires the collaboration of SM companies, researchers, and the public," said co-lead study author Johannes Langemeyer from the Institute of Environmental Science and Technology at the Autonomous University of Barcelona. "For their part, sustainability researchers can foster more trust and cooperation by embracing high ethical standards. Inclusivity, transparency, privacy protection, and responsible use of the data are key requirements—and will lead to an improved standardization of research practices moving forward," Langemeyer said. A promising example of cooperation from a social media platform was initiated in January 2021 when Twitter set a new standard for broader access to researchers by introducing a new academic research product track, which for the first time allowed free full-archive searches for approved researchers. Such an approach could have served as a model for wider open access across social media platforms. But confirming the fears of researchers, Twitter recently announced that as of Feb. 9, 2023, the company will no longer support free access. "SM data has the potential to usher in a revolution in the current practices of sustainability research, especially in the social sciences, with an impact on par with that of Earth observation in the environmental sciences," said co-lead study author Andrea Ghermandi from the Department of Natural Resources and Environmental Management at the University of Haifa in Israel.
The study concludes that social media data assessments can support the 2015 U.N. Sustainable Development Goals that serve as a universal call to action to end poverty, protect the planet, and ensure that by 2030 all people enjoy peace and prosperity. “Achieving the U.N. Sustainable Development Goals will require large-scale, multi-country efforts as well as granular data for tailoring sustainability efforts,” the study authors wrote. “The shared values and goals of working for a sustainable future may provide common ground for the cooperation needed to fully realize the contribution that SM data offers.” Funding support for the study came from multiple international and domestic sources, including the U.S. National Science Foundation, the Alfred P. Sloan Foundation, the Gordon and Betty Moore Foundation, and the German-Israeli Foundation for Scientific Research & Development.
Environmental Science
PFAS found in blood of dogs, horses living near Fayetteville, NC In a new study, researchers from North Carolina State University detected elevated PFAS levels in the blood of pet dogs and horses from Gray's Creek, N.C.—including dogs that only drank bottled water. The work establishes horses as an important sentinel species and is a step toward investigating connections between PFAS exposure and liver and kidney function in dogs and horses. The study included 31 dogs and 32 horses from the community, and was conducted at the behest of community members concerned about their pets' well-being. All of the households in the study were on well water, and all of the wells had been tested and deemed PFAS contaminated by state inspectors. The animals received a general veterinary health check and had their blood serum screened for 33 different PFAS chemicals. These PFAS were chosen based on compounds that were present in the Cape Fear River basin and the availability of analytical standards. From the targeted list of 33 PFAS of interest, researchers found 20 different PFAS in the animals. All of the animals in the study had at least one chemical detected in their blood serum, and over 50% of the dogs and horses had at least 12 of the 20 detected PFAS. PFOS, a long-chain PFAS used for years in industrial and commercial products, had the highest concentrations in dog serum. The perfluorosulfonic acid PFHxS, a surfactant used in consumer products and firefighting foams, was detected in dogs, but not horses. Consistent with wells being the known contamination source, some ether-containing PFAS including HFPO-DA (colloquially known as GenX), were detected only in dogs and horses that drank well water. In dogs who drank well water, median concentrations of two of the PFAS—PFOS and PFHxS –were similar to those of children in the Wilmington GenX exposure study, suggesting that pet dogs may serve as an important indicator of household PFAS. Dogs who drank bottled water, on the other hand, had different types of PFAS in their blood serum. However, 16 out of the 20 PFAS detected in this study were found in the dogs who drank bottled water. Overall, horses had lower concentrations of PFAS than dogs, though the horses did show higher concentrations of Nafion byproduct 2 (NBP2), a byproduct of fluorochemical manufacturing. The finding suggests that contamination of the outdoor environment, potentially from deposition of the PFAS onto forage, contributed to their exposure. "Horses have not previously been used to monitor PFAS exposure," says Kylie Rock, postdoctoral researcher at NC State and first author of the work. "But they may provide critical information about routes of exposure from the outdoor environment when they reside in close proximity to known contamination sources." Finally, the veterinary blood chemistry panels for the animals showed changes in diagnostic biomarkers used to assess liver and kidney dysfunction, two organ systems that are primary targets of PFAS toxicity in humans. "While the exposures that we found were generally low, we did see differences in concentration and composition for animals that live indoors versus outside," says Scott Belcher, associate professor of biology at NC State and corresponding author of the work. "The fact that some of the concentrations in dogs are similar to those in children reinforces the fact that dogs are important in-home sentinels for these contaminants," Belcher says. 
"And the fact that PFAS is still present in animals that don't drink well water points to other sources of contamination within homes, such as household dust or food." The work, title "Domestic Dogs and Horses as Sentinels of Per- and Polyfluoroalkyl Substance (PFAS) Exposure and Associated Health Biomarkers in Gray's Creek North Carolina," appears in Environmental Science and Technology. More information: Domestic Dogs and Horses as Sentinels of Per- and Polyfluoroalkyl Substance (PFAS) Exposure and Associated Health Biomarkers in Gray's Creek North Carolina, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.3c01146 Provided by North Carolina State University
Environmental Science
Rich industrialised countries responsible for excessive levels of greenhouse gas emissions could be liable to pay $170tn in climate reparations by 2050 to ensure targets to curtail climate breakdown are met, a new study calculates. The proposed compensation, which amounts to almost $6tn annually, would be paid to historically low-polluting developing countries that must transition away from fossil fuels despite not having yet used their “fair share” of the global carbon budget, according to the analysis published in the journal Nature Sustainability. The compensation system is based on the idea that the atmosphere is a commons, a natural resource for everyone which has not been used equitably. It is the first scheme where wealthy countries historically responsible for excessive or unjust greenhouse emissions including the UK, US, Germany, Japan and Russia, are held liable to compensate countries which have contributed the least to global heating – but must decarbonise their economies by 2050 if we are to keep global heating below 1.5C and avert the most catastrophic climate breakdown. In this ambitious scenario, the study found that 55 countries including most of sub-Saharan Africa and India would have to sacrifice more than 75% of their fair share of the carbon budget. On the other hand, the UK has used 2.5 times its fair allocation, and would be liable to pay $7.7tn for its excessive emissions by 2050. The US has used more than four times its fair share to become the richest country in the world, and would be responsible for $80tn in reparations under this scheme. “It is a matter of climate justice that if we are asking nations to rapidly decarbonise their economies, even though they hold no responsibility for the excess emissions that are destabilising the climate, then they should be compensated for this unfair burden,” said Andrew Fanning, lead author and visiting research fellow at the University of Leeds’ Sustainability Research Institute. In order to keep global heating to below 1.5C, the total global carbon budget starting from 1960 is 1.8tn tonnes of CO2 or equivalent greenhouse gases, according to Intergovernmental Panel on Climate Change (IPCC) figures. Using population size, researchers calculated how much 168 countries have over- or under-used their fair share of the global carbon budget since 1960. Some countries were within their fair share allocation, while the global north (the US, Europe, Canada, Australia, New Zealand, Japan, and Israel) have already massively overshot their fair share of the atmospheric commons. Almost 90% of the excess emissions are down to the wealthy global north, while the remainder are from high-emitting countries in the global south, especially oil-rich states such as Saudi Arabia and United Arab Emirates. Five low-emitting countries with large populations – India, Indonesia, Pakistan, Nigeria and China (currently the world’s largest emitter) – would be entitled to receive $102tn, for sacrificing their fair share of the carbon budget in the zero emissions scenario. “Climate change reflects clear patterns of atmospheric colonisation,” said Jason Hickel, co-author and professor at the Institute of Environmental Science and Technology at the Autonomous University of Barcelona. “Responsibility for excess emissions is largely held by the wealthy classes [within nations] who have very high consumption and who wield disproportionate power over production and national policy. 
They are the ones who must bear the costs of compensation.” Demands are mounting to compensate climate-vulnerable countries for the threats they face due to the excessive greenhouse gas emissions of others, as part of a broader climate justice movement to make polluters pay for the climate crisis and green energy transition. Last year at the UN’s Cop27 summit, states agreed to establish a “loss and damage” financing fund to provide funds to poor countries for the irreparable and unavoidable economic and non-economic costs of extreme weather events and slow-onset climate disasters such as sea level rise and melting glaciers. According to research published last month, the world’s top oil, gas and coal companies are responsible for $5.4tn (£4.3tn) in drought, wildfires, sea level rise, and melting glaciers among other climate catastrophes expected between 2025 and 2050. This was the first study quantifying the economic burden caused by individual companies that have extracted – and continue to extract – wealth from planet heating fossil fuels.
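A simplified, hypothetical version of the fair-share accounting the study describes looks like this: a country's fair share of the 1.8tn-tonne budget is taken as proportional to its share of cumulative world population since 1960, and its overshoot is its cumulative emissions minus that share. The country figures below are invented for illustration and are not the paper's numbers.

```python
# Hypothetical sketch of population-based fair shares of a global carbon budget.
# The budget figure comes from the article; the country numbers are invented placeholders.

GLOBAL_BUDGET_GT = 1800.0  # 1.8tn tonnes CO2-equivalent since 1960, in Gt

countries = {
    # name: (share of cumulative world population, cumulative emissions in Gt since 1960)
    "country_a": (0.045, 300.0),   # high emitter, small population share
    "country_b": (0.175, 60.0),    # low emitter, large population share
}

for name, (pop_share, emitted_gt) in countries.items():
    fair_share = pop_share * GLOBAL_BUDGET_GT
    overshoot = emitted_gt - fair_share      # positive means budget overshoot
    status = "overshoot" if overshoot > 0 else "undershoot"
    print(f"{name}: fair share {fair_share:.0f} Gt, {status} of {abs(overshoot):.0f} Gt")
```

In the study's framework, countries with an overshoot would owe compensation in proportion to their excess, while countries that forgo part of their fair share to meet global targets would receive it.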
Environmental Science
Study: Streamflow timing in Pakistan will become three times faster by end of century

Nature has remained in balance for a long time, but climate change due to modern human activities is disrupting the balance of the natural system. The disruption makes it more difficult for humans—who must work with nature to survive—to predict the future. Moreover, developing countries with limited understanding of and preparation for climate change are more vulnerable to climate change-driven social and economic damage. Recently, a research team from POSTECH corrected the biases of future regional climate model projection data to better understand seasonal changes in the streamflow regime of Pakistan's four main rivers in the mid and late 21st century. POSTECH's research team, led by Professor Jonghun Kam (Division of Environmental Science and Engineering) and post-graduate researcher Shahid Ali, assessed the past and future changes in streamflow timing of the four major river basins of Pakistan: the Upper Indus, Kabul, Jhelum, and Chenab River basins. The research team used observational data and bias-corrected hydrological projections. This study was recently published in the Journal of Hydrology. Hydrology mainly deals with the cycle of water on Earth and the use of surface water. Because the complexity of natural water flow cannot be reproduced in the lab, the science relies on assumptions and on statistical and mathematical techniques to study precipitation, runoff, infiltration, and streamflow, and to provide basic knowledge and data for the use of water resources. However, climate change and human activities are changing the water cycle itself, making it difficult to solve future problems with past knowledge and data. Pakistan is a representative example of a country suffering severe seasonal changes in streamflow, causing a lack of available water resources for agriculture. To make matters worse, the Indus River inundated downstream regions of Pakistan last year, with catastrophic effects on regional communities. However, understanding of future seasonal changes in streamflow over Pakistan remains limited. The researchers simulated the VIC river routing model forced by surface and runoff data from six regional climate models. They then corrected the minimum and seasonality bias against observational records. To quantify seasonal changes in the hydrologic regime, they computed half of the annual cumulative streamflows (HSCs) and the dates of reaching the first quartile (25th percentile), that is, center-of-volume dates (CVDs), from observed and bias-corrected simulated streamflow data. Observational records (1962–2019) showed a significant decreasing trend in CVD, by a range between -4.5 and -12.6 days, across three of the river basins, the exception being the Chenab River basin. Bias-corrected hydrologic projections showed decreased CVD by −4.2 to −6.3 days during the observational period. Across the four study river basins, the decrease in CVDs ranges from −5 to −20 days in the near future (the 2050 to 2059 average) and from −11 to −37 days in the far future (the 2090 to 2099 average). Professor Kam explained, "In late winter, accelerated snow melting processes over mountainous regions in Pakistan can cause changes in available water resources for crop planting in spring. This study highlights diversity in the hydrologic response to a similar magnitude of surface warming in the future climate projection."
He added, "there is an urgent need to prepare basin-specific water resources management and policies in order to adapt to climate change." More information: Shahid Ali et al, Past and future changes toward earlier timing of streamflow over Pakistan from bias-corrected regional climate projections (1962–2099), Journal of Hydrology (2022). DOI: 10.1016/j.jhydrol.2022.128959 Provided by Pohang University of Science & Technology (POSTECH)
Environmental Science
BOSTON -- When it comes to hurricanes, New England can't compete with Florida or the Caribbean. But scientists said Friday the arrival of storms like Hurricane Lee this weekend could become more common in the region as the planet warms, including in places such as the Gulf of Maine. Lee remained a Category 1 hurricane late Friday night with sustained winds of 80 mph (128 kph). The storm was forecast to brush the New England coast before making landfall later Saturday in the Canadian province of Nova Scotia. States of emergency were declared for Massachusetts and Maine. One recent study found climate change could result in hurricanes expanding their reach more often into mid-latitude regions, which include New York, Boston and even Beijing. The study says the factors include warmer sea surface temperatures in these regions and the shifting and weakening of the jet streams, which are the strong bands of air currents encircling the planet in both hemispheres. “These jet stream changes combined with the warmer ocean temperatures are making the mid-latitude more favorable to hurricanes,” said Joshua Studholme, a Yale University physicist and the study's lead author. “Ultimately meaning that these regions are likely to see more storm formation, intensification and persistence.” Another recent study simulated tropical cyclone tracks from pre-industrial times, modern times and a future with higher emissions. It found hurricanes will move north and east in the Atlantic. The research also found hurricanes would track closer to the coasts including Boston, New York and Norfolk, Virginia, and more likely form along the Southeast coast, giving New Englanders less time to prepare. “We also found that hurricanes are more likely to move most slowly when they’re traveling along the U.S. East Coast, which causes their impacts to last longer and increase that duration of dealing with winds and storm surge,” said Andra Garner, lead study author and an assistant professor of environmental science at Rowan University in New Jersey. Garner noted the study results included New York City and Boston. Kerry Emanuel, a professor emeritus of atmospheric science at the Massachusetts Institute of Technology, who has long studied the physics of hurricanes, said parts of Maine will see more frequent hurricanes and heavier rains with each storm. “We expect to see more hurricanes than we’ve seen in the last few decades. They should produce more rain and more wind," said Emanuel, who lives in Maine. “We certainly have seen up here an increase in the destructiveness of winter storms, which is a very different beast. I would say the bulk of the evidence, the weight of the evidence, is that we’ll see more rain and more wind from these storms.” One reason for the trend is the region's warming waters. The Gulf of Maine, for example, is warming faster than the vast majority of the world’s oceans. In 2022, the gulf recorded the second-warmest year on record, beating the old record by less than half a degree Fahrenheit. The average sea surface temperature was 53.66 degrees Fahrenheit (12 degrees Celsius), more than 3.7 degrees above the 40-year average, scientists said. “Certainly, when we think about storms forming and traveling at more northern latitudes, sea surface temperature comes into play a lot because hurricanes need those really warm ocean waters to fuel them,” Garner said. "And if those warm ocean waters exist at higher latitudes than they used to, it makes it more possible for storms to move in those areas." 
While hurricanes and tropical storms are uncommon in New England, the region has seen its share of violent weather events. The Great New England Hurricane of 1938 brought gusts as high as 186 mph (300 kph) and sustained winds of 121 mph (195 kph) at Massachusetts' Blue Hill Observatory. Hurricanes Carol and Edna hit the region 11 days apart in 1954, and Hurricane Bob decimated Block Island in 1991. Superstorm Sandy in 2012 caused damage across more than a dozen states and wreaked havoc in the Northeast when it made landfall near Atlantic City, New Jersey. Tropical Storm Irene killed six people in Vermont in August 2011, washing homes off their foundations and damaging or destroying more than 200 bridges and 500 miles (805 kilometers) of highway. Experts warn that policymakers need to take projections of increased hurricane activity seriously and start upgrading their dams, roadways and neighborhoods for these future storms. "We definitely in our coastal communities need to be thinking about how can we make our shorelines more resilient," Garner said. "Do we need to change," she said, "where those flood zones are located, kind of thinking about how to perhaps protect the shorelines and think about solutions for that and adaptation kinds of things?" Those making policy also can implement measures to keep emissions down so the worst effects of climate change don't materialize, Garner said. ___ Associated Press climate and environmental coverage receives support from several private foundations. The AP is solely responsible for all content.
Environmental Science
The parkway in front of Marco De La Rosa's home remains bare. There isn't a sapling to bloom in spring or a shade tree to temper the summer heat along this stretch of seven properties in a row in Gage Park, a predominantly Hispanic neighborhood on the Southwest Side. De La Rosa tried to change that. More than 2 ½ years ago, the former environmental science student asked the city to plant a tree. He's still waiting. "I feel disappointed," he told the Tribune. "But I also don't feel surprised."

Over the past decade the city has backtracked on ambitious goals made years ago to provide residents with trees, particularly on the South and West sides where researchers say trees are needed the most, a Tribune investigation found.

Marco De La Rosa stands in a treeless stretch outside his home in the Gage Park neighborhood of Chicago on May 22, 2022. De La Rosa requested a tree from the city more than 2 1/2 years ago. His request still hasn't been filled. (Raquel Zaldívar/Chicago Tribune)

The failures come as research shows trees blunt the warmer, wetter effects of climate change in the Great Lakes region. Fewer trees in neighborhoods can mean hotter temperatures, more flooding, dirtier air and higher electric bills — all of which can affect mental and physical health. The city's half million street trees, those often found on the strip of grass between roadways and sidewalks, make up a part of the overall canopy coverage, along with trees in parks and yards. How the city manages these trees can directly affect residents' quality of life.

The Tribune analyzed the rate at which street trees were planted per mile of streets from 2011 through 2021, finding higher planting rates in wealthier, whiter neighborhoods deemed less of a priority. In Gage Park, a working-class neighborhood that's become home to thousands of Latino immigrants, this translated to fewer than 300 street trees planted during that time period. Yet the city planted more than 850 trees in a similar-sized community on the North Side: North Center. And Edgewater, with fewer miles of streets than Gage Park, saw more than 1,000 trees planted in that time. The Tribune studied data provided by the city's forestry and transportation departments on street tree plantings and removals, then compared that to where federal and local studies had directed the city to prioritize plantings.

Among the Tribune's findings:

Despite a public push a generation ago to plant more trees, Chicago parkways have lost more greenery in the past decade than they've gained. For every tree planted, the city has removed about two trees. While a destructive pest killed off tens of thousands of trees, the city drastically cut back on tree plantings, from 17,000 a year in the late 1990s to a few thousand annually in recent years.

When planting trees, the city failed to follow research that identified vulnerable areas or areas that had lost the most trees. Instead, a Tribune analysis found a greater share of trees went to community areas with higher income, education and employment levels, even if they were deemed a lower priority for planting, further contributing to the inequitable canopy.

The city has pushed residents to use 311 to request a tree — a system described as first-come, first-served that places responsibility on residents to know it exists and then work through a bureaucracy that inexplicably serves some people faster than others.
Some residents are still waiting on requests made in 2019.

Mayor Lori Lightfoot has committed $46 million from the pandemic recovery plan to plant 75,000 trees in the next five years as part of the “Our Roots Chicago” initiative, a pace of roughly 15,000 trees a year. But the city has acknowledged that it would take at least that many trees planted annually over 10 years — and potentially thousands more — to make up for the past decade’s losses.

Acknowledging enduring problems in the city’s forestry efforts, the Lightfoot administration announced last fall that it would prioritize planting trees in “historically marginalized and underserved communities, equitably conveying ecosystem benefits to communities disproportionately impacted by the climate crisis.”

With the season beginning on Arbor Day, the city has so far planted about 2,000 trees, according to a spokesperson for the Department of Streets and Sanitation, which oversees the forestry bureau.

The city’s efforts come as tree equity has grown into a national issue.

“It’s become undeniable,” said Ian Leahy, vice president of urban forestry for the national nonprofit American Forests. “Trees have gone from nice-to-have background to life-changing infrastructure.”

Chicago’s second Mayor Daley came to office in 1989 with a vision of trees.

By then, the city had been losing thousands a year as a result of killer disease, harsh conditions and poor care. Some residents simply wanted trees gone, viewing them as a nuisance even if they were healthy.

Daley didn’t like the concrete, said Edith Makra, then an under-30 arborist who was tasked with carrying out the early years of his tree agenda.

“He said, when the kids go to school or when you walk to the train, all you see is concrete,” said Makra, who today is the director of environmental initiatives for an organization that works on regional public policies. “And it shouldn’t be that way. We need trees.”

Reactions to Daley’s push were sometimes incredulous, she said. “He likes what? Trees?” U.S. Rep. Bobby Rush at the time called it a “poorly arranged rendition of Johnny Appleseed.”

Mayor Richard M. Daley, second from right, Chicago Park District board members and Ald. Eugene Schulter plant a flowering crab tree in Grant Park in celebration of Arbor Day on April 28, 1989. (Anne Cusack / Chicago Tribune)

The Daley administration wasn’t above cutting down a few trees for political gain but worked to change the way trees were planted and removed, creating policy to boost plantings as part of construction and capping unlimited aldermanic removal requests.

Under Daley, the city studied its trees. A groundbreaking 1994 project with the U.S. Forest Service quantified the climate, pollution and energy benefits of Chicago trees, down to the dollar, in an effort researchers saw as positioning the city as a “green pioneer” with the ability to strategically plant trees, according to a retelling of the undertaking in Jill Jonnes’ book “Urban Forests.”

Some findings were not flattering: Chicago’s tree canopy cover was estimated at 11%.

From the time Daley took office through 2010, nearly 300,000 street trees were planted for a net gain of nearly 70,000 street trees after removals, according to city records.
Around the time he was on his way out, the total street tree tally was estimated to be about 580,000.

But by then, the Great Recession had arrived, as well as a new invasive pest.

Tens of thousands of ash trees, which made up nearly a fifth of street trees, were removed as a result of the tree-killing emerald ash borer beetle, while forestry worked to treat those that could be saved.

And faced with budget decisions in a city with more immediate problems, some aldermen prioritized other efforts over trees.

By 2012 — Mayor Rahm Emanuel’s first year in office — the overall forestry budget was down to what it was two decades prior, according to city records. The planting-specific budget plummeted, from $3.5 million in 2008 to $173,500 in 2013.

From 2011 through 2021, there was a net loss of at least 69,000 street trees, according to city records.

Among the more than 140,000 trees felled by the city in the past decade was one outside the Gage Park home of Diana Mendez.

Mendez came home from work to find a stump where a mature tree had stood hours earlier. She still doesn’t know why the tree was removed.

She later got an oak tree planted through a nonprofit, but it wasn’t enough to stem the losses she saw across her neighborhood.

“If you go through the area, you’re going to see trees have been taken down and never replaced,” Mendez said. “I feel like because they’re mostly Latino areas and African American areas, nobody really cares.”

Gage Park residents Diana Mendez, right, and Mauro Hernandez, center, step on the soil around a recently planted tree on April 30, 2022, during a tree planting event organized by Openlands. (Raquel Zaldívar/Chicago Tribune)

In recent years, city records didn’t say why thousands of trees were removed. Along with the ash borer, the city has blamed the loss of trees on extreme weather due to climate change. For those with a listed reason, the largest group was chopped down because those trees were dead, diseased or damaged, followed by water department work.

Each loss comes at a cost. Today, removing a tree with a trunk 2 feet in diameter costs about $1,000, while planting costs about half that, according to the city. And saplings, if they survive the crucial first years, won’t be an equal replacement for mature, broad-canopied trees for decades.

The lack of plantings and wide-ranging removals came as the city’s overall canopy coverage thinned. Unlike most of the collar counties, which saw modest increases in canopy cover, Chicago’s decreased by 3 percentage points from 2010 through 2020, from 19% of the city to 16%, according to the last census from the Chicago Region Trees Initiative, established by the Morton Arboretum.

Chicago lags behind other cities — large and small — in canopy cover. New York has increased its cover in recent years to 22%.
Pittsburgh has canopy cover around 40%, with a goal to increase that to 60% by 2030.

The census report said the Chicago drop was likely due to the loss of mature ash trees and younger trees unable to provide much cover — or from not replacing lost trees in the first place.

The losses came as researchers learned how important street trees are in cities — in aiding human health and guarding against climate extremes.

“If you can put something in the ground, and it protects you from the health impacts of extreme heat, and it sequesters carbon, and it can maybe actually reduce the overall rate of warming — then that sounds wonderful,” said Trent Ford, the state climatologist.

Trees can soak up water dumped during intense storms, which already disproportionately affect communities on the South and West sides with basement flooding. And they can lower neighborhood temperatures.

By the end of the century, a summer in Chicago could feel like one today in Mesquite, Texas, with average summer highs more than 10 degrees warmer than they are now.

And, for now, some neighborhoods are equipped with fewer trees to endure it.

Monica SanMiguel lives in Pilsen, a largely Latino neighborhood with vaulted sidewalks and a long history battling industrial pollution. It’s part of Chicago’s Lower West Side, where the canopy covers just 7% of the community area.

SanMiguel’s interest in trees was inspired by her mother’s appreciation of nature. It’s now sustained by her own worries about the world that awaits her kids.

“We should have the broadest canopy in these neighborhoods because you’ve got an asphalt plant spewing toxins down the street,” she said. “I’m raising my kids here, and I want them to be able to enjoy the same quality of life someone 2 miles north of us gets.”

But in Chicago, like the rest of the country, trees are tied to deeper societal problems involving race and class.

Researchers have documented how racial discrimination in mortgage lending led to “redlined” areas that, decades later, have significantly less canopy coverage than whiter communities, including in Chicago. Another study found low-income blocks in urban areas on average had 15% less tree cover and were 2.7 degrees warmer than wealthier counterparts.

A street in the Chicago Lawn neighborhood shows a stretch with no trees on May 31, 2022. Despite researchers saying the community should be among the most prioritized for trees, it saw among the lowest rates of street trees planted by the city in the past decade. (Raquel Zaldívar/Chicago Tribune)

Through the years, Chicago gained tools to confirm what urban foresters and anyone taking a drive through different neighborhoods could easily see — environmental infrastructure wasn’t equitably distributed.

Under Daley, although equity wasn’t yet part of the conversation, the city began studying where best to plant trees, focusing on an unequal distribution of hot spots extending out from the Loop to the Northwest and Southwest sides, where clusters of paved surfaces led to pockets of higher temperatures. The city later said it planted thousands of trees taking into account this urban heat island effect and low canopy cover.

By 2010, as part of a study with the U.S. Forest Service, the city had identified priority areas for planting, based on population density, available space and low canopy coverage.
In essence, the study offered a road map for the next decade of where Chicago should prioritize tree planting, down to the census tract.

But the city didn’t use it.

Instead, a Tribune analysis of planting locations found that efforts didn’t lead to — or even appear to work toward — a meaningful shift in how trees were distributed where people live and work.

According to a Tribune analysis of city trees planted per mile of streets in each community, Edgewater, shown here on May 31, 2022, has one of the highest rates of trees planted. In that neighborhood, the city planted 28.5 trees per mile of streets. Over the past 10 years, this translates into more than 1,000 trees. (Erin Hooley and Raquel Zaldívar/Chicago Tribune)

Because the city’s 77 community areas vary in size and the amount of space available for street trees, the Tribune analyzed trees planted per mile of streets in each community, and found the highest rates of trees planted were in Edgewater, Rogers Park, Lakeview and Edison Park — all North Side communities, and all ranking in the upper half of measurements of residents’ income, education and employment levels. In Edgewater, for example, the city planted 28.5 trees per mile of streets. Over the past 10 years, this translates into more than 1,000 trees.

Some of the lowest rates of trees planted were in North Lawndale, Burnside, Pullman, Ashburn and Riverdale — nearly all majority Black neighborhoods — and nearly all of them ranking in the bottom half for income, education and employment. North Lawndale’s level of planting, for example, was 4.1 new trees per mile of streets, or just a seventh of Edgewater’s rate.

Pastor Reshorna Fitzpatrick, of Stone Temple Baptist Church in North Lawndale, pointed out some of the unshaded areas on a recent drive around the neighborhood. On the nearly 90-degree day, residents were out and about, biking and walking the streets.

Pastor Reshorna Fitzpatrick, of Stone Temple Baptist Church, right, and Trinity Pierce, a stewardship manager from the Chicago Region Trees Initiative and The Morton Arboretum, left, add plants to a planter box near a community garden across the street from the church in the North Lawndale neighborhood of Chicago on May 31, 2022. (Raquel Zaldívar/Chicago Tribune)

“Some places are very, very hot and there’s no shade because there’s no trees,” Fitzpatrick said. “This community deserves to have trees.”

Fitzpatrick likened the treeless stretches to gaps in a toothy smile — you can tell there’s something missing.

“It looks and feels like a concrete jungle,” Fitzpatrick said. “It just doesn’t look alive.”

In the past decade, some communities lost more trees than others, but the city didn’t appear to target more trees to places that lost the most, the Tribune found. Or to places where they could make the most difference for residents.

The Chicago Region Trees Initiative — the Morton Arboretum partnership including city agencies — calculated priority rankings in 2016 for plantings for each community, based on air pollution, average temperature, flood susceptibility and vulnerability of its residents.

The Tribune compared that to socioeconomic rankings for residents’ income, education and employment levels of each community.
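The Tribune’s normalization step, trees planted per mile of streets compared across priority and socioeconomic rankings, is straightforward to reproduce in outline. Below is a minimal Python sketch of that kind of analysis; the file names and column names are hypothetical placeholders, not the Tribune’s actual data.

```python
# Sketch of a trees-per-street-mile equity analysis. All file and column
# names below are hypothetical placeholders, not the Tribune's data.
import pandas as pd

plantings = pd.read_csv("street_tree_plantings_2011_2021.csv")  # one row per planted tree
streets = pd.read_csv("street_miles_by_community.csv")          # community, street_miles
rankings = pd.read_csv("community_rankings.csv")                # community, priority_rank, ses_rank

# Trees planted per mile of streets, by community area
rate = (
    plantings.groupby("community").size().rename("trees_planted").reset_index()
    .merge(streets, on="community")
)
rate["trees_per_mile"] = rate["trees_planted"] / rate["street_miles"]

# Median planting rate in each priority/socioeconomic quadrant
df = rate.merge(rankings, on="community")
df["high_priority"] = df["priority_rank"] <= df["priority_rank"].median()
df["high_ses"] = df["ses_rank"] <= df["ses_rank"].median()
print(df.groupby(["high_priority", "high_ses"])["trees_per_mile"].median())
```

A pattern like the one the Tribune describes would show up as a higher median in the low-priority, high-socioeconomic cell than in the high-priority, low-socioeconomic one.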
Reporters found that communities less prioritized for trees — but with higher socioeconomic rank — tended to get more trees than communities that were more prioritized for trees but had lower socioeconomic rank.

For example, low-priority, high-socioeconomic communities saw a median of nearly 15 trees planted per street mile. Compare that to high-priority but low-socioeconomic communities, where the median rate was about half that, despite the greater need for trees.

Every North Side community along the lake had at least 15 trees planted per mile of streets, while the only community on the South or West sides to achieve that was the wealthier enclave of Hyde Park.

The city planted 16 trees per street mile there, or roughly triple the rate of what the city planted in Gage Park, among the communities with lower socioeconomic status and also among the most prioritized for trees.

There have been aldermen who prioritized trees more than others, and residents who didn’t want them, advocates and former officials say.

Sometimes that’s because residents buy into misconceptions about trees causing pipe problems. Or they don’t want to deal with maintenance issues — a fair concern in Chicago, where haphazard trimming based on the 311 request system has been criticized for more than a decade.

Poor maintenance doesn’t inspire affection for trees and can be a reason why residents turn them down, researchers have found, along with overall distrust of city government.

Fitzpatrick, the North Lawndale pastor, is used to fielding these concerns, including who’s going to care for trees or safety issues they could cause. But it comes back to tree education, she said.

Pastor Reshorna Fitzpatrick, of Stone Temple Baptist Church, near a community garden across the street from her church in the North Lawndale neighborhood on May 31, 2022. (Raquel Zaldívar/Chicago Tribune)

“Someone said, it’ll just be another place for someone to hide,” Fitzpatrick said. “You can say it, but what stats do you have to show that that really happens? Because we can give you stats on how healthy you can be if you have a tree in front of your house.”

Since 1999, residents have been able to turn to the 311 system for trees — if they know to do so.

Residents can also hire a contractor to plant on the parkway or plant a tree through the nonprofit Openlands. But the city says it has used 311 as the primary way to identify street tree planting locations.

Based on city foresters’ experience, the city said trees are most successful where they’ve been requested.

Two years after their 311 request, Gage Park residents Stefany Barajas and her mother, Inocencia Vargas, had a bur oak planted through the city. Today it reaches the second story.

If it survives the coming years, the native tree will one day swell dozens of feet toward the sky, a burst of yellow-brown in fall, its fringed acorns providing snacks for birds and squirrels in winter, and its promised cover of dark green leaves returning every spring.

“I just want to see it grow,” Barajas said.

Gage Park residents Stefany Barajas, right, her mother Inocencia Vargas, left, and sister Issabella Barajas, center, peer through the bur oak tree the city planted outside their home, May 22, 2022. The tree was planted two years after their 311 request. (Raquel Zaldívar/Chicago Tribune)

Marco De La Rosa cleans weeds from the spot in front of his home where a tree was removed years ago in the Gage Park neighborhood of Chicago, May 22, 2022. De La Rosa requested a tree from the city more than 2 1/2 years ago, but his request hasn’t been filled. (Raquel Zaldívar/Chicago Tribune)
Other Gage Park residents told the Tribune they haven’t been so lucky, including De La Rosa, the former environmental science student still waiting on a tree.

His mother had city trees removed, fearing pipe damage. But De La Rosa learned the benefits of trees from his studies and shared them with her. He filed his request.

“I went ahead and did it for the sake of the environment,” he said.

He questions whether the request would have been fulfilled sooner if he lived in a different neighborhood.

Most 311 requests, the city has acknowledged, have come from affluent communities, particularly on the North Side. But whether they live on the North Side or the South Side, the Tribune found residents facing lengthy delays, with tree requests in limbo and a system that can make it inherently harder for people who lack the time, know-how or language skills to navigate it.

The estimated wait for a tree planted via 311 is 300 days. But some residents have trees planted within months, while others wait years. The city says the backlog of open planting requests is more than 10,000, although some may have been completed but not yet inspected or updated in the 311 system.

When asked why there was a difference in response times, the city said several factors are at play, including how quickly the forestry bureau can inspect the location, whether planting is possible there, the kind of tree requested by the resident and when the planting permit is issued. Beyond that point, plantings are assigned in bulk to contractors, who determine the order in which trees are planted, the city said.

A city worker trims trees near North Humboldt Boulevard and West Cortland Street on May 2, 2022, in the Logan Square neighborhood of Chicago. (Raquel Zaldívar/Chicago Tribune)

The 311 process has also involved multiple steps beyond the initial ask. The Bureau of Forestry processed requests, the city said, and then inspected sites. From there, if tree planting was possible, a notice was left on doors and someone had to call to confirm the planting.

Confirmed requests were scheduled with a contractor — planting is contracted out and supervised by the Bureau of Forestry, while removals and trimming are done by city crews.

Many requests appear stuck in the confirmation step, which the city also acknowledged as a factor in response times.

Some residents who want trees told the Tribune they never received the confirmation notice.

Others said they had trees planted without confirming, such as Arasmo Delgado, a Gage Park resident who works in private tree care and made a request to replace trees that died. He had white oaks planted in less than a year.

“I know that my trees are slow growers, but I’m at least happy that I got the trees,” Delgado said. “Plants make me feel part of nature. I’ve been in the city all my life.”

He suspects others may struggle because they don’t speak English. He has stepped in to make requests for family members who speak Spanish. He also questioned whether renters might be less likely to request trees — or whether people know 311 is an option.

Mendez, whose removed tree was replaced by Openlands, said she wasn’t aware tree requests could be made through 311.

“That doesn’t surprise me,” Mendez said, while participating with her son in a neighborhood planting around a treeless parking lot.
“That we have access to resources that we don’t even know exist.”

Openlands volunteer Emmanuel Reyes, right, takes a photo of volunteers Nora Aguilar, left, and Juan Carlos Aguilar, second from right, with their children Mathew Aguilar, 16, Isaac Aguilar, 20, and Sara Aguilar, 11, on April 30, 2022. The family took part in the tree planting event organized by Openlands and St. Rita’s parish. (Raquel Zaldívar/Chicago Tribune)

Advocates have criticized the 311 system as creating barriers to planting and have warned that getting buy-in from residents who don’t trust the system is a challenge.

“If somebody tells you, call 311 for trees — What? Why would I do that?” said Suzanne Malec-McKenna, the last environment commissioner under Daley. “With all the other issues, when I can’t get my potholes fixed or get the rats taken care of, that’s not going to be at the top of my list.”

After questions in recent months from the Tribune, the city recently said it would begin “flipping the script” and switch to a new approach. Residents can still request a tree through 311, but they no longer have to do anything beyond the initial request, unless they want to call to opt out of a planting within 60 days.

Two summers ago, Lightfoot took a stroll in West Garfield Park.

In an email to commissioners, Lightfoot said she walked along a stretch where something “really hit home.”

“There are no trees along this heavily traveled boulevard, none,” Lightfoot said in the email, obtained by the Tribune in an open records request.

“It was an utterly depressing sight,” she said.

Today, dozens of young saplings dot the neighborhood, their tags still attached, some dated fall 2020.

To address tree inequities, Lightfoot’s administration isn’t planning on having the mayor stroll down every city block.

The city’s approach will “allow for us to plant trees where they have the greatest impact and to work directly with community stewards in those areas to help maintain the healthy tree canopy,” said chief sustainability officer Angela Tovar.

The city previously talked trees with aldermen at a few dozen community meetings a year, but starting this month a tree ambassador pilot program will train residents to scout possible tree sites, with training in Little Village and South Lawndale followed by North Lawndale. The program will then expand into other census tracts in priority areas, the city said.

In North Lawndale, efforts may be helped by longtime tree stewards, including resident Mamie Gray, who has cared for neighborhood trees in the ground, watering them throughout the seasons.

Gray said she thinks of trees as ancient ancestors.

“When I look at a tree it’s almost like looking at a person,” Gray said.

Fitzpatrick, the North Lawndale pastor, highlighted some recently planted trees going strong on the neighborhood drive, including an Arbor Day sapling that is part of the 75,000 push. Fitzpatrick helped plant that one, and she took pride in checking in.

“You take ownership when you help plant it and then you water it and see it grow,” she said.

Trinity Pierce, left, a stewardship manager from the Chicago Region Trees Initiative and The Morton Arboretum, and Pastor Reshorna Fitzpatrick, of Stone Temple Baptist Church, right, visit a recently planted black gum tupelo tree in the North Lawndale neighborhood of Chicago on May 31, 2022. (Raquel Zaldívar/Chicago Tribune)
The city has created a new website for its tree effort, while the health department has also developed a new tool to identify where trees should be planted, taking into account data related to canopy cover, air quality, temperature, economic hardship and other socioeconomic factors.

The city said planting locations are tracked through the 311 system but did not provide additional information about where the first trees of the 75,000 push have been planted.

The city’s $7.2 million planting budget more than doubles the highest budgets in recent years. And there are additional planting funds for the city’s transportation department, which plants along arterial roads and has shifted focus to disinvested communities, the city says.

Although advocates say the city is moving in the right direction, they’re still waiting on some recommendations made years ago.

The city is “exploring options” for a tree inventory, referred to as a “vital” tool by city foresters and recommended in a city plan more than a decade ago. An urban forestry board, suggested years ago by advocates as a way to connect tree priorities across mayoral administrations and ease communication between city departments, and approved last summer, is not yet in place. The city says it’s switching back to a more efficient grid trimming system, also recommended more than a decade ago, and has more than doubled crews in preparation, but the change is still in progress.

During her campaign, Lightfoot promised to bring back the environment department, dismantled under Emanuel, but it hasn’t happened yet.

More challenges lie ahead. Thousands of residents are waiting on a backlog of tree plantings, removals and trimming requests. A large-scale replacement of water mains and lead pipes is underway, meaning the potential for tree loss. And advocates are eager to see whether the city will start treating ash trees again — a practice it gave up years ago — before thousands more die.

There’s also the question of what will happen to planting efforts in Chicago after the 75,000 push ends and federal funding comes to an end.

“What will happen in five years?” Malec-McKenna said. “Who’s the Lorax in this situation? We don’t have a Lorax. And that’s very sad because there’s a lot of great people who care about this stuff and they only have so much time and resources and support.”

Gage Park resident Sam Nava, from left, Openlands apprentice arborist Ray Bizot, Brighton Park resident Matías Oviedo-Fong and Gage Park resident Marta Nava finish planting a tree in front of the Nava home in Gage Park on April 30, 2022. (Raquel Zaldívar/Chicago Tribune)

Cities across the country are facing their own challenges as they chart paths toward tree equity. Philadelphia has a goal to increase canopy cover to 30% in all neighborhoods by 2025. Phoenix is creating “cool corridors” in a city where wealthier and whiter districts enjoy more shade. Los Angeles has a goal to grow the canopy 50% where it’s most needed by 2028.

Success should be defined by working with residents on their terms, some advocates say.
That could mean maintaining existing trees, more jobs in the green industry or protecting against gentrification as a result of more greenery — a concern of residents that has played out in rising property values along The 606 in Chicago.

The city has not yet shared specific goals around increasing canopy cover but plans to agree on benchmarks with priority communities in the initiative’s next phase.

Some residents are eager to move forward, such as SanMiguel in Pilsen.

In a city with no lack of immediate problems to address, worrying about a tree “can almost be a privilege,” she said.

But planting trees seems like one thing that can be done to make the city more equitable.

“We actually have something we can do here,” she said. “Why aren’t we doing more?”

Chicago Tribune’s Gregory Pratt contributed.
Environmental Science
Researchers find Asian Americans have significantly higher exposure to 'toxic forever' chemicals

Asian Americans have significantly higher exposure than other ethnic or racial groups to PFAS, a family of thousands of synthetic chemicals also known as "toxic forever" chemicals, Mount Sinai-led researchers report. People frequently encounter PFAS (per- and polyfluoroalkyl substances) in everyday life, and these exposures carry potentially adverse health impacts, according to the study published in Environmental Science and Technology, in the special issue "Data Science for Advancing Environmental Science, Engineering, and Technology."

The scientists estimated a person's total exposure burden to PFAS and accounted for the exposure heterogeneity (for example, different diets and behaviors) of different groups of people that could expose them to different sets of PFAS. They found that Asian Americans had significantly higher PFAS exposure than all other U.S. ethnic or racial groups, and that the median exposure score for Asian Americans was 89% higher than for non-Hispanic whites.

This is the first time that researchers have accounted for complex exposure sources of different groups of people to calculate a person's exposure burden to PFAS. To achieve this, they used advanced psychometric and data science methods called mixture item response theory. The researchers analyzed human biomonitoring data from the U.S. National Health and Nutrition Examination Survey, a representative sample of the U.S. population.

This research suggests that biomonitoring and risk assessment should consider an exposure metric that takes into account the fact that different groups of people are exposed to many different sources and patterns of PFAS. Based on these findings, the researchers believe that exposure sources, such as dietary sources and occupational exposure, may underlie the disparities in exposure burden. This will be an important topic of future work, as it is difficult to trace exposure sources of PFAS because they are so ubiquitous.

"We found that if we used a customized burden scoring approach, we could uncover some disparities in PFAS exposure burden across population sub-groups," said Shelley Liu, Ph.D., Associate Professor of Population Health Science and Policy at the Icahn School of Medicine at Mount Sinai. "These disparities are hidden if we use a one-size-fits-all approach to quantifying everyone's exposure burden. In order to advance precision environmental health, we need to optimally and equitably quantify exposure burden to PFAS mixtures, to ensure that the exposure burden metrics used are fair and informative for all people."

PFAS pollution is a major health concern, and nearly all Americans have detectable levels of PFAS chemicals in their blood. PFAS are ubiquitous, and are used in products that resist heat, oil, stains, grease and water. The Biden administration has allocated $9 billion to PFAS clean-up, and in March 2023, the Environmental Protection Agency proposed the first enforceable federal standards to regulate PFAS contamination in public drinking water.

In the future, Dr. Liu's team plans to incorporate toxicity information on each PFAS chemical into exposure burden scoring, to further evaluate disparities in toxicity-informed exposure burden in vulnerable groups and population subgroups.

Provided by The Mount Sinai Hospital
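The study's mixture item response theory method is far more sophisticated than anything that fits in a few lines, but the underlying idea of an aggregate burden score across many PFAS chemicals can be illustrated with a crude rank-based index. A sketch, assuming hypothetical NHANES-style file and column names:

```python
# Crude illustration of an exposure-burden index: the mean of rank-normalized
# serum PFAS concentrations. The real study used mixture item response theory,
# which models group-specific exposure patterns far more carefully. All file
# and column names here are hypothetical.
import pandas as pd

df = pd.read_csv("nhanes_pfas.csv")  # participant_id, race_ethnicity, PFOA, PFOS, PFHxS, PFNA
pfas_cols = ["PFOA", "PFOS", "PFHxS", "PFNA"]

# Rank-normalize each chemical to [0, 1] so no single compound dominates
df["burden_index"] = df[pfas_cols].rank(pct=True).mean(axis=1)

# Median burden by group, expressed relative to non-Hispanic whites
medians = df.groupby("race_ethnicity")["burden_index"].median()
print(((medians / medians["Non-Hispanic White"]) - 1) * 100)  # percent difference
```

A one-size-fits-all index like this is exactly what the authors argue against; their point is that a score calibrated to each group's exposure pattern reveals disparities that a single pooled metric hides.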
Environmental Science
Gas stoves in California homes are leaking cancer-causing benzene, researchers found in a new study published on Thursday, though they say more research is needed to understand how many homes have leaks.

In the study, published in Environmental Science and Technology, researchers also estimated that over 4 tons of benzene per year are being leaked into the atmosphere from outdoor pipes that deliver the gas to buildings around California — the equivalent of the benzene emissions from nearly 60,000 vehicles. And those emissions are unaccounted for by the state.

The researchers collected samples of gas from 159 homes in different regions of California and measured what types of gases were being emitted into homes when stoves were off. They found that all of the samples they tested had hazardous air pollutants, like benzene, toluene, ethylbenzene and xylene (BTEX), all of which can have adverse health effects in humans with chronic exposure or acute exposure in larger amounts.

Of most concern to the researchers was benzene, a known carcinogen that can lead to leukemia and other cancers and blood disorders, according to the National Cancer Institute.

The finding could have major implications for indoor and outdoor air quality in California, which has the second-highest level of residential natural gas use in the United States.

“What our science shows is that people in California are exposed to potentially hazardous levels of benzene from the gas that is piped into their homes,” said Drew Michanowicz, a study co-author and senior scientist at PSE Healthy Energy, an energy research and policy institute. “We hope that policymakers will consider this data when they are making policy to ensure current and future policies are health-protective in light of this new research.”

Homes in the Greater Los Angeles, North San Fernando Valley and Santa Clarita Valley areas had the highest benzene-in-gas levels. Leaks from stoves in these regions could emit enough benzene to significantly exceed the limit determined to be safe by the California Office of Environmental Health Hazard Assessment.

This finding in particular didn’t surprise residents and health care workers in the region who spoke to The Associated Press about the study. That’s because many of them experienced the largest known natural gas leak in the nation, at Aliso Canyon in 2015.

Back then, 100,000 tons of methane and other gases, including benzene, leaked from a failed well operated by Southern California Gas Co. It took nearly four months to get the leak under control, and it resulted in headaches, nausea and nose bleeds.

Dr. Jeffrey Nordella was a physician at an urgent care in the region during this time and remembers being puzzled by the variety of symptoms patients were experiencing. “I didn’t have much to offer them,” except to help them try to detox from the exposures, he said.

That was an acute exposure to a large amount of benzene, which is different from chronic exposure to smaller amounts, but “remember what the World Health Organization said: there’s no safe level of benzene,” he said.

Kyoko Hibino was one of the residents exposed to toxic air pollution as a result of the Aliso Canyon gas leak. After the leak, she started having a persistent cough and nosebleeds and eventually was diagnosed with breast cancer, which has also been linked to benzene exposure.
Her cats also started having nosebleeds, and one recently passed away from leukemia.

“I’d say let’s take this study really seriously and understand how bad (benzene exposure) is,” she said.
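The two figures reported in the study, roughly 4 tons of benzene leaked per year and an equivalence to nearly 60,000 vehicles, imply a per-vehicle benzene emission rate that can be back-calculated directly. A quick check, assuming the researchers meant US short tons:

```python
# Back-calculating the per-vehicle benzene figure implied by the study's
# comparison. Assumes US short tons (907,185 g per ton); illustrative only.
tons_leaked_per_year = 4.0
vehicles_equivalent = 60_000

grams_per_vehicle_year = tons_leaked_per_year * 907_185 / vehicles_equivalent
print(f"~{grams_per_vehicle_year:.0f} g of benzene per vehicle per year")  # ~60 g
```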
Environmental Science
Study uses pine slash to improve soil

Pine slash—a major problem after recent flooding events—could be chipped and used to rehabilitate soil, new research from the University of Canterbury and ESR suggests.

University of Canterbury Master's student Mingyuan (Kathy) Liu has been investigating the use of pine waste mixed with urea fertilizer on silt-covered soils from Canterbury and Gisborne. She has found that combining pine waste with urea is the most effective option for plant growth, compared with urea alone or with using compost or other organic matter on the soils.

Liu says that with flooding becoming increasingly common, and pine slash—a waste product from commercial forestry that can be dislodged by moving water—causing issues, this study indicates there's a real opportunity to use one challenging waste product, pine slash, to fix another problem: silt-covered soils.

"We've looked at blending pine waste into finer sawdust particles and mixing them with the soil and some fertilizer to make the soil more porous—better for water drainage and for plants to grow," Liu says.

The results in a campus greenhouse show a large increase in soil fertility, and she says field testing is now required. In the study, oats were planted in soil that had been treated with pine sawdust and fertilizer. Oats are a popular green manure that improves soil texture and increases soil organic matter.

"Oats are really helpful for stabilization of the soil structure," Liu says. "We could immediately see the difference in the crops grown in pine sawdust mixed into the soil."

Liu's supervisor, UC Science Professor Brett Robinson, says the preliminary results are exciting. "Pine slash is a current issue facing New Zealand and the rest of the world. To date, we know of no other reports detailing the rehabilitation of flood-deposited sediment using pine waste. We hope to take it to the next stage—field testing—soon," he says.

This work would be conducted in collaboration with Dr. Maria Jesus Gutierrez-Gines, a science leader at ESR (Institute of Environmental Science and Research) who has co-supervised Liu's research. ESR has supported the research to date by providing technical time and analysis and by organizing the delivery of sediment from flood-affected Gisborne.

Professor Robinson says pine contains substances that are known to inhibit plant growth, but when applied in the trials to improve the structure of the sediment or silt, it created the capacity for the soil to retain nutrients. "Essentially it acts like a sponge and breaks down to humus, which is beneficial to the soil."

Provided by University of Canterbury
Environmental Science
Using fertilisers derived from human faeces and urine can be as productive as using conventional organic ones, with no risk of transmitting disease, according to new research.

It may seem unappetising, but humans have been using human waste as a fertiliser for thousands of years because it contains the key nutrients that plants need to grow, including nitrogen, phosphorus and potassium. Ploughing human excrement – conventionally flushed down our toilets and into the sewage system – back into the soil creates a more sustainable farming system without significant drops in yield, the researchers found.

The team studied a crop of white cabbages grown 12 miles (20km) south of Berlin between June and October 2019. They tested three waste-based products: two fertilisers derived from human urine and one derived from human faeces, called “faecal compost”. The effects were compared with those of a commercial organic fertiliser, vinasse, which is made from sugar beet and is a byproduct of bioethanol production.

The lead co-author Franziska Häfner, a PhD student at the University of Hohenheim in Germany, said: “The fertilisers from nitrified human urine gave similar yields as a conventional fertiliser product, and did not show any risk regarding transmission of pathogens or pharmaceuticals.”

Urine fertiliser produced comparable, or even slightly higher, yields to those of the commercial fertiliser. According to the paper, published in Frontiers in Environmental Science, the yield for faecal compost was on average 20-30% lower. However, the faecal compost bolstered soil carbon, meaning fertility could be maintained long term. As a result, the most sustainable option is mixing urine fertiliser and faecal compost together, the researchers suggested, producing yields on average 5-10% lower than commercial fertilisers.

Experiments on the human waste digestate in progress at the University of Hohenheim. Photograph: Franziska Häfner/University of Hohenheim

Researchers tested the human waste fertilisers against organic fertilisers instead of conventional synthetic fertilisers because they say there are many reasons why we need to shift away from synthetic ones, given the damage they do to the environment, so they did not want to revert to them as the standard in the experiment. Yield from organic fertiliser is estimated to be about 20% lower.

Synthetic fertilisers are credited with increasing food production and reducing hunger, but they come with huge environmental costs, including air and water pollution, as well as driving declines in wildlife. Fertilisers have high greenhouse gas emissions, with synthetic nitrogen fertilisers responsible for about 2% of global energy use.

Meanwhile, the cost of fertiliser has increased exponentially. Prices in 2022 were three times higher than at the start of 2021, which is likely to cause food costs to rise this year, putting an additional 100 million people at risk of undernourishment, research suggests.

To test how safe the fertilisers were, researchers screened the waste for 310 chemicals such as insect repellents, rubber additives and flame retardants, which people sometimes empty into their toilet. They also looked at pharmaceutical products such as painkillers and hormones, which end up mainly in urine.
More than 93% of these chemicals were not detected, and the remainder were present at very low concentrations.

The most significant of those present were the painkiller ibuprofen and carbamazepine – used to treat epilepsy and as a mood-stabiliser – which were found in the edible parts of the cabbages (ie, the head) but in extremely low quantities. This would be because the vegetable had taken them in through its roots.

The researchers found that you would need to eat half a million heads of cabbage to take in the equivalent of one carbamazepine pill. “In general, the risk for human health of pharmaceutical compounds entering the food system by means of faecal compost use seems low,” they wrote.

Dr Rupert Hough, an environmental and soil scientist from the James Hutton Institute in Aberdeen, who was not involved in the research, said sewage sludge has been used for decades as a fertiliser in agricultural production but that there have always been challenges in ensuring it is not contaminated. He said: “Quality has significantly improved over time due to new modern treatment methods and now, in most situations, it can be used without harm. This study shows that source separation of human waste prior to any conventional wastewater treatment, ie, the use of composting toilets, has the potential to improve quality further.”

Effectively recycling human waste requires changes to toilets so that urine and faeces can be separated and the nutrients they contain harvested. For the experiment, researchers used waste collected in dry toilets, although some new water-based toilets can also keep faeces and urine separate.

The study’s other co-author, Dr Ariane Krause, a researcher at the Leibniz Institute of Vegetable and Ornamental Crops in Grossbeeren in Germany, said: “I think water toilets as we know them will only be on our planet for a short period of time – they are nice and comfortable but they don’t work in the long term, because they are not sustainable.

“Our findings in the field experiment corroborate what researchers have found in a couple of dozen experiments from Asia, Africa, North America and South America. Our next step will be to merge the datasets and to conduct a meta-analysis,” she said.
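The half-a-million-cabbages comparison is a straightforward dose calculation. A back-of-envelope version is below; the residue concentration is a hypothetical value chosen only to land on the researchers' order of magnitude, not a number from the paper.

```python
# Back-of-envelope check of the "half a million cabbages per pill" claim.
# The residue concentration is hypothetical, picked to reproduce the reported
# order of magnitude; a typical carbamazepine tablet is 200 mg.
tablet_mg = 200.0
head_mass_g = 1_000.0          # assumed mass of one cabbage head
residue_ng_per_g = 0.4         # hypothetical residue in the edible head

mg_per_head = residue_ng_per_g * head_mass_g / 1e6   # ng -> mg
heads_per_tablet = tablet_mg / mg_per_head
print(f"{heads_per_tablet:,.0f} heads of cabbage per tablet")  # ~500,000
```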
Environmental Science
A Proposal to Decouple Food Systems From Deforestation in Brazil

At the 2023 Global Public Policy Network Conference (GPPN) in March, five students from Columbia University’s School of International and Public Affairs were selected as finalists for their policy proposal to tackle deforestation in Brazil. This year’s conference asked participants to address the issue of global political polarization and the challenges of building social cohesion.

What started as a simple conversation on the challenges of ensuring food security and environmental conservation turned into a winning idea for one group of students in the M.P.A. in Environmental Science and Policy program. After hearing about the GPPN competition, the team pooled their international expertise on existing regulations from across the globe, along with their individual strengths, to build a robust and extensive plan for effecting change.

Sarah Bryan, João Francisco Adrien Fernandes, Olivia Parker, Matteo Chiadò Piat and Ezekiel Maben traveled to São Paulo, Brazil, to present their idea to the deans of eight of the most prestigious schools of public affairs across the world. Their strategy, “A National Green Label for Brazil,” was aimed at tracking, labelling and reducing illegal deforestation in the Brazilian cattle supply chain. While they didn’t win the overall competition, the students advanced to the final round and received the award for “Best Concept.”

The National Green Label system they proposed would enable the Brazilian government to trace and monitor deforestation along the cattle supply chain at the property level. It suggests the integration of multiple existing state and national databases that do not currently work together into a cohesive, publicly accessible, federally managed data center, to increase transparency and highlight inconsistencies in the supply chain. The proposal’s ultimate measure of success would depend on how much of the cattle production can be tracked and therefore shown to be unconnected to illegal deforestation.

The proposal also addresses an international context in which governments like those of the U.K. and E.U. are developing trade barriers for agricultural products related to deforestation. Brazil, along with other countries, would have to show evidence through a due diligence system that its agricultural commodities are not coming from deforested areas.

The Green Label proposal avoids the use of private certification, which would create market segregation with a major negative social impact, since only larger and well-established farmers could afford private certification schemes. Implementing a public Green Label would mean all farmers receive the same treatment. In the future, the Green Label system would offer the possibility of expansion to cover additional agricultural products. In doing so, Brazil would be able to decouple illegal deforestation from its food supply chain, ensuring access to foreign markets. This program could also be adopted by other developing nations, transforming Brazil from a rule-follower into a rule-maker and establishing the country’s role as a leader in global food security while conserving its natural resources.

With the demand for food production accelerating alongside population growth, agriculture could continue to drive biodiversity loss in Brazil if strategies such as the Green Label are not on the table.
Home to almost 60% of the Amazon rainforest, Brazil has the chance to play a key role in safeguarding that critical ecosystem by tackling illegal deforestation and regulating land use change for food production. The technology to implement this policy already exists, and the students are continuing to engage in stakeholder conversations to back this initiative and to build political support to implement this policy. Most recently, the group submitted their research to the Brazilian Ministry of Environment’s public hearing on the Action Plan for Prevention and Control of Deforestation in the Legal Amazon. Their experience at the GPPN conference was very rewarding, according to the students. In particular, they were pleased with their ability to synthesize such vast information into a 5-minute presentation, and the number of international perspectives they heard during peer-to-peer interactions. But the dreams of this team are not limited to Brazil. “One thing that excites all of us is the scalability of this proposal,” Parker said. Other agricultural export nations are going to face the same trade restrictions in coming years, so there are many opportunities to learn from and replicate this project.
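At its core, the Green Label is a data-integration exercise: linking cattle-movement records to property-level deforestation alerts and flagging any supply chain that passes through recently cleared land. The sketch below shows the shape of that join; every schema, file name and threshold is a hypothetical illustration, not part of the students' proposal or any actual Brazilian database.

```python
# Illustrative join between cattle-transfer records and deforestation alerts.
# Schemas and thresholds are hypothetical, not the real government systems.
import pandas as pd

transfers = pd.read_csv("cattle_transfers.csv")   # animal_id, from_property, to_property, date
alerts = pd.read_csv("deforestation_alerts.csv")  # property_id, alert_date, cleared_hectares

# Properties with any clearing alert during the audit window
flagged = set(alerts.loc[alerts["cleared_hectares"] > 0, "property_id"])

# An animal's chain is "green" only if no property it passed through is flagged
transfers["clean"] = (~transfers["from_property"].isin(flagged)
                      & ~transfers["to_property"].isin(flagged))
chain_clean = transfers.groupby("animal_id")["clean"].all()
print(f"{chain_clean.mean():.1%} of tracked animals are fully deforestation-free")
```

The proposal's stated measure of success, how much of cattle production can be tracked, maps directly onto the coverage of a join like this one.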
Environmental Science
Plant ecophysiologist Felicity Hayes places a damaged leaf of a Silver Birch tree inside a LI-COR analyser at the UK Centre for Ecology and Hydrology research site near Bangor, Britain, July 20, 2022. REUTERS/Phil Noble

ABERGWYNGREGYN, Wales, Aug 31 (Reuters) - Plant scientist Felicity Hayes checks on her crops inside one of eight tiny domed greenhouses set against the Welsh hills. The potted pigeon pea and papaya planted in spring are leafy and green, soon to bear fruit.

In a neighbouring greenhouse, those same plants look sickly and stunted. The pigeon pea is an aged yellow with pockmarked leaves; the papaya trees reach only half as tall.

The only difference between the two greenhouse atmospheres - ozone pollution.

Hayes, who works at the UK Centre for Ecology and Hydrology (UKCEH), is pumping ozone gas at various concentrations into the greenhouses where African staple crops are growing. She is studying how rising ozone pollution might impact crop yields - and food security for subsistence farmers - in the developing world.

Ozone, a gas formed when sunlight and heat interact with fossil fuel emissions, can cause substantial losses for farmers, research suggests, by quickly aging crops before they reach full production potential and decreasing photosynthesis, the process by which plants turn sunlight into food. Ozone stress also reduces plants' defences against pests.

A 2018 study in the journal Global Change Biology estimated global wheat losses from ozone pollution totalled $24.2 billion annually from 2010 to 2012. In a January paper published in Nature Food, researchers tallied some $63 billion in wheat, rice and maize losses annually within the last decade in East Asia.

Scientists are particularly worried about Africa, which will see more vehicle traffic and waste burning as the population is set to double by mid-century. That means more ozone pollution, a major challenge for smallholder farmers who make up 60% of the population in sub-Saharan Africa.

"There is a serious concern that ozone pollution will affect yields in the long run," said senior scientist Martin Moyo at the International Crops Research Institute for the Semi-Arid Tropics in Zimbabwe. He called out an "urgent need for more rural studies to determine ozone concentrations" across the continent.

Earlier this year, scientists with the UK-based non-profit Centre for Agriculture and Bioscience International (CABI) set up ozone monitoring equipment around cocoa and maize fields in Ghana, Zambia and Kenya. But most African countries do not have reliable or consistent air pollution monitors, according to a 2019 UNICEF report. Among those that do, few measure ozone.

RISING OZONE

In the stratosphere, ozone protects the Earth from the sun's ultraviolet radiation.
Closer to the planet's surface, it can harm plants and animals, including humans. While air quality regulations have helped reduce ozone levels in the United States and Europe, the trend is set to spike in the opposite direction for fast-growing Africa and parts of Asia. Climate change could also speed things along.

In areas of Africa with high fossil fuel emissions and frequent burning of forests or grassland, new research suggests hotter temperatures could make the problem worse, as they can accelerate the chemical reactions that create ozone.

While research has found North American wheat is generally less impacted by ozone than European and Asian counterparts, there have been fewer studies on African versions of the same crops, which over decades of cultivation have been made more suitable to those environments.

Once every two weeks in a Nairobi market, farmers from the countryside bring samples of their ailing crops to a "plant doctor" in hopes of determining what is affecting their yields.

"A lot of (ozone) symptoms can be confused with mites or fungal damage," said CABI entomologist Lena Durocher-Granger. "Farmers might keep applying fertilizer or chemicals thinking it's a disease, but it's ozone pollution."

Her organization is working with UKCEH to help people identify signs of ozone stress and recommend fixes, such as watering less on high ozone days. Watering can leave leaf pores wide open, causing plants to take in even more ozone.

RESILIENT CROPS

In her Welsh greenhouses, Hayes was exposing crops in one dome to the lowest amount - 30 parts per billion - similar to the environment of North Wales. In the dome with the highest ozone level, plants were receiving more than triple that amount, mimicking North Africa's polluted conditions.

Hayes and her colleagues have found that certain African staples are more affected than others. In a dome filled with a mid-level amount of ozone, North African wheat plants had quickly turned from green to yellow within just a few months.

"You get tiny thin grains that don't have all the good bits in them, a lot of husk on the outside and not as much protein and nutritional value," Hayes said.

That fits with research her team published last year on sub-Saharan plant cultivars, which found that ozone pollution could be lowering sub-Saharan wheat yields by as much as 13%. Dry beans could fare worse, with estimated yield losses of up to 21% in some areas, according to the same study, published in Environmental Science and Pollution Research.

"Beans are a useful protein source in Africa, and subsistence farmers grow a lot of it," said Katrina Sharps, a UKCEH spatial data analyst.

Sub-Saharan millet, however, seemed more ozone tolerant. Yet Africa produced about half as much millet as wheat in 2020. "If the soil and growing conditions are suitable," Sharps said, "subsistence farmers may consider growing more millet."
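The yield-loss figures above come from detailed exposure-response modelling. A deliberately simplified linear dose-response, with slopes chosen only so the losses reach the paper's reported endpoints near the top of the greenhouse treatment range (about 13% for wheat, up to 21% for beans), shows the shape of such a calculation:

```python
# Toy linear dose-response for relative yield vs. mean ozone concentration.
# Slopes are assumptions tuned to the reported endpoints, not values from
# the study; real assessments use metrics such as AOT40 and fitted curves.
def relative_yield(ozone_ppb: float, slope_per_ppb: float, baseline_ppb: float = 30.0) -> float:
    """Fraction of baseline yield remaining at a given ozone level."""
    loss = max(0.0, (ozone_ppb - baseline_ppb) * slope_per_ppb)
    return max(0.0, 1.0 - loss)

for crop, slope in [("wheat", 0.0022), ("dry beans", 0.0035)]:
    for ppb in (30, 60, 90):  # roughly the range spanned by the eight domes
        print(f"{crop:9s} at {ppb} ppb: {relative_yield(ppb, slope):.0%} of baseline yield")
```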
Environmental Science
NEWS AND VIEWS | 07 December 2022

Modelling reveals that the carbon emissions associated with plastics could be negative by 2100 under a strict set of technological and socio-economic conditions — including increased recycling and plant-derived production.

Sangwon Suh is in the Bren School of Environmental Science and Management, University of California, Santa Barbara, Santa Barbara, California 93106, USA. André Bardow is in the Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich, Switzerland.

The direct effect of plastics on the marine ecosystem has attracted global attention. However, the production and disposal of plastics are also a concern, because these processes release more climate-warming gases annually than does global aviation [1]. And these emissions are increasing: the growing global appetite for plastics is expected to result in a doubling of their associated carbon emissions by 2050. Such an increase would prevent us from achieving net-zero emissions, a target that is widely held to be necessary to protect the planet’s ability to support life (see go.nature.com/3u7uiqc). Writing in Nature, Stegmann et al. [2] provide a road map for avoiding this future by examining the entire life cycle of plastics in the context of various strategies for mitigating climate change.

Nature 612, 214–215 (2022). doi: https://doi.org/10.1038/d41586-022-04164-8

References
1. Zheng, J. & Suh, S. Nature Clim. Change 9, 374–378 (2019).
2. Stegmann, P., Daioglou, V., Londo, M., van Vuuren, D. P. & Junginger, M. Nature 612, 272–276 (2022).
3. de Oliveira, C. C. N., Zotin, M. Z., Rochedo, P. R. R. & Szklo, A. Biofuels Bioprod. Biorefin. 15, 430–453 (2021).
4. Meys, R. et al. Science 374, 71–76 (2021).
5. Rogelj, J. et al. Nature Clim. Change 8, 325–332 (2018).
6. Chamas, A. et al. ACS Sustainable Chem. Eng. 8, 3494–3511 (2020).
7. Geyer, R., Jambeck, J. R. & Law, K. L. Sci. Adv. 3, e1700782 (2017).
8. Stegmann, P., Daioglou, V., Londo, M. & Junginger, M. MethodsX 9, 101666 (2022).

Competing interests: We have co-authored a paper together with one of the authors in the past, but do not currently have a professional relationship or any ongoing collaboration with any of the authors.
Environmental Science
In August 2020, following a period of prolonged drought and intense rainfall, a dam situated near the Seomjin River in Korea experienced overflow during a water release, resulting in damages exceeding 100 billion won (USD 76 million). The flooding was attributed to maintaining the dam's water level 6 meters higher than the norm. Could this incident have been averted through predictive dam management?

A research team led by Professor Jonghun Kam and Eunmi Lee, a PhD candidate, from the Division of Environmental Science & Engineering at Pohang University of Science and Technology (POSTECH), recently employed deep learning techniques to scrutinize dam operation patterns and assess their effectiveness. Their findings were published in the Journal of Hydrology.

Korea faces a precipitation peak during the summer, relying on dams and associated infrastructure for water management. However, the escalating global climate crisis has led to the emergence of unforeseen typhoons and droughts, complicating dam operations. In response, a new study has emerged, aiming to surpass conventional physical models by harnessing the potential of an artificial intelligence (AI) model trained on extensive big data.

The team focused on crafting an AI model aimed at not only predicting the operational patterns of dams within the Seomjin River basin, specifically the Seomjin River Dam, Juam Dam, and Juam Control Dam, but also understanding the decision-making processes of the trained AI models. Their objective was to formulate a scenario outlining the methodology for forecasting dam water levels.

Employing the Gated Recurrent Unit (GRU) model, a deep learning algorithm, the team trained it using data spanning from 2002 to 2021 from dams along the Seomjin River. Precipitation, inflow, and outflow data served as inputs, while hourly dam levels served as outputs. The analysis demonstrated remarkable accuracy, with an efficiency index exceeding 0.9.

Subsequently, the team devised explainable scenarios, perturbing each input variable by -40%, -20%, +20%, and +40% to examine how the trained GRU model responded to these alterations. While changes in precipitation had a negligible impact on dam water levels, variations in inflow significantly influenced the dam's water level. Notably, the identical change in outflow yielded different water levels at distinct dams, affirming that the GRU model had effectively learned the unique operational nuances of each dam.

Professor Jonghun Kam remarked, "Our examination delved beyond predicting the patterns of dam operations to scrutinizing their effectiveness using AI models. We introduced a methodology aimed at indirectly understanding the decision-making process of AI-based black-box models determining dam water levels." He further stated, "Our aspiration is that this insight will contribute to a deeper understanding of dam operations and enhance their efficiency in the future."

The research was sponsored by the Mid-career Researcher Program of the National Research Foundation of Korea. Materials provided by Pohang University of Science & Technology (POSTECH).
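As described, the workflow maps hourly precipitation, inflow and outflow sequences to a dam's water level with a GRU and then probes the trained network with perturbed inputs. A minimal PyTorch sketch of that idea follows; the architecture, 72-hour window and random placeholder data are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of a GRU water-level model plus the perturbation scenarios
# described in the study. Architecture and window length are assumptions;
# the forcing data here are random placeholders, not Seomjin River records.
import torch
import torch.nn as nn

class DamLevelGRU(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, 3) = precip, inflow, outflow
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])    # predicted water level at the final hour

def nse(pred, obs):
    """Nash-Sutcliffe efficiency, a standard 'efficiency index' in hydrology.
    (Training loop and evaluation against observed levels omitted here.)"""
    return 1 - torch.sum((obs - pred) ** 2) / torch.sum((obs - obs.mean()) ** 2)

model = DamLevelGRU()
with torch.no_grad():
    x = torch.randn(32, 72, 3)             # 72-hour windows of synthetic forcing
    base = model(x)
    # Scenario analysis: scale one input channel at a time by -40% to +40%
    for i, name in enumerate(["precipitation", "inflow", "outflow"]):
        for factor in (0.6, 0.8, 1.2, 1.4):
            x_pert = x.clone()
            x_pert[:, :, i] *= factor
            delta = (model(x_pert) - base).mean().item()
            print(f"{name} x{factor}: mean predicted level change {delta:+.3f}")
```

Comparing the response of a trained model across the three channels is the study's route to "explainability": a model that has learned real operating rules should react strongly to inflow, weakly to raw precipitation, and differently at each dam for the same outflow change.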
Environmental Science
From tiny plankton to massive whales, microplastics have been found throughout the ocean food chain. One major source of this pollution is fibers shed while laundering synthetic fabrics. Although many studies show microfibers are released during machine washing, it's been less clear how hand washing contributes. Now, researchers reporting in ACS' Environmental Science & Technology Water have found that hand washing can drastically cut the amount of fibers shed compared with using a machine.

When clothing made from plastic fibers, such as polyester and nylon, is laundered, the fabric sheds microscopic fibers that eventually end up in wastewater and the environment. Though researchers have investigated the amount and types of microplastic fibers shed while laundering clothing, most studies have focused on washing machines. In many countries, however, it is still common to manually launder clothing. A team has previously reported on the effects of washing fabric by hand, but the study was not comprehensive. So, Wang, Zhao, Xing and colleagues wanted to systematically investigate microplastic fiber release from synthetic textiles with different methods of hand washing in contrast to machine washing.

The team cleaned two types of fabric swatches made from 100% polyester and a 95% polyester-5% spandex blend with hand washing methods and a washing machine. The researchers found that:

- Manual methods released far fewer fibers. For example, the 100% polyester fabric shed an average of 1,853 microplastic pieces during hand washing compared with an average of 23,723 pieces from the same fabric that was machine laundered. By weight, machine laundering released over five times more microplastics than the traditional method.
- The fibers released from hand washing tended to be longer.
- Adding detergent, pre-soaking the fabrics and using a washboard increased the number of released fibers with manual methods, but still not to the same extent as using a machine.

In contrast, they found that temperature, detergent type, wash time and the amount of water used had no meaningful effects on the amount of microplastics shed while hand washing. The researchers say that these results will help clarify the sources of microplastic pollution in the environment and can provide guidance for "greener" laundering methods. The authors acknowledge funding from the Zhejiang Provincial Natural Science Foundation of China, the National Natural Science Foundation of China and the Scientific Research Foundation of Hangzhou Dianzi University.

Article title: "Microplastic Fiber Release by Laundry: A Comparative Study of Hand-Washing and Machine-Washing," published 4-Jan-2023.
Environmental Science
Whenever a plastic bag or bottle degrades, it breaks into ever smaller pieces that work their way into nooks in the environment. When you wash synthetic fabrics, tiny plastic fibers break loose and flow out to sea. When you drive, plastic bits fly off your tires and brakes. That's why literally everywhere scientists look, they're finding microplastics—specks of synthetic material that measure less than 5 millimeters long. They're on the most remote mountaintops and in the deepest oceans. They're blowing vast distances in the wind to sully once pristine regions like the Arctic. In 11 protected areas in the western US, the equivalent of 120 million ground-up plastic bottles are falling out of the sky each year.

And now, microplastics are coming out of babies. In a pilot study published today, scientists describe sifting through infants' dirty diapers and finding an average of 36,000 nanograms of polyethylene terephthalate (PET) per gram of feces, 10 times the amount they found in adult feces. They even found it in newborns' first feces. PET is an extremely common polymer that's known as polyester when it's used in clothing, and it is also used to make plastic bottles. The finding comes a year after another team of researchers calculated that preparing hot formula in plastic bottles severely erodes the material, which could dose babies with several million microplastic particles a day, and perhaps nearly a billion a year.

Although adults are bigger, scientists think that in some ways infants have more exposure. In addition to drinking from bottles, babies could be ingesting microplastics in a dizzying number of ways. They have a habit of putting everything in their mouths—plastic toys of all kinds, but they'll also chew on fabrics. (Microplastics that shed from synthetic textiles are known more specifically as microfibers, but they're plastic all the same.) Babies' foods are wrapped in single-use plastics. Children drink from plastic sippy cups and eat off plastic plates. The carpets they crawl on are often made of polyester. Even hardwood floors are coated in polymers that shed microplastics. Any of this could generate tiny particles that children breathe or swallow.

Indoor dust is also emerging as a major route of microplastic exposure, especially for infants. (In general, indoor air is absolutely lousy with them; each year you could be inhaling tens of thousands of particles.) Several studies of indoor spaces have shown that each day in a typical household, 10,000 microfibers might land on a single square meter of floor, having flown off of clothing, couches, and bed sheets. Infants spend a significant amount of their time crawling through the stuff, agitating the settled fibers and kicking them up into the air.

"Unfortunately, with the modern lifestyle, babies are exposed to so many different things for which we don't know what kind of effect they can have later in their life," says Kurunthachalam Kannan, an environmental health scientist at New York University School of Medicine and coauthor of the new paper, which appears in the journal Environmental Science and Technology Letters.

The researchers did their tally by collecting dirty diapers from six 1-year-olds and running the feces through a filter to collect the microplastics. They did the same with three samples of meconium—a newborn's first feces—and stool samples from 10 adults. In addition to analyzing the samples for PET, they also looked for polycarbonate plastic, which is used as a lightweight alternative to glass, for instance in eyeglass lenses. To make sure that they only counted the microplastics that came from the infants' guts, and not from their diapers, they ruled out the plastic that the diapers were made of: polypropylene, a polymer that's distinct from polycarbonate and PET.

All told, PET concentrations were 10 times higher in infants than in adults, while polycarbonate levels were more even between the two groups. The researchers found smaller amounts of both polymers in the meconium, suggesting that babies are born with plastics already in their systems. This echoes previous studies that have found microplastics in human placentas and meconium.

What this all means for human health—and, more urgently, for infant health—scientists are now racing to find out. Different varieties of plastic can contain any of at least 10,000 different chemicals, a quarter of which are of concern for people, according to a recent study from researchers at ETH Zürich in Switzerland. These additives serve all kinds of plastic-making purposes, like providing flexibility, extra strength, or protection from UV bombardment, which degrades the material. Microplastics may contain heavy metals like lead, but they also tend to accumulate heavy metals and other pollutants as they tumble through the environment. They also readily grow a microbial community of viruses, bacteria, and fungi, many of which are human pathogens.

Of particular concern are a class of chemicals called endocrine-disrupting chemicals, or EDCs, which disrupt hormones and have been connected to reproductive, neurological, and metabolic problems, for instance increased obesity. The infamous plastic ingredient bisphenol A, or BPA, is one such EDC that has been linked to various cancers. "We should be concerned because the EDCs in microplastics have been shown to be linked with several adverse outcomes in human and animal studies," says Jodi Flaws, a reproductive toxicologist at the University of Illinois at Urbana-Champaign, who led a 2020 study from the Endocrine Society on plastics. (She wasn't involved in this new research.) "Some of the microplastics contain chemicals that can interfere with the normal function of the endocrine system."

Infants are especially vulnerable to EDCs, since the development of their bodies depends on a healthy endocrine system. "I strongly believe that these chemicals do affect early life stages," says Kannan. "That's a vulnerable period."

This new research adds to a growing body of evidence that babies are highly exposed to microplastic. "This is a very interesting paper with some very worrying numbers," says University of Strathclyde microplastic researcher Deonie Allen, who wasn't involved in the study. "We need to look at everything a child is exposed to, not just their bottles and toys."

Since infants are passing microplastics in their feces, that means the gut could be absorbing some of the particles, like it would absorb nutrients from food. This is known as translocation: particularly small particles might pass through the gut wall and end up in other organs, including the brain. Researchers have actually demonstrated this in carp by feeding them plastic particles, which translocated through the gut and worked their way to the head, where they caused brain damage that manifested as behavioral problems: compared to control fish, the individuals with plastic particles in their brains were less active and ate more slowly.

But that was done with very high concentrations of particles, and in an entirely different species. While scientists know that EDCs are bad news, they don't yet know what level of microplastic exposure it would take to cause problems in the human body. "We need many more studies to confirm the doses and types of chemicals in microplastics that lead to adverse outcomes," says Flaws.

In the meantime, microplastics researchers say you can limit children's contact with particles. Do not prepare infant formula with hot water in a plastic bottle—use a glass bottle and transfer it over to the plastic one once the liquid reaches room temperature. Vacuum and sweep to keep floors clear of microfibers. Avoid plastic wrappers and containers when possible. Microplastics have contaminated every aspect of our lives, so while you'll never get rid of them, you can at least reduce your family's exposure.
Environmental Science
Fine particulate matter comes from wood burning, power generation, motor vehicles and other combustion sources that emit tiny particles into the air. At only 2.5 micrometers or smaller, these particles are small enough to be inhaled and cause lasting damage to the heart and lungs. Known as PM2.5, exposure to these particles is a leading mortality risk factor in India and the surrounding region of South Asia. A new study by researchers in Randall Martin's lab in the McKelvey School of Engineering at Washington University in St. Louis evaluated the contribution of various emission sectors and fuels to PM2.5 mass for 29 states in India and six surrounding countries: Pakistan, Bangladesh, Nepal, Bhutan, Sri Lanka and Myanmar. The results, published July 7 in Environmental Science & Technology, identify primary organics -- organic particles emitted directly into the atmosphere from various sources -- as the main drivers of high concentrations of PM2.5 over South Asia. The paper also illuminates potential pathways to reduce PM2.5 mass and improve population health across South Asia. "Countries in South Asia have substantial emissions and associated air pollution and mortality burden," said first author Deepangsu Chatterjee, a doctoral student in energy, environmental & chemical engineering in the McKelvey School of Engineering. "Our study shows that over 1 million deaths in South Asia attributable to ambient PM2.5 in 2019 were primarily from residential combustion, industry and power generation. Solid biofuel is the leading combustible fuel contributing to the PM2.5-attributable mortality, followed by coal and oil and gas." "Air pollution, both indoors and outdoors, is the leading risk factor for death in South Asia," said co-author Michael Brauer, professor at the Institute for Health Metrics and Evaluation at the University of Washington and the University of British Columbia. "Understanding the major contributing sources is a critical first step towards management of this serious problem." A major challenge in evaluating the impacts of PM2.5 is understanding how it is produced and distributed over time. Chatterjee and Martin, the Raymond R. Tucker Distinguished Professor in McKelvey Engineering, combined global emission inventories, satellite-derived fine surface particulate matter estimates and state-of-the-art global scale modeling capabilities to develop regional simulations. They also accounted for long-range transport to understand how different emission sectors and fuels contributed to PM2.5 and associated mortality rates. "Advances in modeling atmospheric composition with constraints from satellite remote sensing enabled our assessment of the sources of PM2.5 across South Asia," Martin said. "That helped draw our attention to large contributions from burning biofuel and coal." Chatterjee also noted that PM2.5 mass composition in South Asia is driven by primary organics across major contributing sectors. The team's PM2.5 composition analysis can be particularly useful to develop mitigation strategies associated with particular species. A few other notable features include high contribution from coal in central and eastern India, higher household air pollution in north-east and central India, biofuel contributions in Bangladesh and open fires in Myanmar. "This study shows that the air pollution problem in South Asia is not just an urban scale problem, so policies targeted at urban scale development will not be enough to mitigate the national level PM2.5 exposure," Chatterjee said. 
Chatterjee, Martin and their co-authors suggest several strategies for future interventions throughout South Asia, including policies encouraging the replacement of traditional fuel sources with sustainable sources of energy. "Policies in India in the past five to 10 years have worked toward identifying and improving air pollution concerns and associated health burden and mortality risks. Seeing these policies be effective is motivating for the South Asian population to keep moving the needle and develop strategic policies to curb the growth of air pollution," Chatterjee said. "Our paper provides detailed sector-, fuel- and composition-based information for different states in India along with surrounding countries, which could be useful for local policymakers to eliminate PM2.5 sources associated with their specific region." This work was supported by NASA (80NSSC21K0508) and the Health Effects Institute, an organization jointly funded by the U.S. Environmental Protection Agency (R-82811201) and certain motor vehicle manufacturers. Originally published by the McKelvey School of Engineering.
Environmental Science
A wheat sample exposed to increased levels of ozone is seen at the UK Centre for Ecology and Hydrology research site near Bangor, Britain, July 20, 2022. REUTERS/Phil Noble

ABERGWYNGREGYN, Wales, Aug 31 (Reuters) - Plant scientist Felicity Hayes checks on her crops inside one of eight tiny domed greenhouses set against the Welsh hills. The potted pigeon pea and papaya planted in spring are leafy and green, soon to bear fruit. In a neighbouring greenhouse, those same plants look sickly and stunted. The pigeon pea is an aged yellow with pockmarked leaves; the papaya trees reach only half as tall. The only difference between the two greenhouse atmospheres - ozone pollution.

Hayes, who works at the UK Centre for Ecology and Hydrology (UKCEH), is pumping ozone gas at various concentrations into the greenhouses where African staple crops are growing. She is studying how rising ozone pollution might impact crop yields - and food security for subsistence farmers - in the developing world.

Ozone, a gas formed when sunlight and heat interact with fossil fuel emissions, can cause substantial losses for farmers, research suggests, by quickly aging crops before they reach full production potential and decreasing photosynthesis, the process by which plants turn sunlight into food. Ozone stress also reduces plants' defences against pests. A 2018 study in the journal Global Change Biology estimated global wheat losses from ozone pollution totalled $24.2 billion annually from 2010 to 2012. In a January paper published in Nature Food, researchers tallied some $63 billion in wheat, rice and maize losses annually within the last decade in East Asia.

Scientists are particularly worried about Africa, which will see more vehicle traffic and waste burning as the population is set to double by mid-century. That means more ozone pollution, a major challenge for smallholder farmers who make up 60% of the population in sub-Saharan Africa. "There is a serious concern that ozone pollution will affect yields in the long run," said senior scientist Martin Moyo at the International Crops Research Institute for the Semi-Arid Tropics in Zimbabwe. He called out an "urgent need for more rural studies to determine ozone concentrations" across the continent.

Earlier this year, scientists with the UK-based non-profit Centre for Agriculture and Bioscience International (CABI) set up ozone monitoring equipment around cocoa and maize fields in Ghana, Zambia and Kenya. But most African countries do not have reliable or consistent air pollution monitors, according to a 2019 UNICEF report. Among those that do, few measure ozone.

RISING OZONE

In the stratosphere, ozone protects the Earth from the sun's ultraviolet radiation. Closer to the planet's surface, it can harm plants and animals, including humans. While air quality regulations have helped reduce ozone levels in the United States and Europe, the trend is set to spike in the opposite direction for fast-growing Africa and parts of Asia. Climate change could also speed things along. In areas of Africa with high fossil fuel emissions and frequent burning of forests or grassland, new research suggests hotter temperatures could make the problem worse as they can accelerate chemical reactions that create ozone. While research has found North American wheat is generally less impacted by ozone than European and Asian counterparts, there have been fewer studies on African versions of the same crops that over decades of cultivation have been made more suitable to those environments.

Once every two weeks in a Nairobi market, farmers from the countryside bring samples of their ailing crops to a "plant doctor" in hopes of determining what is affecting their yields. "A lot of (ozone) symptoms can be confused with mites or fungal damage," said CABI entomologist Lena Durocher-Granger. "Farmers might keep applying fertilizer or chemicals thinking it's a disease, but it's ozone pollution." Her organization is working with UKCEH to help people identify signs of ozone stress and recommend fixes, such as watering less on high ozone days. Watering can leave leaf pores wide open, causing plants to take in even more ozone.

RESILIENT CROPS

In her Welsh greenhouses, Hayes was exposing crops in one dome to the lowest amount - 30 parts per billion - similar to the environment of North Wales. In the dome with the highest ozone level, plants were receiving more than triple that amount, mimicking North Africa's polluted conditions. Hayes and her colleagues have found that certain African staples are more affected than others. In a dome filled with a mid-level amount of ozone, North African wheat plants had quickly turned from green to yellow within just a few months. "You get tiny thin grains that don't have all the good bits in them, a lot of husk on the outside and not as much protein and nutritional value," Hayes said.

That fits with research her team published last year on sub-Saharan plant cultivars, which found that ozone pollution could be lowering sub-Saharan wheat yields by as much as 13%. Dry beans could fare worse, with estimated yield losses of up to 21% in some areas, according to the same study, published in Environmental Science and Pollution Research. "Beans are a useful protein source in Africa, and subsistence farmers grow a lot of it," said Katrina Sharps, a UKCEH spatial data analyst. Sub-Saharan millet, however, seemed more ozone tolerant. Yet Africa produced about half as much millet as wheat in 2020. "If the soil and growing conditions are suitable," Sharps said, "subsistence farmers may consider growing more millet."

Reporting by Gloria Dickie; Editing by Katy Daigle, Marguerita Choy and Bill Berkrot
Environmental Science
Study: Mercury emission estimates rarely provide enough data to assess success in eliminating harmful mining practices

A global treaty called the Minamata Convention requires gold-mining countries to regularly report the amount of toxic mercury that miners are using to find and extract gold, designed to help nations gauge success toward at least minimizing a practice that produces the world's largest amount of manmade mercury pollution. But a study of baseline mercury emission estimates reported by 25 countries—many of them developing African, South American and Asian nations—found that these estimates rarely provide enough information to tell whether changes in the rate from one year to the next were the result of actual change or data uncertainty. Key variables—like how the country determines the amount of its gold production—can result in vastly different baseline estimates. Yet countries often don't report this range of possible estimates.

Millions are at risk

About 15 million artisanal and small-scale gold miners around the world risk their lives every day facing hazardous working conditions that include constant exposure to mercury—a potent neurotoxin. Mercury vapors cause debilitating effects on the nervous, digestive and immune systems, lungs and kidneys, and may be fatal. Mercury is particularly harmful for children and pregnant women, whose developing fetuses are especially susceptible to the neurotoxic effects. An estimated 4 to 5 million of the 15 million artisanal miners are women or children.

"To make effective and impactful mercury interventions and policies, you must first make sure you have the baseline emission estimate right," said Kathleen M. Smits, chair of Civil and Environmental Engineering and Solomon Professor for Global Development in SMU's Lyle School of Engineering. "Providing more transparency in their reporting would help with that."

Smits joined civil engineers from the University of Texas at Arlington and the U.S. Air Force Academy in the study recently published in the journal Environmental Science and Policy. The work was supported by the National Science Foundation. The research group analyzed 22 countries' national action plans (NAPs), which contained their annual baseline estimates assembled under the Minamata Convention and posted on the organization's website. The team also looked at three additional countries with pertinent information posted to national government or non-governmental websites. Smits and her co-authors also calculated what Paraguay's baseline estimates would be if different variables were used; the South American country was selected for this analysis due to the transparency of its reporting.

Lacking key data in countries' baseline estimates

Baseline mercury emission estimates seek to determine how many kilograms of mercury pollution are injected into the atmosphere each year from the practice of artisanal gold mining. To do that, countries calculate how much gold was found by miners—and therefore an approximation of how much mercury was used to get it. Countries primarily collect that information using interviews with miners, gold and mercury traders and other key players in the gold mining business; ratios that relate mercury use to gold production; previous research; and field visits to known mining locations. But the study cites key problems with the way those estimates are currently calculated:

- Not enough data on gold production estimates. Fifteen countries, like the Central African Republic and Madagascar, provide only one source for the calculation of the gold production rate, yet as Zimbabwe demonstrates, different data sources can provide vastly different values. In a separate study, Zimbabwe reported that extraction, processing and miners' income information resulted in gold production estimates varying between 11 percent and 55 percent using 2012 mining data and 9 percent to 35 percent using 2018 mining data. The African country's goal for reduced mercury emissions is a smaller percentage than the range of uncertainty the study found for gold production.

- Countries aren't unified in how they select important metrics. The mercury-to-gold ratio (Hg:Au) is used to estimate the amount of mercury used to produce a given amount of gold, and a different ratio can result in different reasonable estimates for how much mercury was emitted. In the study, five different values were cited as the Hg:Au ratio, and a few countries cited more than one in their national action plan. Similarly, different countries used different techniques to come up with the national estimate of mercury emitted, some based on a small sample of mines and some without verifying the data against other sources.

Smits said countries must do a better job of accounting for these variables if they want to draft more meaningful mercury reduction targets in their national action plans. "If you just take a look at the baseline mercury emission estimate process, it is clear that the NAP program will not achieve its goal of reducing mercury emissions if they continue with the current approach," said Smits, whose team spent six years working alongside miners in gold-mining countries for the study.

Why do miners use toxic mercury to get gold?

Artisanal and small-scale miners—the term for individual miners, families or small groups with minimal or no mechanization to do the work—sift through rocks in rivers and dump beads of mercury over the sediment, which clings to gold. They then light a match, using the flame to separate the mercury from the gold, a process that shoots toxic vapors into the air. It's a cheap method of mining gold, but mercury can leak toxins into the air and pollute water systems. The hazardous gold mining process accounts for roughly 40 percent of all man-made mercury emissions, making it the largest source of this type of pollution, the United Nations (U.N.) says. In 2013, the U.N. created the global treaty called the Minamata Convention to try to phase out artisanal and small-scale gold mining, as well as other mercury emission contributors. The treaty currently has 139 countries committed to its goal.

"To join the treaty, countries that regularly engage in artisanal gold mining are required to report baseline mercury emission estimates on a regular basis and offer a national action plan for how they will eventually reduce their country's footprint for mercury," says Monifa Thomas-Nguyen.

More information: Michelle Schwartz et al, Quantifying mercury use in artisanal and small-scale gold mining for the Minamata Convention on Mercury's national action plans: Approaches and policy implications, Environmental Science & Policy (2023). DOI: 10.1016/j.envsci.2022.12.002

Provided by Southern Methodist University
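The arithmetic behind a baseline estimate is simple: estimated gold production multiplied by an assumed mercury-to-gold ratio. That simplicity is exactly why the choice of ratio dominates the result, as a hypothetical illustration shows (the figures below are invented for the example, not taken from any country's national action plan):

```python
# Hypothetical illustration of how the assumed Hg:Au ratio drives a baseline
# mercury estimate. All numbers are invented, not from any national action plan.
gold_production_kg = 2_000              # assumed annual artisanal gold output

# Plausible-looking mercury-to-gold ratios (kg of Hg used per kg of Au won):
for hg_au_ratio in (1.0, 1.3, 2.0, 3.0):
    baseline_kg = gold_production_kg * hg_au_ratio
    print(f"Hg:Au = {hg_au_ratio}: baseline = {baseline_kg:,.0f} kg Hg/year")

# Same gold figure, yet the estimates run from 2,000 to 6,000 kg of mercury
# per year: a threefold spread before any uncertainty in the gold production
# number itself is even considered.
```

With spreads like that, a country could appear to hit, or miss, a reduction target simply by switching which ratio it cites, which is the transparency problem the study highlights.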
Environmental Science
New Antarctic extremes 'virtually certain' as world warms Extreme events in Antarctica such as ocean heat waves and ice loss will almost certainly become more common and more severe, researchers say. With drastic action now needed to limit global warming to the Paris Agreement target of 1.5°C, the scientists warn that recent extremes in Antarctica may be the tip of the iceberg. The study reviews evidence of extreme events in Antarctica and the Southern Ocean, including weather, sea ice, ocean temperatures, glacier and ice shelf systems, and biodiversity on land and sea. It concludes that Antarctica's fragile environments "may well be subject to considerable stress and damage in future years and decades"—and calls for urgent policy action to protect it. "Antarctic change has global implications," said lead author Professor Martin Siegert, from the University of Exeter. "Reducing greenhouse gas emissions to net zero is our best hope of preserving Antarctica, and this must matter to every country—and individual—on the planet." Professor Siegert said the rapid changes now happening in Antarctica could place many countries in breach of an international treaty. "Signatories to the Antarctic Treaty (including the UK, U.S., India and China) pledge to preserve the environment of this remote and fragile place," he said. "Nations must understand that by continuing to explore, extract and burn fossil fuels anywhere in the world, the environment of Antarctica will become ever more affected in ways inconsistent with their pledge." The researchers considered the vulnerability of Antarctica to a range of extreme events, to understand the causes and likely future changes—following a series of recent extremes. For example, the world's largest recorded heat wave (38.5°C above the mean) occurred in East Antarctica in 2022 and, at present, winter sea ice formation is the lowest on record. Extreme events can also affect biodiversity. For example, high temperatures have been linked to years with lower krill numbers, leading to breeding failures of krill-reliant predators—evidenced by many dead fur seal pups on beaches. Co-author Professor Anna Hogg, from the University of Leeds, said, "Our results show that while extreme events are known to impact the globe through heavy rainfall and flooding, heat waves and wildfires, such as those seen in Europe this summer, they also impact the remote polar regions." "Antarctic glaciers, sea ice and natural ecosystems are all impacted by extreme events. Therefore, it is essential that international treaties and policy are implemented in order to protect these beautiful but delicate regions." Dr. Caroline Holmes, a sea ice expert at British Antarctic Survey, said, "Antarctic sea ice has been grabbing headlines in recent weeks, and this paper shows how sea ice records—first record highs but, since 2017, record lows—have been tumbling in Antarctica for several years." "On top of that, there are deep interconnections between extreme events in different aspects of the Antarctic physical and biological system, almost all of them vulnerable to human influence in some way." The retreat of Antarctic sea ice will make new areas accessible by ships, and the researchers say careful management will be required to protect vulnerable sites. The European Space Agency and European Commission Copernicus Sentinel satellites are an essential tool for regular monitoring of the whole Antarctic region and Southern Ocean. 
This data can be used to measure ice speed, sea ice thickness and ice loss at exceptionally fine resolution. The paper is published in the journal Frontiers in Environmental Science. More information: Martin Siegert et al, Antarctic Extreme Events, Frontiers in Environmental Science (2023). DOI: 10.3389/fenvs.2023.1229283 Provided by University of Exeter
Environmental Science
'Ultrashort' PFAS compounds detected in people and their homes

Per- and polyfluoroalkyl substances (PFAS) have become ubiquitous throughout the environment, and increasing evidence has demonstrated their deleterious effects. A group of smaller, fluorinated compounds are becoming replacements for these "forever chemicals," though research suggests the smaller versions could also be harmful. Now, a study in Environmental Science & Technology reports that the levels of these substances in many indoor and human samples are similar to or higher than those of legacy PFAS.

The most widely known PFAS are PFOS and PFOA—each is built with an eight-carbon-long backbone, and both are considered perfluoroalkyl acids (PFAAs). "Short-chain" PFAAs, containing fewer than eight carbons, and "ultrashort-chain" PFAAs, with just two to three carbon atoms, have been thought to be suitable replacements for PFOS and PFOA. However, recent research has shown that their small size makes it easy for them to move throughout water supplies, and in vitro and in vivo tests have suggested that they could be more toxic than the longer compounds. So, Amina Salamova, Guomao Zheng and Stephanie Eick wanted to see if ultrashort PFAAs are accumulating in homes and in human bodies, and to understand how they might be getting there.

Over 300 samples of dust, drinking water, serum and urine were collected from 81 people and their homes in the U.S., then analyzed for 47 different PFAAs and their precursors. Of these fluorinated compounds, 39 were detected, including ultrashort- and short-chain compounds. For instance:

- PFOS and PFOA were frequently detected in dust, drinking water and serum, but were less abundant than the shorter-chain PFAAs.
- In most dust, drinking water and serum samples, two-carbon-long trifluoroacetic acid was the most predominant PFAA, often followed by three-carbon-long perfluoropropanoic acid.
- But in urine samples, the five-carbon-long perfluoropentanoic acid was the most abundant PFAA present.

The researchers explain that the smaller PFAAs could slip through filters into drinking water or accumulate easily in household dust. Interestingly, dust samples from homes without carpets and homes that were vacuumed regularly contained substantially lower levels of PFAAs. From the data, the team determined that dust and water intake contributed only about 20% of the total PFAA burden in these people. This result suggests that these compounds must primarily originate from other sources—many PFAA precursors can be found in consumer products, and some evidence suggests that they can break down into shorter-chain compounds in the environment or in the body. The researchers say that further investigation into ultrashort PFAA levels, their sources and their effects on human health is needed.

More information: Elevated Levels of Ultrashort- and Short-Chain Perfluoroalkyl Acids in US Homes and People, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c06715. pubs.acs.org/doi/abs/10.1021/acs.est.2c06715

Journal information: Environmental Science & Technology. Provided by American Chemical Society.
Environmental Science
After puzzling scientists for decades, researchers have finally figured out what's making Bavaria's wild boars radioactive, even as other animals show few signs of contamination. It turns out the animals are still significantly contaminated with radioactive fallout from nuclear weapons detonated over 60 years ago — not just from the Chernobyl disaster, as was previously thought. And the boars (Sus scrofa) are likely being contaminated by one of their favorite foods — truffles.

Bavaria, in southeastern Germany, was hit with radioactive contamination following the Chernobyl nuclear accident in April 1986, when a reactor exploded in Ukraine and deposited contaminants across the Soviet Union and Europe. Some radioactive material can persist in the environment for a very long time. Cesium-137 — which is associated with nuclear reactors like the one at Chernobyl — takes around 30 years for its levels to be halved (known as its half-life). In comparison, cesium-135, which is associated with nuclear weapon explosions, has a half-life of 2.3 million years.

Boars in Bavaria have continued to show high radioactivity levels since the Chernobyl disaster, even as contaminants in other forest species declined. It was long theorized that Chernobyl was the source of the radioactivity in boars — but something didn't add up. With cesium-137 having a half-life of 30 years, the boars' radioactivity should be declining, yet it is not. This is known as the "wild boar paradox."

But now, in a new study published in the journal Environmental Science and Technology on Aug. 30, scientists found that fallout from nuclear weapons testing during the Cold War is behind the wild boar paradox, with radioactive material from both Chernobyl and nuclear weapons tests accumulating in fungi, such as deer truffles, that the boars consume.

The researchers analyzed the meat of 48 boars in 11 Bavarian districts between 2019 and 2021. They used the ratio of cesium-135 to cesium-137 in the samples to determine the source. The ratio of these two isotopes is characteristic of each source of radiation, forming a unique fingerprint that researchers can use in analysis — a high ratio of cesium-135 to cesium-137 indicates nuclear weapon explosions, while a low ratio suggests nuclear reactors. They compared the isotopic fingerprint of the boar meat samples with soil samples from Fukushima and Chernobyl, as well as with historical human lung tissue collected in Austria. The lung tissue was processed in the 1960s and revealed signs of the isotopic fingerprint left by nuclear weapons testing during the Cold War. While no nuclear weapons were detonated near the study site, fallout from the tests spread through the atmosphere globally.

Findings showed that 88% of samples taken exceeded the German limit for radioactive cesium. Between 10% and 68% of the contamination came from nuclear weapons testing. The contaminants from both the weapons tests and the Chernobyl disaster seeped deep into the earth and were absorbed by underground truffles, explaining the wild boar paradox.

Understanding the ecological persistence of radioactive contamination has been a pressing scientific problem since the first atomic bombs were dropped in 1945 over Japan. Food-safety risks following nuclear strikes or disasters at nuclear power plants are still not well understood in specific regional contexts.
"This study illustrates that strategic decisions to conduct atmospheric nuclear tests 60-80 years ago still impact remote natural environments, wildlife, and a human food source today," the authors wrote. Live Science newsletter Stay up to date on the latest science news by signing up for our Essentials newsletter. Jacklin Kwan is a freelance journalist based in the United Kingdom who primarily covers science and technology stories. She graduated with a master's degree in physics from the University of Manchester, and received a Gold-Standard NCTJ diploma in Multimedia Journalism in 2021. Jacklin has written for Wired UK, Current Affairs and Science for the People.
Environmental Science
What Happens In Antarctica, Doesn't Stay In Antarctica

(Bloomberg Opinion) -- We hear a lot about climate wake-up calls. Here's one you would do well not to ignore: Antarctica had the most extreme heatwave ever recorded. In March 2022, east Antarctica saw temperatures of up to 38.5C (69.3F) higher than average for the time of year. A so-called "atmospheric river" brought warm air and moisture from Australia into the heart of the frozen continent, raising temperatures to -10C from the norm of -50C. Had the UK's 2022 heatwave — which saw the nation exceed 40C for the first time — been that severe, we would have hit 60C.

It's just one of many extreme events brought together in a new report published in the journal Frontiers in Environmental Science. What Antarctica's future looks like is uncertain, but one sure thing is that continued fossil-fuel burning puts the world's southernmost continent at increased risk of catastrophic cascades. The extreme heat in 2022, for instance, led to surface warming of land ice, the breaking up of sea-ice and the subsequent collapse of the Conger Ice Shelf – a frozen platform the size of Rome. Other events noted include record low sea ice levels, marine heatwaves and unprecedented surface melting. In terms of sea ice extent, this year has been particularly unusual. July's sea-ice levels were three times further from the average than what had ever been seen previously.

These events are detrimental to Antarctica's iconic wildlife: Between 2018 and 2022, 42% of emperor penguin colonies likely experienced total or partial breeding failure due to fast ice breakup in at least one year. These are arguably climate-related tipping points — when something goes beyond the point of no return — already happening at the South Pole. When we see large icebergs shearing off from the continent, large ice shelves collapsing and sea ice area shrinking, what might not be appreciated is that these things cannot easily be fixed, if at all. Anna Hogg, co-author and associate professor in the School of Earth and Environment at the University of Leeds, says that we've never seen an ice shelf recover in our lifetimes.

Irreparable losses in one of the most precious and unique areas on Earth are devastating and an ugly legacy of mankind's actions. That's reason enough to prevent further degradation where possible. But there's another reason to worry. As Jane Rumble, co-author of the study and head of the Polar Regions Department for the UK's Foreign and Commonwealth Office, told journalists before the report's release, "What happens in Antarctica, does not stay in Antarctica. It has global consequences."

Take sea levels, for example. Today, thanks to fossil-fuel burning, the Antarctic ice sheet contributes six times more mass to the ocean than it did three decades ago. The reservoir of ice on Antarctica's ice sheet is vast – if it were to melt completely, which scientists don't expect will happen anytime soon, it would raise global sea levels by 57 meters on average. Unlike the glaciers that would melt and then stop contributing to sea level rise, Antarctica would keep going and going — posing challenges that may be existential for some low-lying regions and coastal population centers, from Jakarta to Miami.

The other fear is that Antarctica stops being our planet's refrigerator and starts acting more like a radiator. At the moment, Antarctica's ice reflects a large amount of solar radiation back into space, helping keep the world cool.
Only 0.2-0.4% of the continent is exposed above the ice at the moment, but that proportion is likely to increase with further warming. That reduces the albedo – or reflectivity – of the surface and increases the heat absorbed by the planet. It's an effect we're already seeing in the Arctic, which is now warming four times faster than the rest of the planet. If Antarctica starts acting like the Arctic, that would have grave consequences for everywhere else. It might be many thousands of miles away, but you can bet that we'll all feel the effects of a changing Antarctica. Just one more reason to add to the library of justifications for rapid and bold climate action.

Lara Williams is a Bloomberg Opinion columnist covering climate change.
Environmental Science
Hair products often contain ingredients that easily evaporate, so users may inhale some of these chemicals, potentially posing health repercussions. Now, researchers have studied emissions of these volatile organic compounds (VOCs), including siloxanes, which give hair shine and smoothness. The scientists report in ACS' Environmental Science & Technology that using these hair care products can change indoor air composition quickly, and that common heat styling techniques -- straightening and curling -- increase VOC levels even more.

Some prior studies have examined the amounts of siloxanes released from personal care products. But most focused on products that are washed off the body, such as skin cleansers, which might behave differently from products that are left on the hair, like creams or oils. In addition, most previous studies on siloxane emissions haven't looked at the real-time, rapid changes in indoor air composition that might occur while people are actively styling hair. Nusrat Jung and colleagues wanted to fill in the details about VOCs released from hair products, especially in real-world scenarios such as the small bathrooms where they're typically applied.

The researchers set up a ventilated tiny house where participants used their usual hair products -- including creams, sprays and oils -- and heated tools. Before, during and after hair styling, the team measured real-time emissions of VOCs, including cyclic volatile methyl siloxanes (cVMS), which are used in many hair care products. The mass spectrometry data showed rapid changes in the chemical composition of air in the house and revealed that cVMS accounted for most of the VOCs that were detected. Emissions were influenced by product type and hair length, as well as by the type and temperature of the styling tool: longer hair and higher temperatures released higher amounts of VOCs. As a result of their findings, the researchers estimated that a person's potential daily inhalation of one cVMS, known as D5, could reach as much as 20 mg per day.

In the experiments, turning on an exhaust fan removed most of the air pollutants from the room within 20 minutes after a hair care routine was completed, but the scientists note that this practice could affect outdoor air quality in densely populated cities. They say studies of the long-term human health impacts of siloxane exposure are urgently needed, because most findings so far come from animal studies.
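Daily-intake figures like the 20 mg estimate for D5 typically come from a simple exposure relation: inhaled dose = air concentration x breathing rate x exposure duration. A back-of-envelope sketch with invented numbers (none of these values are from the study):

```python
# Back-of-envelope inhalation dose: every value here is an illustrative
# assumption, not a measurement from the paper.
d5_concentration = 1.6   # mg of D5 per cubic meter of air during styling
breathing_rate = 0.5     # cubic meters of air inhaled per hour, light activity
duration_hours = 0.33    # time spent styling with heated tools

dose_mg = d5_concentration * breathing_rate * duration_hours
print(f"Inhaled D5 per styling session: {dose_mg:.2f} mg")
# Higher concentrations, longer routines or repeated sessions scale the
# estimate up toward the study's upper bound of roughly 20 mg per day.
```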
Environmental Science
Gowanus Canal Visit Offers an Educational Opportunity to Environmental Science and Policy Students

For the past several years, Michael Musso, a lecturer in environmental health sciences and international and public affairs at Columbia's Mailman School of Public Health, and Steven Chillrud, a research professor in geochemistry at Columbia Climate School's Lamont-Doherty Earth Observatory, have organized an annual field trip to the Gowanus Canal during the Master of Public Administration in Environmental Science and Policy program's summer term. The visit, planned with the help of the Friends and Residents of Greater Gowanus community group (FROGG), provides the students with insight into the impacts of environmental pollution, the resultant community struggles, and the action—or inaction—by local and federal authorities to clean up the canal and its surrounding areas.

The Gowanus Canal is a 100-foot wide, 1.8-mile-long canal in Brooklyn, New York, that was designated by the Environmental Protection Agency as a Superfund site in 2010. Built in the mid-1800s, the Gowanus Canal was once part of a major industrial transportation route and served as a waste dumping site for gas and chemical plants and other heavy industrial operations throughout the late 19th century, before the Clean Water Act of 1972. Outflows from New York's outdated combined sewer system also released contaminants from untreated sewage and stormwater runoff into the Gowanus Canal. Despite multiple attempts to clean out the canal over the years, the pollution remains, affecting the lives of the local residents and businesses.

This year, students were divided into four groups, each guided by a different member of FROGG, who shared the history of the canal's pollution and their experiences fighting it. Throughout the day, the professors, students, and guides discussed the complexity of the situation at the community, regional, and federal levels, as well as the considerations for pollution cleanup. The guides also shared their firsthand experience witnessing the changes and infrastructural development over the years, such as the many industrial complexes along the canal that have been converted to art museums, residential buildings, and other uses.

For the students, many of whom were seeing and learning about the Gowanus Canal for the first time, the field trip was also a welcome break from their intense schedule of summer courses. Christina Morano, an incoming Environmental Science and Policy student, said, "I really liked getting to know the tour guides and having the opportunity to hear them voice their opinions on development in the Gowanus neighborhood. Having a guided tour of the local area, it really helps bring additional insight to what we learned in class." Another student, Saiarchana Darira, said, "I found it impactful to learn about the importance of considering the needs of a community when building a canal that neighbors their living space. Gowanus Canal is one of the most contaminated bodies of water in the U.S., yet there is still not enough public knowledge about its environmental health impacts, especially to non-English speaking residents living in the region.
It is essential to find cross-cultural ways that transcend language boundaries to communicate with residents, so that communities can make informed consent before living in an area that neighbors a contaminated site.” The Environmental Science and Policy program was designed to cultivate the next generation of environmental policy makers and environmental science communicators. The annual Gowanus Canal trip, along with the other field trips the program organizes, plays a key role in exposing the students to the issues and injustices, as well as the communities, they will spend their careers advocating for. Columbia University’s collaboration with FROGG allows the students to develop a crucial first-person perspective on how environmental issues like water pollution and unregulated development affect local neighborhoods.
Environmental Science
When it comes to the United States phasing out PFAS, the "forever chemicals" are true to their nickname in more ways than one. It's not going to be straightforward or swift to eliminate these substances from countless industries, even though they have been potentially linked to myriad health issues. Found in products like food packaging, clothes and firefighting foam, PFAS have contaminated drinking water sources nationwide since becoming commercially available in the middle of the last century, building up in the environment where they won't break down for a very long time. A recent study concluded that rainwater, surface water and ground soil across the globe are extensively contaminated with these chemicals to a point that cannot be reversed without expensive, advanced technological intervention.

"This stuff is toxic at incredibly low levels and it's persistent — it stays there for hundreds of years in the groundwater, thousands of years," said Graham Peaslee, a Notre Dame professor and researcher who's tested many products for PFAS in his lab. "And that means the next generations will be drinking it, and that's not the kind of legacy we want to leave our kids."

It's a familiar story that has played out before, from DDT to PCBs. A hazardous chemical is widely used, its adverse health and environmental effects are revealed far after the fact, scientists and other concerned parties ring the alarm, and the substance in question finally garners federal attention, sometimes in the form of improved regulation or, more rarely, a full-stop ban. We're well within the third act of that script when it comes to PFAS, with many researchers and consumers calling on industries and institutions to phase these chemicals out of their products, manufacturing processes and general use, and instead pursue safer alternatives that serve similar functions.

The Environmental Protection Agency recently issued two updated interim drinking water health advisories for PFOA and PFOS — two legacy, or "long chain," and well-studied PFAS that have been phased out of manufacturing in the U.S. but are still used in other parts of the world, and products or materials that contain them can be imported. The agency also issued advisories for two newer, "short chain" PFAS known as PFBS and "GenX chemicals" that were developed to replace the legacy substances yet are still problematic from a health and environmental standpoint.

Those EPA advisories don't carry the force of law; PFAS are largely unregulated, and nothing is stopping manufacturers from using the chemicals in their supply chains, which are often murky to begin with. Companies face limited pressure — at least at the federal level — to get them out of their supply chains. Multiple states, though, have taken their own legislative steps toward phasing PFAS out or outright banning them in certain products. Beyond the regulatory world, researchers are leading the way with a vision of what it means to address PFAS contamination at its source. Some companies are also voluntarily taking steps to help make that happen. It's realistically going to take several more decades, Peaslee said, before we can truly get a handle on PFAS. But that doesn't mean that efforts to stop further contamination by getting it out of existing manufacturing practices and products will be fruitless.

What are PFAS, and why are they considered hazardous?

The term "PFAS" stands for per- and polyfluoroalkyl substances.
It refers to a family of thousands of different chemicals that have a wide range of commercial and industrial uses. These substances are particularly good at repelling things — their dual hydrophobic and hydrophilic properties help them resist water, as well as oils and stains. These qualities help make products waterproof, stain-proof or non-stick, in addition to their use in industrial lubricants. PFAS have been detected in goods ranging from cosmetics to period underwear to anti-fogging cloths and sprays for glasses, among many others. A 2020 study identified them across 200 different use categories.

Only a handful of those thousands of chemicals have been well-studied to determine their impacts on human health. Many experts argue for approaching PFAS as a class of chemicals — as in assuming that less studied members of the chemical family may have health and environmental impacts akin to those that have been better researched, and making decisions around their use accordingly. Existing evidence suggests that high levels of exposure to PFAS – among those that have been better studied – may lead to increased cholesterol levels, decreased vaccine responses in children, higher risk of preeclampsia in pregnant people and increased risk of kidney and testicular cancer, and other outcomes, according to the Agency for Toxic Substances and Disease Registry.

In other words, limited research so far suggests that these chemicals can affect multiple systems in the body, said Courtney Carignan, an environmental epidemiologist and assistant professor at Michigan State University. "It seems that the property that makes them useful — that they're very persistent and they have this one part of them that really likes water and the other part that does not — also seems to be what makes them problematic in the body," Carignan said.

Legacy PFAS like PFOA and PFOS were known to take years to leave the body, whereas the shorter chain ones more often in use today are expelled on a timescale of months. For consumers, labels can be confusing or misleading — a product may boast its "PFOA-free" status, for example, but that's just one chemical within the PFAS family. Both legacy and shorter chain types persist in the environment and can have human health impacts regardless of how long your body takes to eliminate them, which is why many experts maintain that there's no world in which continuing their use is justified. "I've never met the good PFAS, and there are no such things," Peaslee said. "They are all long-lived, they all bioaccumulate, a good number of them are shown to be toxic and the rest we just haven't measured yet."

How do PFAS get into our bodies?

Humans can be exposed to PFAS via ingestion, such as by drinking contaminated water or eating fish in which these chemicals have bioaccumulated. Inhalation is another route, and it can happen via indoor air — for example, if the furniture or carpeting in your home or office has been treated with PFAS to prevent stains — or outdoor air, particularly if you live close to a factory that emits PFAS through its stacks. When it comes to major sources of PFAS contamination in the U.S., "the biggest culprit to date" has been firefighting foam, also known as AFFF, Peaslee said. As of 2021, the Department of Defense was investigating nearly 700 military installations where this foam was used extensively, often during training operations, where it had ample opportunity to permeate the environment.
Multiple institutions have made the switch to PFAS-free firefighting foam in recent years, or are at least in the process of doing so. Congress has ordered the Department of Defense, for example, to switch to PFAS-free firefighting foam by October 2024. But Peaslee noted that the transition isn’t quite that simple — for one thing, countless gallons of the older, fluorinated foam are still on the shelves at fire stations nationwide, and each container could contaminate hundreds of millions of gallons of water. Safely disposing of it is a massive task.

The turnout gear that firefighters wear when they respond to fires is also often treated with PFAS to help it resist moisture and heat, and many are concerned that wearing and handling it could put them at additional risk. An independent committee facilitated by the National Fire Protection Association has recently drafted new proposed safety standards for that gear, which are open to public comment.

Though exposure through consumer products is a reasonable concern, there are two even larger facets of the problem, said Shari Franjevic, who leads the GreenScreen For Safer Chemicals program at the nonprofit Clean Production Action. One is how a product came to exist in the first place – people who work at plants where PFAS are produced or heavily used are typically among the most exposed to the hazardous chemicals. The other is where the product will end up once it’s discarded, which is a problem for those who live nearby and are exposed through contaminated drinking water. Once a product that contains PFAS is thrown away, it can contaminate the environment in the form of leachate that eventually passes through our wastewater treatment systems, which were not designed to remove those chemicals, Carignan said. “I can wrap my hotdog or hamburger in this packaging, and the grease will never come through it,” Peaslee said, explaining the cycle. “That’s good, except that when we throw that wrapper away, 100% of that PFAS will come off in a landfill in 60 days, and then we’re all drinking it.”

Getting PFAS out of products

Plenty of products contain PFAS on purpose, in order to perform a specific function. But to Franjevic and the GreenScreen program, there’s a distinction between intentionally added PFAS and those that most likely resulted from cross-contamination during the manufacturing process. She argues that “turning off the tap on PFAS” means prioritizing getting the chemicals out of products into which they’ve historically been added on purpose. GreenScreen helps companies examine whether chemicals in their products, such as PFAS, have the potential to harm human health, and works out how to either swap them out for safer alternatives or reduce exposure if their use is absolutely essential. This comprehensive, hazard-first approach helps prevent manufacturers from going down the well-trod path of using substitutes that come with a slew of their own health and environmental concerns. In the PFAS world, many researchers point to the shorter chain chemicals still in use today, once considered solid replacements for legacy PFAS, as an example of that phenomenon.

Meanwhile, a plastic part that’s used in a broader product might not contain PFAS by design, but could still have detectable amounts of the forever chemicals when tested. That could be because the manufacturer uses a PFAS-containing release agent that helps each part pop out of its mold faster to speed up the production process, Franjevic said.
Supply chains are often long, and there’s plenty of room for cross-contamination. In her view, it’s a first-things-first type of situation: Give companies a realistic pathway toward getting intentionally added PFAS out of their products, and then address impurities. “To notch down impurities now to really, really low thresholds puts almost an unfair burden [on manufacturers], and it’s not prioritizing where the biggest impact is,” she said. “And so we’re trying to be pragmatic about, ‘How do we really create the change we need to see in the world?’”

Several states, including Washington, have passed legislation aimed at getting toxic chemicals out of consumer products. After establishing what’s hazardous and what’s a viable alternative, the state can take steps to restrict the use of a chemical of concern or mandate that consumers be notified if a product contains it, explained Rae Eaton, a chemist in the Hazardous Waste and Toxics Reduction Program at the Washington State Department of Ecology. Eaton works on a program that evaluates short-term food packaging — think takeout clamshell containers, bowls that hold hot soup or paper sandwich wrappers. PFAS are used in some of those materials to keep food from sticking to or soaking through its container before that packaging is discarded. “We’re using chemicals that can last for hundreds of years, sometimes for products that get used for 45 minutes, and then they go in the trash or they go in your compost,” Eaton said. Eaton noted that some compostable or recyclable food packaging contains PFAS, which is not good news for the industrial compost sites it’s designed for.

She and her colleagues have released two reports on takeout-style packaging that analyzed a range of existing products and the purposes they serve, then detailed which alternative materials could feasibly be used in place of PFAS. It’s not a complete analysis of every alternative on the market, she said, but it does include a range of accessible options that are already in use. Some of those alternatives are wax- or clay-coated materials, ones that use polylactic acid (PLA), a biodegradable polymer that can break down under commercial composting conditions, or even reusable packaging. Companies can use her team’s analysis as a resource on how to feasibly move away from PFAS-containing products and toward safer, more sustainable options. PFAS will be banned in nine types of food packaging in Washington by September 2024. Eaton said her team is now researching alternatives for longer-term food packaging, including microwaveable popcorn bags, baking paper and pet food bags, and is actively soliciting input from businesses that make them, particularly if they don’t already use PFAS.

What can governments and individuals do?

In 1987, the Montreal Protocol aimed to phase out hazardous substances — including CFCs, or chlorofluorocarbons — that were known at the time to be depleting the ozone layer in Earth’s atmosphere. Today, that international agreement is largely considered a success — as of 2019, nations had phased out 98 percent of ozone-depleting substances, and the hole in the ozone layer that prompted international cooperation was getting smaller, according to the UN Environment Program. But there’s no comparable international agreement or imperative on PFAS. Some environmentally minded companies and governments have led the charge on working to ban or phase out some of these chemicals.
But it’s less clear how long it will take others to catch up – and change will depend on decision-makers committing to the effort. “There’s a combination of challenges that we have to overcome, [including] technical challenges to try and find replacements that work, but also the vested economic interests that we have to tackle,” said Ian Cousins, a professor in the department of environmental science at Stockholm University. He’s a leading proponent of a framework that depends on defining when and where the use of PFAS is actually essential.

Plenty of companies are already interested in and working toward making a proactive pivot away from PFAS. But the U.S. regulatory system largely lacks teeth on this issue, and it’s not clear that federal officials will mandate that American companies stop using PFAS in their products and supply chains anytime soon. For now, when it comes to companies that aren’t taking initiative, a little consumer pressure can go a long way, Franjevic said. She encouraged concerned consumers to contact companies they care about and ask if their products contain PFAS or any other harmful chemicals, like phthalates. Corporations tend to track those types of requests, and when they get to a certain number, she added, they may take action. “If they get enough people asking, they will do the work,” Franjevic said. “It’ll get on their radar. So ask.”
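Peaslee noted earlier that a single container of legacy AFFF could contaminate hundreds of millions of gallons of water. A rough dilution calculation bears that scale out. This is a hedged sketch, not a figure from the article: the container size and the PFAS content of the foam are illustrative assumptions, and the 70 parts-per-trillion yardstick is the EPA's 2016 lifetime health advisory for PFOA and PFOS.

```python
# Back-of-envelope dilution check of the "one container" claim above.
# Assumed, illustrative inputs: a 5-gallon pail of AFFF concentrate at
# roughly 1% PFAS by mass. The 70 ppt threshold is the EPA's 2016
# lifetime health advisory for PFOA/PFOS, used here only as a yardstick.

container_gallons = 5.0
pfas_fraction = 0.01                        # assumed ~1% PFAS in the concentrate
advisory_ppt = 70.0                         # EPA 2016 lifetime health advisory

pfas_ppt_in_foam = pfas_fraction * 1e12     # 1% expressed in parts per trillion
# Simple mass balance: V_water = V_foam * C_foam / C_threshold
contaminated_gallons = container_gallons * pfas_ppt_in_foam / advisory_ppt

print(f"~{contaminated_gallons:.1e} gallons of water")  # ~7.1e+08 gallons
```

With those assumptions, one pail is enough to bring roughly 700 million gallons of water to the advisory level, matching the "hundreds of millions of gallons" scale described above; measured against the far lower 2022 interim advisories, the figure would be orders of magnitude larger.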
Environmental Science
Researchers at the University of Notre Dame are adding to their list of consumer products that contain PFAS (per- and polyfluoroalkyl substances), a toxic class of fluorine compounds known as "forever chemicals." In a new study published in Environmental Science and Technology Letters, fluorinated high-density polyethylene (HDPE) plastic containers -- used for household cleaners, pesticides, personal care products and, potentially, food packaging -- tested positive for PFAS. Following an EPA report showing that this type of container contributed high levels of PFAS to a pesticide, this research provides the first measurement of the ability of PFAS to leach from the containers into food, as well as of the effect of temperature on the leaching process.

Results showed the PFAS were capable of migrating from the fluorinated containers into food, creating a direct route of significant exposure to the hazardous chemicals, which have been linked to several health issues including prostate, kidney and testicular cancers, low birth weight, immunotoxicity and thyroid disease. "Not only did we measure significant concentrations of PFAS in these containers, we can estimate the PFAS that were leaching off creating a direct path of exposure," said Graham Peaslee, professor of physics in the Department of Physics and Astronomy at Notre Dame and an author of the study.

It's important to note that these types of containers are not intended for food storage, but at the moment nothing prevents them from being used that way. Although not all HDPE plastic is fluorinated, the researchers noted, it's often impossible for a consumer to know whether a container has had that treatment. And indeed, Peaslee added, if substances like pesticides are stored in these containers and then used on agricultural crops, the same PFAS will get into human food sources that way.

In 2021 the EPA announced its PFAS Strategic Roadmap -- promising to act on widespread exposure to PFAS. The plan includes developing a more comprehensive understanding of the health and environmental effects of PFAS exposure, preventing further contamination of air, land and water, and addressing the need for cleanup of PFAS already in the environment. PFAS are often used in association with stain- or water-resistant products.

For the study, Peaslee and graduate student Heather Whitehead tested HDPE containers that were treated with fluorine to create a thin layer of a fluoropolymer, a treatment meant to impart chemical resistance and improve container performance over long storage periods. While these fluoropolymers generally stay in the container wall, the manufacturing process can generate lots of smaller PFAS molecules, which are not polymers. Experiments were designed to measure the ability of these chemicals to migrate from the container to samples of different foods and solvents. Analysis of the containers found parts-per-billion levels of PFAS that could migrate into both solvents and food matrices in as little as one week.

"We measured concentrations of PFOA that significantly exceeded the limit set by the EPA's 2022 Health Advisory Limits," said Peaslee. "Now, consider that not only do we know that the chemicals are migrating into the substances stored in them, but that the containers themselves work their way back into the environment through landfills. PFAS doesn't biodegrade. It doesn't go away.
Once these chemicals are used, they get into the groundwater, they get into our biological systems, and they cause significant health problems." Peaslee and Whitehead measured PFAS concentrations in olive oil, ketchup and mayonnaise that had been in contact with the fluorinated containers for seven days at various temperatures. Based on the amounts found in the different food samples, the study estimates that enough PFAS could be ingested through food stored in the containers to pose a significant risk of exposure. The containers are the latest products in a long list of those tested by Peaslee and his lab at Notre Dame, including cosmetics, firefighting gear, school uniforms and fast food wrappers.
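Exposure estimates like the one described above typically follow a standard formula: estimated daily intake (EDI) equals the concentration in the food, times how much of that food is eaten per day, divided by body weight. The sketch below shows that arithmetic; every number in it is a hypothetical placeholder, not a value from the Notre Dame paper.

```python
# Minimal sketch of an estimated daily intake (EDI) calculation of the
# kind used in food-migration studies. All inputs are hypothetical.

def estimated_daily_intake(conc_ng_per_g: float,
                           intake_g_per_day: float,
                           body_weight_kg: float) -> float:
    """Exposure in ng of PFAS per kg of body weight per day."""
    return conc_ng_per_g * intake_g_per_day / body_weight_kg

# Hypothetical example: a condiment holding 2 ng/g (2 ppb) of PFAS after
# storage in a fluorinated container, 20 g eaten daily by a 70 kg adult.
edi = estimated_daily_intake(conc_ng_per_g=2.0,
                             intake_g_per_day=20.0,
                             body_weight_kg=70.0)
print(f"EDI = {edi:.3f} ng/kg-bw/day")  # = 0.571 ng/kg-bw/day
```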
Environmental Science
More than 5,000 tons of toxic chemicals released from consumer products every year inside Californian homes, workplaces

People often assume that the products they use every day are safe. Now a new study by Silent Spring Institute and the University of California, Berkeley, exposes how much people come into contact with toxic ingredients in products, used at home and at work, that could harm their health. Findings from the analysis could help state and federal agencies strengthen chemical regulations and guide manufacturers in making safer products.

Many common products like shampoos, body lotions, cleaners, mothballs, and paint removers contain toxic volatile organic compounds, or VOCs—chemicals that escape as gases, accumulate in indoor air, and cause a variety of health problems including cancer. Because companies, for the most part, are not required to disclose what's in their products or how much, it's difficult to know what people might be exposed to and the potential health effects. "This study is the first to reveal the extent to which toxic VOCs are used in everyday products of all types that could lead to serious health problems," says lead author Kristin Knox, a scientist at Silent Spring Institute. "Making this information public could incentivize manufacturers to reformulate their products and use safer ingredients."

For the analysis, Knox and her colleagues turned to an unlikely source of data: the California Air Resources Board (CARB). For more than 30 years, CARB has been tracking VOCs in consumer products in an effort to reduce smog. In the presence of sunlight, VOCs react with other air pollutants to form ozone, the main ingredient in smog. Under its Consumer Product Regulatory program, CARB periodically surveys companies that sell products in California, collecting information on a wide range of items—everything from hair spray to windshield wiper fluid. The data include information on the concentration of VOCs used in various types of products and how much of each product type is sold in the state. CARB does not share data on specific products.

Reporting in the journal Environmental Science & Technology, the researchers analyzed the most recent CARB data, focusing on 33 VOCs listed under California's right-to-know law, Prop 65, because they cause cancer, birth defects, or other reproductive harm. The law requires companies that sell products in California to warn users if their products could expose them to significant amounts of these harmful chemicals. The team's analysis found more than 100 types of products contain Prop 65 VOCs. Of those, the researchers identified 30, including a dozen different types of personal care products, that deserve special scrutiny because they frequently contain harmful chemicals and may pose the greatest health risk. (Since CARB only reports on VOCs, many other toxic chemicals listed under Prop 65, such as lead, were not included in the analysis.)

Products used on the job are especially concerning, the authors note, because workers often use many different types of products, each of which likely contains at least one hazardous chemical. For instance, nail and hair salon workers use nail polishes and polish removers, artificial nail adhesives, hair straighteners, and other cosmetics. According to the analysis, these types of products combined contain as many as 9 different Prop 65 VOCs.
Janitors might use a combination of general cleaners, degreasers, detergents, and other maintenance products, which could expose them to more than 20 Prop 65 VOCs. "The same thing goes for auto and construction workers. All these exposures add up and might cause serious harm," says co-author Meg Schwarzman, a physician and environmental health scientist at the UC Berkeley School of Public Health who led the study. "At the most basic level, workers deserve to know what they're exposed to. But, ultimately, they deserve safer products, and this study should compel manufacturers to make significant changes to protect workers' health."

Of the 33 VOCs listed under Prop 65, the researchers identified the top 11 chemicals that manufacturers should eliminate from products because of the chemicals' high toxicity and widespread use. Other findings include:

- Among products used on the body, formaldehyde was the most common Prop 65 VOC, and was found in nail polish, shampoo, makeup, and other types of personal care items.
- For products used in the home, general purpose cleaners, art supplies, and laundry detergents contained the most Prop 65 VOCs.
- Adhesives contained more than a dozen different Prop 65 VOCs, highlighting that workers can be exposed to many toxic chemicals from using just one type of product.

Finally, the team used the CARB data to calculate the total amount of Prop 65 VOCs emitted from consumer products indoors, and found more than 5,000 tons of volatile Prop 65 chemicals were released from products in the state of California in 2020. Nearly 300 tons of that came from mothballs (1,4-dichlorobenzene) alone.

"Although Prop 65 has reduced the public's exposure to toxic chemicals both through litigation and by incentivizing companies to reformulate their products, people continue to be exposed to many unsafe chemicals," says co-author Claudia Polsky, Director of the Environmental Law Clinic at UC Berkeley School of Law. "This study shows how much work remains for product manufacturers and regulators nationwide, because the products in CARB's database are sold throughout the U.S."

The new study offers solutions by highlighting the types of products manufacturers should reformulate to replace toxic VOCs with safer ingredients. The authors also suggest, based on their findings, that the U.S. Environmental Protection Agency consider regulating five additional chemicals under the Toxic Substances Control Act (TSCA): ethylene oxide, styrene, 1,3-dichloropropene, diethanolamine, and cumene. For more tips on how to limit everyday exposures to harmful VOCs and other chemicals of concern, download Silent Spring's Detox Me app.

More information: Identifying toxic consumer products: Novel data set reveals air emissions of potent carcinogens, reproductive toxicants, and developmental toxicants, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c07247
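The headline emissions figure comes from exactly the kind of aggregation the CARB survey enables: for each product category, multiply the amount sold by the fraction of it that is a Prop 65 VOC, then sum across categories. A minimal sketch of that bookkeeping, using invented placeholder rows rather than CARB's actual data:

```python
# Sketch of the emissions aggregation described above. The rows below are
# invented placeholders; CARB's real survey covers far more categories.

# (category, tons of product sold per year, Prop 65 VOC weight fraction)
survey_rows = [
    ("mothballs",        300.0, 0.98),
    ("general cleaner", 5000.0, 0.002),
    ("nail polish",      800.0, 0.01),
]

total_voc_tons = sum(tons_sold * voc_fraction
                     for _, tons_sold, voc_fraction in survey_rows)
print(f"Estimated Prop 65 VOCs released: {total_voc_tons:.0f} tons/year")
```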
Environmental Science
For many people, a bout of Covid-19 gave a first taste (or rather a lack of it) of what it is like to lose their sense of smell. Known as "anosmia", loss of smell can have a substantial effect on our overall wellbeing and quality of life. But while a sudden respiratory infection might lead to a temporary loss of this important sense, your sense of smell may well have been gradually eroding away for years due to something else – air pollution.

Exposure to PM2.5 – the collective name for small airborne pollution particles, largely from the combustion of fuels in vehicles, power stations and our homes – has previously been linked with "olfactory dysfunction", but typically only in occupational or industrial settings. New research is now starting to reveal the true scale of – and the potential damage caused by – the pollution we breathe in every day. And its findings have relevance for us all.

On the underside of our brains, just above our nasal cavities, lies the olfactory bulb. This sensitive bit of tissue bristles with nerve endings and is essential for the enormously varied picture of the world we get from our sense of smell. It's also our first line of defence against viruses and pollutants entering the brain. But, with repeated exposure, these defences slowly get worn down – or breached.

"Our data show there's a 1.6 to 1.7-fold increased [risk of] developing anosmia with sustained particulate pollution," says Murugappan Ramanathan Jr, a rhinologist at the Johns Hopkins School of Medicine, Baltimore. He became one of the few experts in this field after he started to wonder if there was a link between the large numbers of patients he was seeing with anosmia and the environmental conditions where they lived. The simple question he wanted to answer was this: were a disproportionate number of anosmia patients living in areas of higher PM2.5 pollution?

Until recently, the little scientific research on this topic included one Mexican study in 2006, which used strong coffee and orange odours to show that residents of Mexico City – which often struggles with air pollution – tended to have a poorer sense of smell on average than people living in rural areas of the country. With the help of colleagues – including environmental epidemiologist Zhenyu Zhang, who created a map of historic air pollution data in the Baltimore area – Ramanathan set up a case-control study of data from 2,690 patients who had attended Johns Hopkins Hospital over a four-year period. Around 20% had anosmia and most didn't smoke – a habit that is known to affect the sense of smell.

Sure enough, the levels of PM2.5 were found to be "significantly higher" in the neighbourhoods where patients with anosmia lived compared to healthy control participants. Even when adjusted for age, sex, race/ethnicity, body mass index, and alcohol or tobacco use, the findings came up the same: "Even small increases in ambient PM2.5 exposure may be associated with anosmia".
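A case-control design like this is commonly analysed with logistic regression, with the adjusted odds ratio recovered by exponentiating the exposure coefficient. The sketch below demonstrates that workflow on simulated data; it illustrates the method only and is not a reanalysis of the Johns Hopkins records.

```python
# Illustrative case-control analysis: logistic regression of anosmia on
# PM2.5 with covariate adjustment. Data are simulated, not patient records.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2690  # same order of magnitude as the study's sample
df = pd.DataFrame({
    "pm25": rng.normal(10, 2, n),    # simulated exposure, ug/m3
    "age": rng.normal(55, 12, n),
    "smoker": rng.integers(0, 2, n),
})
# Build in a true effect so the recovered odds ratio is non-trivial.
logit_p = -4 + 0.25 * df["pm25"] + 0.01 * df["age"] + 0.3 * df["smoker"]
df["anosmia"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("anosmia ~ pm25 + age + smoker", data=df).fit(disp=False)
odds_ratio = np.exp(fit.params["pm25"])
print(f"Adjusted odds ratio per 1 ug/m3 of PM2.5: {odds_ratio:.2f}")
```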
The finding has been echoed in other parts of the world in studies published this year. One recent study in Brescia, northern Italy, for example, found the noses of teenagers and young adults became less sensitive to smells the more nitrogen dioxide – another pollutant produced when fossil fuels are burned, in particular by vehicle engines – they were exposed to. Another year-long study in São Paulo, Brazil, also indicated that people living in areas with higher particulate pollution had an impaired sense of smell.

But exactly how is pollution wrecking our ability to smell? According to Ramanathan there are two potential routes. One is that some of the pollution particles are passing through the olfactory bulb and getting directly into the brain, causing inflammation. "Olfactory nerves are in the brain but they have little holes at the base of skull where little fibres go into the nose, [looking] almost like little pieces of angel hair pasta," says Ramanathan. "They are exposed." In 2016, a team of British researchers found tiny metal particles in human brain tissue that appeared to have passed through the olfactory bulb. Barbara Maher, a professor of environmental science at Lancaster University in the UK who led the study, said at the time that the particles were "strikingly similar" to those found in airborne pollution next to busy roads (domestic fireplaces and log stoves were another possible source). Maher's study suggests that these nanoscale metal particles could, once in the brain, become toxic, contributing to oxidative damage that degrades neural pathways, although this remains a theory.

The other potential mechanism, says Ramanathan, may not even require pollution particles getting into the brain. By hitting the olfactory bulb on an almost daily basis, they cause inflammation and damage to the nerves directly, slowly wearing them away. Think of it almost like coastal erosion, where sandy, salty waves eat away at the shoreline; substitute waves with pollution-filled air, and shoreline with our nasal nerves.

Unsurprisingly then, anosmia disproportionately affects older people, whose noses have been assaulted by air pollution for longer. More surprisingly, none of the Johns Hopkins patients lived in areas with excessively high air pollution – many lived in leafy areas of Maryland, and none were from pollution hotspots. It suggests that even low levels of air pollution could cause problems over a long enough period.

A similar recent study has separately been carried out by the Aging Research Center at the Karolinska Institute, in Stockholm. Postdoctoral researcher Ingrid Ekström was puzzled by findings from the early 2000s that showed more than 5.8% of adults in Sweden had anosmia, and 19.1% had some form of olfactory dysfunction. Knowing that anosmia rates were higher in older people, Ekström and colleagues designed a study using 3,363 participants aged 60 and over. Participants were given strongly scented "sniffing sticks" carrying 16 common household smells and received a score based on the number they could correctly identify. As with the Baltimore study, the participants' home addresses were mapped and analysed according to municipal air pollution readings. And as in Baltimore, there was a strong correlation between higher pollution levels and poorer smelling ability.
"They have been subjected to pollution throughout their lives," says Ekström. "We don't know exactly when their olfactory impairments started to decline.” But she is “confident” that long-term exposure to pollution was the cause, even at low levels. In 2021, The World Health Organization (WHO) changed its health-based guidelines for a maximum annual average exposure to PM2.5, reducing it from 10 to 5 micrograms per cubic metre (µg/m3). Stockholm, Sweden's capital, is one of the few major cities in the world that manages to stay below this level with an annual average of 4.2µg/m3. By comparison, Islamabad, in Pakistan, has an annual average PM2.5 levels of 41.1µg/m3 while it is 42.3µg/m3 in Bloemfontein, South Africa. This arguably makes the Stockholm findings even more relevant – if even Stockholm residents are having their senses eroded by low levels of pollution, then how much worse will it be in regions with high levels? It is also a reminder of how highly localised pollution can be, both outdoors and indoors. People's cooking methods and heating choices may be exposing them to higher levels than their neighbours. (Listen to learn how effective air purifiers are.) Meanwhile modern combustion methods from vehicle engines to the latest 'eco' wood stoves can create nanoparticles so fine that they barely register on PM2.5 readings, but are small enough to directly enter our bloodstream and brain tissue. Air pollution is known to cause a quarter of all deaths from heart disease and stroke, and nearly half of all deaths from lung disease. By comparison, perhaps, our sense of smell seems low down the list of concerns. But both Ramanathan and Ekström warn that we underestimate the importance of smell at our peril. Ekström's research speciality is dementia. And anosmia may be an early warning sign. "With dementia and especially with Alzheimer's Disease, we assume that [the] disease progression is actually starting several decades before we can see the first symptoms," says Ekström. Anosmia is one of the first symptoms. By the time Alzheimer's is diagnosed, "almost 90% of patients have anosmia", says Ekström. The exact link remains unknown, but one theory is that "environmental toxins enter the central nervous system via the olfactory bulb and cause damage, triggering this cascade effect that may ultimately lead to neuro-degeneration". The Maher Lancaster study, for example, found that metal nanoparticles were directly associated with the formation of 'senile plaques' – lesions on the brain and one of the neuropathological hallmarks of Alzheimer's disease and dementia. Despite such strong links, Ekström argues it is only recently that researchers have "opened their eyes to the olfactory sense" and its role in disease. Losing our sense of smell may seem trivial next to other health effects of air pollution, but this misses the important role it plays in our lives (Credit: Md Manik/Getty Images) Loss of smell has been linked to increased likelihood of depression and anxiety in various studies, and is known to play a role in obesity, weight loss, malnutrition and cases of food poisoning. The reasons are perhaps obvious – our noses play a key role in our experience of the world around us, affect our ability to taste food and help us avoid meals that have gone off. A poor sense of smell may mean that sufferers are likely to seek out stronger tasting food, which is very often salty and fatty. 
By contrast, a total loss of smell can put people off food altogether, stripping the enjoyment from it and ultimately leaving them underweight – a particular problem amongst the elderly. Ramanathan has seen many patients who "can't taste food, can't smell their wine, the things that gave them pleasure in life". He recalls one patient who was a professional sommelier, for whom developing anosmia was both personally and professionally devastating.

Smell and taste are also linked to memory. "People don't remember what that pastry looked like that they ate in France, but they remember what the shop smelled like", says Ramanathan. Re-experiencing a particular smell can transport our memories straight back to that moment in the pastry shop. This raises the question – albeit one yet to be properly studied – of whether the inverse could also be true, and whether no longer being able to smell could impair our ability to create new memories in the same way.

Anosmia could also be an indicator of other, wider health issues. Numerous studies, typically of smokers – for whom smell impairment persists even 15 years after quitting – have shown that olfactory dysfunction is significantly associated with increased mortality among older adults. One particular study even hypothesised that anosmia could be used as a predictor of greater likelihood of dying – from any cause – among older adults over a five-year period. In a study of 3,005 US adults aged 57 to 85, those with anosmia were found to be four times more likely to die within five years than their peers. The researchers concluded that a deteriorating sense of smell could be a "bellwether" for the accumulation of toxins from the environment or slowed regeneration of cells.

So, should we care that air pollution – to which we are all exposed – is impairing our sense of smell and causing anosmia? Clearly, the answer lies somewhere between "yes" and "hell yes". Ramanathan, for whom traffic pollution and waste incinerators top the local pollution concerns in Baltimore, says "air quality matters". "I think we need tight regulations and control," he says. Many people may not even realise what pollution they are exposed to, so they rely on politicians to regulate it and protect the surrounding populations.

"This is one of many [pollution-related] conditions," adds Ramanathan. "But this is kind of a special one, right? If you have COPD [chronic obstructive pulmonary disease] you could probably still enjoy your glass of wine. But not with this one."

Ekström says tackling air pollution is not simple. World events can also cause unexpected shifts in behaviour – Ekström mentions anecdotally that winter wood burning has been on the rise in Stockholm as worried residents wean themselves off Russian gas. But even the every-day, low-level air pollution we are exposed to "should be taken more seriously", she says. And what's more, "olfactory impairment should definitely be taken more seriously", too.

* Tim Smedley is author of Clearing The Air: the Beginning and the End of Air Pollution, published by Bloomsbury.
Environmental Science
It turns out that there were five votes on the Supreme Court to support the notion that the Clean Water Act doesn't say what it clearly says and doesn't mean what it clearly means. The carefully manufactured conservative majority cracked enough to let Justice Brett Kavanaugh, of all people, sneak away. But, on Thursday, Justice Sam Alito was able to corral the other five behind a ludicrous opinion with no basis in any law and even less basis in environmental science.

The case was Sackett v. EPA, and it dealt with the EPA's power to regulate not only the country's bodies of water, but the wetlands that are, as the Clean Water Act clearly states, adjacent to them. These wetlands play a vital role in the survival of the various rivers and lakes to which they are adjacent. They also are nature's own flood-control devices. Hence, the EPA's clear intent to protect the wetlands as well as the main bodies of water. Alas, the authors of the law didn't bank on Alito's gift for the language arts. From The New York Times:

Writing for five justices, Justice Samuel A. Alito Jr. said that the Clean Water Act does not allow the agency to regulate discharges into wetlands near bodies of water unless they have "a continuous surface connection" to those waters. The decision was a second major blow to the E.P.A.'s authority and to the power of administrative agencies generally. Last year, the court limited the E.P.A.'s power to address climate change under the Clean Air Act.

For the benefit of the strict constructionists in our audience, it should be noted that Alito simply dispensed with the word "adjacent" and that the phrase "continuous surface connection" appears nowhere in the Clean Water Act. Alito simply doesn't like the law, so he refashions it to his own liking. It is spectacularly dishonest even by Alito's standards, which are considerable. Which was obvious even to Kavanaugh, who wrote in a concurrence:

"By narrowing the act's coverage of wetlands to only adjoining wetlands, the court's new test will leave some long-regulated adjacent wetlands no longer covered by the Clean Water Act, with significant repercussions for water quality and flood control throughout the United States."

Justice Elena Kagan concurred with Kavanaugh's concurrence, linking it to another, earlier industry-friendly decision that tap-danced on the Clean Air Act:

"...the majority's non-textualism barred the E.P.A. from addressing climate change by curbing power plant emissions in the most effective way. Here, that method prevents the E.P.A. from keeping our country's waters clean by regulating adjacent wetlands. The vice in both instances is the same: the court's appointment of itself as the national decision maker on environmental policy."

As the Times explains, the decision was nominally unanimous, with all the justices agreeing that the homeowners who brought the case should not have been subject to the agency's oversight because the wetlands on their property were not subject to regulation in any event.

This bit of dark legerdemain is deeply ominous, Steve Bannon's "destruction of the administrative state" gussied up in judicial finery. The final blow could fall in the next term. In May, the Court agreed to hear a case called Loper Bright Enterprises, et al. v. Raimondo, Secretary of Commerce, et al., in which the power of any and all federal agencies to regulate pretty much anything at all is on the line. There clearly are five votes right now for a return to the status quo ante of the Roosevelt administration.
The Theodore Roosevelt administration, that is.

Charles P. Pierce is the author of four books, most recently Idiot America, and has been a working journalist since 1976. He lives near Boston and has three children.
Environmental Science
Shaggy-haired, tusked pigs roam free in the woods of Germany and Austria. Although these game animals look fine, some contain radioactive cesium at levels that render their meat unsafe to eat. Previously, scientists hypothesized that the contamination stemmed from the 1986 Chernobyl nuclear power plant accident. But now, researchers in ACS' Environmental Science & Technology report that nuclear weapon fallout from 60 to 80 years ago also contributes significantly to the wild boars' persistent radioactivity.

Radioactive cesium, a byproduct of nuclear weapons explosions and nuclear energy production, poses risks to public health when it enters the environment. And the environment across Europe got a large pulse of radioactive cesium contamination following the Chernobyl power plant accident 37 years ago. Most of that radioactivity originated from cesium-137, but a much longer-lived form, called cesium-135, can also be produced during nuclear fission. Over time, cesium-137 has declined in most game animals, but wild boars' radioactivity levels haven't changed substantially. Their meat continues to exceed regulatory limits for consumption, in some places leading to less hunting and consequently contributing to the overpopulation of the animals in Europe. Because the radioactive cesium levels haven't changed as expected, Georg Steinhauser, Bin Feng and colleagues wanted to investigate the amount and origin of that contamination in wild boars from Germany.

The researchers worked with hunters to collect wild boar meat from across southern Germany and then measured the samples' cesium-137 levels with a gamma-ray detector. To determine the origin of the radioactivity, the team compared the amount of cesium-135 to cesium-137 with a sophisticated mass spectrometer. Previous studies showed that this ratio clearly indicates sources: a high ratio points to nuclear weapons explosions, whereas a low ratio implicates nuclear reactors.

The team observed that 88% of the 48 meat samples exceeded German regulatory limits for radioactive cesium in food. For the samples with elevated levels, the researchers calculated the ratios of cesium-135 to cesium-137 and found that nuclear weapons testing supplied between 10 and 68% of the contamination. In some samples, the amount of cesium from weapons alone exceeded regulatory limits. The researchers propose that the mid-20th-century weapons tests were an underappreciated source of radioactive cesium to German soil, which was also unevenly impacted by the Chernobyl accident. Contamination from both sources has been taken up by the wild boars' food, such as underground truffles, contributing to the animals' persistent radioactivity. The researchers say that future nuclear accidents or explosions could worsen these animals' contamination, potentially impacting food safety for decades, given the persistence this study demonstrates.

The authors acknowledge funding from the Bavarian Academy for Hunting and Nature and an Alexander von Humboldt Foundation Postdoctoral Fellowship. The paper's abstract will be available on Aug. 30 at 8 a.m. Eastern time here: http://pubs.acs.org/doi/abs/10.1021/acs.est.3c03565
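The attribution step rests on two-endmember mixing: if pure weapons fallout and pure reactor fallout have known cesium-135/cesium-137 ratios, a sample's measured ratio pins down what share of its cesium-137 came from weapons. A hedged sketch follows; the endmember ratios below are illustrative placeholders, not the study's calibrated values.

```python
# Two-source mixing model for the isotope-ratio attribution described above.
# With f = fraction of a sample's 137Cs from weapons fallout:
#   R_measured = f * R_weapons + (1 - f) * R_reactor
#   =>  f = (R_measured - R_reactor) / (R_weapons - R_reactor)

def weapons_fraction(r_measured: float, r_weapons: float, r_reactor: float) -> float:
    """Fraction of 137Cs attributable to weapons fallout, clamped to [0, 1]."""
    f = (r_measured - r_reactor) / (r_weapons - r_reactor)
    return min(max(f, 0.0), 1.0)

# Hypothetical endmembers and a hypothetical boar-meat measurement:
f = weapons_fraction(r_measured=0.9, r_weapons=1.9, r_reactor=0.5)
print(f"Weapons-fallout share of 137Cs: {f:.0%}")  # ~29% with these inputs
```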
Journal: Environmental Science & Technology. Article title: "Disproportionately High Contributions of 60-Year-Old Weapons-137Cs Explain the Persistence of Radioactive Contamination in Bavarian Wild Boars." Article publication date: 30-Aug-2023.
Environmental Science
Palaeontologists at University College Cork (UCC) in Ireland have discovered X-ray evidence of proteins in fossil feathers that sheds new light on feather evolution. Previous studies suggested that ancient feathers had a different composition to the feathers of birds today. The new research, however, reveals that the protein composition of modern-day feathers was also present in the feathers of dinosaurs and early birds, confirming that the chemistry of feathers originated much earlier than previously thought.

The research, published today in Nature Ecology and Evolution, was led by palaeontologists Dr Tiffany Slater and Prof. Maria McNamara of UCC's School of Biological, Earth, and Environmental Science, who teamed with scientists based at Linyi University (China) and the Stanford Synchrotron Radiation Lightsource (USA). The team analysed 125-million-year-old feathers from the dinosaur Sinornithosaurus and the early bird Confuciusornis from China, plus a 50-million-year-old feather from the USA.

"It's really exciting to discover new similarities between dinosaurs and birds," Dr Slater says. "To do this, we developed a new method to detect traces of ancient feather proteins. Using X-rays and infrared light we found that feathers from the dinosaur Sinornithosaurus contained lots of beta-proteins, just like feathers of birds today."

To help interpret the chemical signals preserved in the fossil feathers, the team also ran experiments to understand how feather proteins break down during the fossilization process. "Modern bird feathers are rich in beta-proteins that help strengthen feathers for flight," Dr Slater says. "Previous tests on dinosaur feathers, though, found mostly alpha-proteins. Our experiments can now explain this weird chemistry as the result of protein degradation during the fossilization process. So although some fossil feathers do preserve traces of the original beta-proteins, other fossil feathers are damaged and tell us a false narrative about feather evolution."

This research helps answer a long-standing debate about whether feather proteins, and proteins in general, can be preserved in deep time. Prof. Maria McNamara, senior author on the study, said: "Traces of ancient biomolecules can clearly survive for millions of years, but you can't read the fossil record literally because even seemingly well-preserved fossil tissues have been cooked and squashed during fossilization. We're developing new tools to understand what happens during fossilization and unlock the chemical secrets of fossils. This will give us exciting new insights into the evolution of important tissues and their biomolecules."
Environmental Science
Toilet paper is an unexpected source of PFAS in wastewater, study says

Wastewater can provide clues about a community's infectious disease status, and even its prescription and illicit drug use. But looking at sewage also provides information on persistent and potentially harmful compounds, such as per- and polyfluoroalkyl substances (PFAS), that get released into the environment. Now, researchers publishing in Environmental Science & Technology Letters report an unexpected source of these substances in wastewater systems—toilet paper.

PFAS have been detected in many personal care products, such as cosmetics and cleansers, that people use every day and then wash down the drain. But not many researchers have considered whether toilet paper, which also ends up in wastewater, could be a source of the chemicals. Some paper manufacturers add PFAS when converting wood into pulp, and residues can be left behind to contaminate the final paper product. In addition, recycled toilet paper could be made with fibers that come from materials containing PFAS. So, Timothy Townsend and colleagues wanted to assess this potential input to wastewater systems, and test toilet paper and sewage for these compounds.

The researchers gathered toilet paper rolls sold in North, South and Central America; Africa; and Western Europe, and collected sewage sludge samples from U.S. wastewater treatment plants. Then they extracted PFAS from the paper and sludge solids and analyzed them for 34 compounds. The primary PFAS detected were disubstituted polyfluoroalkyl phosphates (diPAPs)—compounds that can convert to more stable PFAS such as perfluorooctanoic acid, which is potentially carcinogenic. Specifically, 6:2 diPAP was the most abundant in both types of samples, but it was present at low levels, in the parts-per-billion range.

Then, the team combined their results with data from other studies that included measurements of PFAS levels in sewage and per capita toilet paper use in various countries. They calculated that toilet paper contributed about 4% of the 6:2 diPAP in sewage in the U.S. and Canada, 35% in Sweden and up to 89% in France. Despite the fact that North Americans use more toilet paper than people living in many other countries, the calculated percentages suggest that most PFAS enter U.S. wastewater systems from cosmetics, textiles, food packaging or other sources, the researchers say. They add that this study identifies toilet paper as a source of PFAS in wastewater treatment systems, and in some places it can be a major source.

More information: Jake T. Thompson et al, Per- and Polyfluoroalkyl Substances in Toilet Paper and the Impact on Wastewater Systems, Environmental Science & Technology Letters (2023). DOI: 10.1021/acs.estlett.3c00094
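The country-level percentages come from a loading comparison: the diPAP flushed in with toilet paper (per-capita paper use times the measured paper concentration) divided by the per-capita diPAP load measured in sewage. A sketch of that arithmetic, with hypothetical inputs chosen only so the output lands near the U.S. figure:

```python
# Sketch of the contribution estimate described above. All numbers are
# hypothetical placeholders, not the study's measured values.

def toilet_paper_share(paper_kg_per_person_yr: float,
                       dipap_ng_per_g_paper: float,
                       sewage_dipap_ng_per_person_yr: float) -> float:
    """Fraction of sewage 6:2 diPAP attributable to toilet paper."""
    load_from_paper = paper_kg_per_person_yr * 1000.0 * dipap_ng_per_g_paper
    return load_from_paper / sewage_dipap_ng_per_person_yr

share = toilet_paper_share(paper_kg_per_person_yr=12.0,        # assumed use
                           dipap_ng_per_g_paper=3.0,           # assumed ppb level
                           sewage_dipap_ng_per_person_yr=9.0e5)
print(f"Toilet paper share of sewage 6:2 diPAP: {share:.0%}")  # ~4% here
```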
Environmental Science
By Mischa Dijkstra, Frontiers science writer

A new review shows that the soil in the cities of the ancient Maya is heavily polluted with mercury. As vessels filled with mercury and objects painted with cinnabar have been found at many Maya sites, the authors conclude that the Maya were heavy users of mercury and mercury-containing products. This resulted in severe and dangerous pollution in their day, which persists even today.

The cities of the ancient Maya in Mesoamerica never fail to impress. But beneath the soil surface, an unexpected danger lurks there: mercury pollution. In a review article in Frontiers in Environmental Science, researchers conclude that this pollution isn't modern: it's due to the frequent use of mercury and mercury-containing products by the Maya of the Classic Period, between 250 and 1100 CE. This pollution is in places so heavy that even today, it poses a potential health hazard for unwary archeologists.

Lead author Dr Duncan Cook, an associate professor of Geography at the Australian Catholic University, said: "Mercury pollution in the environment is usually found in contemporary urban areas and industrial landscapes. Discovering mercury buried deep in soils and sediments in ancient Maya cities is difficult to explain, until we begin to consider the archeology of the region, which tells us that the Maya were using mercury for centuries."

Ancient anthropogenic pollution

For the first time, Cook and colleagues here reviewed all data on mercury concentrations in soil and sediments at archeological sites across the ancient Maya world. They show that at sites from the Classic Period for which measurements are available – Chunchumil in today's Mexico; Marco Gonzales, Chan b'i, and Actuncan in Belize; La Corona, Tikal, Petén Itzá, Piedras Negras, and Cancuén in Guatemala; Palmarejo in Honduras; and Cerén, a Mesoamerican 'Pompeii', in El Salvador – mercury pollution is detectable everywhere except at Chan b'i. Concentrations range from 0.016 ppm at Actuncan to an extraordinary 17.16 ppm at Tikal. For comparison, the Toxic Effect Threshold (TET) of mercury in sediments is defined as 1 ppm.

Heavy users of mercury

What caused this prehistoric mercury pollution? The authors highlight that sealed vessels filled with 'elemental' (ie, liquid) mercury have been found at several Maya sites, for example Quiriqua in Guatemala, El Paraíso in Honduras, and the former multi-ethnic megacity Teotihuacan in Central Mexico. Elsewhere in the Maya region, archeologists have found objects painted with mercury-containing paints, mainly made from the mineral cinnabar. The authors conclude that the ancient Maya frequently used cinnabar and mercury-containing paints and powders for decoration. This mercury could then have leached from patios, floor areas, walls, and ceramics, and subsequently spread into the soil and water.

"For the Maya, objects could contain ch'ulel, or soul-force, which resided in blood. Hence, the brilliant red pigment of cinnabar was an invaluable and sacred substance, but unbeknownst to them it was also deadly and its legacy persists in soils and sediments around ancient Maya sites," said co-author Dr Nicholas Dunning, a professor at the University of Cincinnati.
As mercury is rare in the limestone that underlies much of the Maya region, the authors speculate that the elemental mercury and cinnabar found at Maya sites could have been originally mined from known deposits on the northern and southern confines of the ancient Maya world, and imported to the cities by traders.

Health hazards and the 'Mayacene'

All this mercury would have posed a health hazard for the ancient Maya: the effects of chronic mercury poisoning include damage to the central nervous system, kidneys, and liver, as well as tremors, impaired vision and hearing, paralysis, and mental health problems. It's perhaps significant that one of the last Maya rulers of Tikal, Dark Sun, who ruled around 810 CE, is depicted in frescoes as pathologically obese. Obesity is a known effect of metabolic syndrome, which can be caused by chronic mercury poisoning. More research is needed to determine whether mercury exposure played a role in larger sociocultural change and trends in the Maya world, such as those towards the end of the Classic Period.

Second author Dr Tim Beach, a professor at the University of Texas at Austin, said: "We conclude that even the ancient Maya, who barely used metals, caused mercury concentrations to be greatly elevated in their environment. This result is yet more evidence that just like we live today in the 'Anthropocene', there also was a 'Maya anthropocene' or 'Mayacene'. Metal contamination seems to have been [an] effect of human activity through history."
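The review's headline comparison is easy to reproduce: site concentrations set against the 1 ppm Toxic Effect Threshold. In the small sketch below, only the two endpoint values quoted earlier (Actuncan and Tikal) and the threshold itself come from the article.

```python
# Flag reviewed sites against the Toxic Effect Threshold (TET) for mercury
# in sediments. Site values are the article's reported range endpoints.

TET_PPM = 1.0
site_hg_ppm = {"Actuncan": 0.016, "Tikal": 17.16}

for site, ppm in site_hg_ppm.items():
    status = "EXCEEDS" if ppm > TET_PPM else "below"
    print(f"{site}: {ppm} ppm mercury, {status} the {TET_PPM} ppm TET "
          f"({ppm / TET_PPM:.2f}x)")
```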
Environmental Science
As Amherst, Mass., writes its rules for where to put solar, some local environmentalists worry about farmland and forests getting lost to solar projects. Other local environmentalists worry that overly restrictive solar rules would limit the town's ability to fight climate change.

Janet McGowan and Steven Roof both live in the town of Amherst, Mass., and they have a lot in common. They live on the same road, in old houses built amid rolling farmland and maple and birch trees. Both care deeply about the environment and understand that climate change poses a profound threat to people and ecosystems. But they are split over how their community should rein in the emissions making the planet hotter.

Amherst is writing its bylaws for where and how to locate solar projects in the town, and McGowan, a mediator and lawyer who is on the working group helping to write those bylaws, has concerns. McGowan says she isn't anti-solar, but like many other residents, she worries about the town's farmland and forests getting converted into solar projects. "To me, it just seems viscerally wrong and counterintuitive," she says. "It just seems really odd to me to cut down a forest to put up a solar facility."

Roof is overcome with a different worry. A professor of earth and environmental science at Hampshire College, he travels with students to the Arctic to study the effects of global warming. He worries that local fears about conserving farms and forests could lead Amherst to enact overly restrictive regulations that limit the town's role in tackling climate change. "We're at a tipping point," Roof says. "If we don't turn to renewable energy and stop burning fossil fuels, in 10 or 15 years our ecosystems are going to be ravaged from climate change, and there's going to be much greater costs."

McGowan and Roof are like lots of neighbors across the U.S. who are increasingly at odds over where to locate larger solar projects. Sometimes these disagreements are between people who deny climate science and those who accept that human-caused climate change is happening. But these days, the disputes are often between people like McGowan and Roof, who both accept climate science. "It's environmentalist versus environmentalist," McGowan says.

For some people, addressing climate change is paramount, and building large renewable energy projects is worth certain trade-offs because of the greater harms that loom if our grid continues to be dominated by planet-heating fossil fuels. For others, conserving land, habitats or biodiversity is their core value, and they push for larger renewable projects to be built somewhere else. A new report from the Sabin Center for Climate Change Law at Columbia University identified more than a dozen solar projects that encountered opposition from local conservation groups and environmentalists. Many projects were ultimately delayed, canceled or significantly reduced in size as a result.
The sometimes-difficult choices over solar stem from the fact that society has delayed climate action too long to save everything we want to save, says Michael Gerrard, environmental law professor at Columbia University. While some disagreements between neighbors over what to conserve are inevitable, he says these conflicts can delay renewable energy projects just as climate science demands their urgent adoption. "There's some very tough trade-offs that have to be involved," Gerrard says. "We are now in the midst of the sixth extinction in geologic time. It's going to get much worse as the climate worsens. If we don't act in a very strong way, we're going to lose far more species. We have to make some sacrifices now in order to avoid far greater losses in the years and decades to come."

Conservationists worry about maintaining the integrity of habitats

Among conservationists in Amherst there isn't a lot of disagreement that the climate emergency is real, says Stephanie Ciccarello, sustainability director for the town. "Everybody seems to be on the same page that yes, climate change is an issue. It's an emergency. We have to do something to address it. But at the same time, how we get to that point, there's some — maybe, I would say — differing opinions," Ciccarello says.

The town of Amherst has a goal of carbon neutrality, or getting planet-heating emissions to zero, by 2050. That will require renewables, including solar, but the question is where. Proposals for large solar, especially near homes, have historically faced opposition in Amherst, Ciccarello says. In 2016, a solar array on top of a landfill got scrapped because the site was identified as habitat of the endangered grasshopper sparrow. Ultimately, the town built a solar array on a different landfill, but it took seven years.

Today, people in the town still worry about solar disturbing habitats for local animals, as well as recreation areas, says longtime resident Rob Kusner, a professor of mathematics at UMass Amherst who considers himself a conservationist. "One can find oneself in a forest without anyone else around other than moose and deer and the sounds of birds," he says. "The top concern is [solar] interrupts the integrity of the forest."

Amherst already permanently conserves about 30% of its forest, agricultural and open space lands. The town's Energy & Climate Action Committee estimates the amount of land needed for solar on the ground would be comparatively small — about 1%-2% of the town's total land area.

Kusner also fears that solar built in the town's forests could cause erosion and contaminate drinking water supplies. He points to a nearby town, Williamsburg, Mass., where a solar developer paid more than $1 million to settle allegations that it damaged local wetlands and polluted part of a river when constructing a solar project in 2018. That project was built on a former sand and gravel pit. But most U.S. solar projects in forested areas do not cause erosion and pollution of the water supply, says Jordan Macknick, lead energy-water-land analyst for the National Renewable Energy Laboratory, a research organization primarily funded by the Department of Energy.
"Most solar projects that are going to be built are not going to be on those steep hillsides. They're not going to be on former sand and gravel pits, which might be a little bit more vulnerable," Macknick says. He adds: "We haven't seen this happening even in other projects that we've seen developed near wetlands or on wetlands. Or even projects that have been developed on former forest land." Debating how trees and solar reduce planet-heating emissions McGowan also worries about cutting down forests, because trees store carbon. "It's holding carbon in the wood, but also there's a ton of carbon being held in the soil, in the ground, and they're not even sure how much is in the roots," she says. But when it comes to reducing net greenhouse gas emissions, solar typically reduces more emissions than forests, according to Jonathan Thompson, research director at Harvard Forest, a department of Harvard University. The town of Amherst recently invited Thompson to a Zoom meeting to talk about the benefits of solar compared to forests, and forests compared to solar. Thompson told the group that forests provide key benefits for biodiversity, water systems and recreation. But to the surprise of some in the meeting, Ciccarello says, the forest ecologist gave strong arguments in favor of solar, strictly from an emissions-reduction perspective. That's because putting solar on the grid can displace fossil fuel plants that often make more emissions than forests can absorb and store. Andrew Caballero-Reynolds/AFP via Getty Images toggle caption Some environmentalists argue for putting solar on rooftops to reduce impacts on habitats and biodiversity. But energy experts say that while rooftop solar will be part of the solution, we will also need larger solar on the ground to meet climate goals. And rooftop solar often costs more than solar on the ground. Andrew Caballero-Reynolds/AFP via Getty Images Some environmentalists argue for putting solar on rooftops to reduce impacts on habitats and biodiversity. But energy experts say that while rooftop solar will be part of the solution, we will also need larger solar on the ground to meet climate goals. And rooftop solar often costs more than solar on the ground. Andrew Caballero-Reynolds/AFP via Getty Images Conservationists often want rooftop solar. Will that be enough? Many community members in Amherst who favor conservation say they don't oppose solar. But they want it on residential and commercial rooftops and parking lots, arguing that these spaces have already been taken out of use as fields and forests. "I would think most people would instinctively think that you'd put it on the built environment first, and then reach for the farms and the forest second or last," McGowan says. "To cut it all down to put up a solar array makes zero sense to me when you can just put it over Target." But while rooftop solar will be part of the climate solution, it will not be enough to meet clean energy needs going forward, says Jesse Jenkins, professor of engineering at Princeton University who has studied scenarios to reduce emissions. He says for the U.S. to reach its climate goals, the country will need to take lots of land out of commission — at least temporarily — for new solar projects. That's because the country isn't just going to need renewables to replace fossil fuel power from coal and gas plants. 
The energy transition is going to mean a lot more new demand for electricity — because of the growing adoption of things like electric vehicles instead of combustion cars, and electric heat pumps instead of gas furnaces, Jenkins says. "What our research shows is that we need to be deploying tens of gigawatts of new solar every year for the next few years, scaling up to hundreds of gigawatts by 2030," he says. "We're not going to get there with rooftops alone." (A gigawatt is enough electricity to power about 750,000 homes, according to the Energy Information Administration.) Another obstacle for rooftop solar is cost. Solar power created from rooftops or parking lot canopies is often a lot more expensive than power from the kind of larger solar projects on the ground that towns like Amherst are considering, says Dwayne Breger, director of the Clean Energy Extension at UMass Amherst and, along with McGowan, a member of the group writing the town's solar bylaws. (He's also on the town's Energy & Climate Action Committee, along with Roof.) According to a recent report from the financial services firm Lazard, installing and operating residential rooftop solar can be about 70% more costly than community and commercial scale solar because of factors like economies of scale and a lower ability to orient the panels toward the sun. Because solar has a much larger land footprint than fossil fuel plants and because of projections of greater electricity demand, Breger thinks his community needs to realize the energy transition will require more land for solar. "People don't really have a good grasp of how big that is," he says. "In my mind, quote-unquote 'sacrificing' a small percentage of our open land to solar is a small price to pay for contributing what we — the town, the commonwealth and the nation, the world — need to address climate change." Engaging conservationists early on can sometimes work Whether the U.S. reaches its climate goals could come down to places like Amherst, where environmentalists don't always agree on how to deploy renewables. Jason Albritton, program director for North America climate change mitigation at the Nature Conservancy, a conservation nonprofit, says it is possible to have better outcomes in potential land conflicts by engaging conservationists early on and trying to reduce the overall footprint of renewable projects. In a recent study, the Nature Conservancy found that one solution for reducing land impacts is "agrivoltaics," or solar projects that are elevated high enough to allow for working farmland or animal grazing underneath the panels. Developers can also plant pollinator-friendly flowers under solar panels to help endangered bee populations. Albritton says his study calls for more community engagement, especially with conservationists. Without that buy-in, he says, "we just won't reach our climate goals." In Amherst, the town recently completed a survey to get the community's thoughts and perspectives on solar. About 90% of the over 500 respondents ranked putting solar on parking lot canopies or rooftops as their top choices for solar development.
More than a quarter of respondents wrote that conserving forest was one of their top concerns; about a fifth said conserving farmlands was one of their top concerns. Breger takes some comfort in the fact that last year the town council considered a moratorium on all solar development, but voted against it. Instead, Amherst decided to make new solar bylaws and a map outlining where solar can go. Breger and McGowan say they hope to have the drafts of the bylaws done by the end of the summer. Breger hopes the town will not write severely restrictive regulations like some neighboring communities. "We want to be careful with our solar development," he says, "but not be over restrictive. We want to address our climate emergency."
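For readers who want to sanity-check the capacity figures quoted above, here is a minimal back-of-the-envelope sketch. The average household consumption (about 10,700 kWh per year) and the illustrative capacity factors are assumptions of this sketch, not figures reported in the article.

```python
# Rough check of the "1 gigawatt powers about 750,000 homes" conversion cited above.
# Assumptions (not from the article): ~10,700 kWh/year per average U.S. household,
# and illustrative capacity factors for different kinds of plants.

HOURS_PER_YEAR = 8760
HOME_KWH_PER_YEAR = 10_700  # assumed average household electricity use

def homes_powered(capacity_gw: float, capacity_factor: float) -> int:
    """Number of homes a plant of this capacity can supply on an annual-energy basis."""
    kwh_per_year = capacity_gw * 1e6 * HOURS_PER_YEAR * capacity_factor  # GW -> kW, then kWh
    return round(kwh_per_year / HOME_KWH_PER_YEAR)

# A plant running nearly all the time lands close to the EIA's ~750,000 homes.
print(homes_powered(1.0, 0.90))  # ~737,000 homes
# A solar farm's lower capacity factor supplies fewer homes per installed gigawatt.
print(homes_powered(1.0, 0.25))  # ~205,000 homes
```

The only point of the sketch is that "homes per gigawatt" depends on how often a plant actually generates, which is part of why the buildout Jenkins describes runs to hundreds of gigawatts of installed solar rather than a handful.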
Environmental Science
Manganese in Central Valley water threatens fetuses and children Water in California's Central Valley contains enough manganese to cause cognitive disabilities and motor control issues in children, and Parkinson's-like symptoms in adults. A naturally occurring metal, manganese is found in water supplies throughout the world. It is regulated as a primary contaminant in many Southeast Asian countries where the climate causes it to leach into groundwater. However, in the U.S. it is regulated only as a secondary contaminant, meaning no maximum level is enforced. A new UC Riverside-led study shows that, among Central Valley communities, the highest concentrations of manganese are in private, untreated well water systems. However, the researchers also found it in public water systems at concentrations higher than those that studies have shown can cause adverse health effects. The study, published in the journal Environmental Science & Technology, not only measured levels of manganese in Central Valley water supplies, but also mapped the highest concentration areas according to annual income levels. Overall, the research team estimates nearly half of all domestic well water users in the Central Valley live in disadvantaged communities, as defined by annual income. Within this population, nearly 89% have a high likelihood of accessing water that is highly contaminated with manganese. "It is a relatively small number of people, compared to the total population of the state, who are getting the tainted water. But for them, the health risks are high," said Samantha Ying, UCR soil scientist and principal study investigator. "These people are particularly concentrated in disadvantaged communities, so if they wanted to monitor and treat the water, they would have a hard time doing so," Ying said. Point-of-use treatment options range from oxidation and precipitation filters to water softeners, chlorination, and reverse osmosis systems. But devices for monitoring water quality can cost up to $400 annually, and treatments for manganese-tainted water are just as expensive. "It is possible to purchase filters for manganese, but a lot of people cannot afford them. We are hoping people in these communities can be subsidized to buy treatment options," Ying said. Previously, the research team found that manganese-contaminated groundwater tends to occur at relatively shallow depths, compared to arsenic. They wondered if digging deeper wells would avoid the manganese contamination. Unfortunately, that strategy is unlikely to be effective. "Using existing groundwater model predictions of manganese concentrations at deeper depths did not change the number of wells likely to be contaminated," Ying said. The conditions that cause arsenic and manganese to leach are similar, so they tend to show up in groundwater in tandem. Arsenic has long been regulated as a primary contaminant in the U.S. "Wells are labeled unsafe if they contain arsenic, but not if they contain manganese," Ying said. "Thus, the number of wells considered safe may be greatly overestimated." Furthermore, the researchers used a benchmark of 300 parts per billion of manganese to assess water quality. This is a level of manganese contamination that some studies have associated with neurological development issues, particularly for fetuses and infants during early growth periods. It is likely, though, that adverse effects can occur at lower levels.
"New studies from Canada, where manganese is now a primary contaminant, show there may be effects at 100 parts per billion," Ying said. "We were being conservative at 300." This study focused on the Central Valley in part because the conditions that cause manganese to move from aquifer materials into water are so prevalent there. It is likely that drinking water from wells in other parts of the state is similarly tainted. Over 1.3 million Californians rely on unmonitored private wells. "The population being exposed is much bigger than we might think. There are a lot of communities statewide drinking from private wells," Ying said. More information: Miranda L. Aiken et al, Disparities in Drinking Water Manganese Concentrations in Domestic Wells and Community Water Systems in the Central Valley, CA, USA, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c08548 Journal information: Environmental Science & Technology Provided by University of California - Riverside
Environmental Science
Scientists show how Acetobacterium helps break down butadiene 1,3-butadiene (BD) is widely used in the production of rubber, thermoplastic resins and nylon. Long-term exposure to BD-contaminated environments can cause eye pain, blurred vision, coughing and drowsiness, and increase the incidence of leukemia. The International Agency for Research on Cancer has classified BD as a Group 1 human carcinogen. Therefore, elucidating the process and mechanism of microbial transformation of BD may help to properly assess its environmental risk. Yang Yi and Yan Jun from the Institute of Applied Ecology of the Chinese Academy of Sciences have established an anaerobic BD biotransformation microcosm using river sediment as an inoculation source. They found that in the microcosm BD underwent a hydrogenation reaction and was rapidly converted to 1-butene, with concomitant acetate production and a growing population of Acetobacterium. The researchers assembled a 94% complete Acetobacterium genome through metagenomic sequencing and bioinformatics analysis, and found that the dominant Acetobacterium population was a new strain of the species A. wieringae, designated strain N. The strain N genome contains several genes encoding flavoprotein oxidoreductases and short-chain dehydrogenases/reductases, and the researchers believe these genes may be involved in the anaerobic biohydrogenation of BD. This study demonstrates for the first time the importance of Acetobacterium in BD conversion. In addition, the researchers suggest that mining the hydrogenase-related genes may provide a new route for the development of efficient biocatalysts for the industrial conversion of BD to 1-butene. This study was published in Environmental Science & Technology. More information: Yi Yang et al, Biohydrogenation of 1,3-Butadiene to 1-Butene under Acetogenic Conditions by Acetobacterium wieringae, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c05683 Journal information: Environmental Science & Technology Provided by Chinese Academy of Sciences
Environmental Science
Michael Regan, administrator of the Environmental Protection Agency (EPA), speaks during an event at the EPA headquarters in Washington, D.C., on Monday, Dec. 20, 2021. (Samuel Corum | Bloomberg | Getty Images) The Environmental Protection Agency on Saturday launched an office that will focus on supporting and delivering grant money to minority communities in the U.S. disproportionately affected by pollution and other environmental issues. The Office of Environmental Justice and External Civil Rights is made up of more than 200 EPA staff in 10 U.S. regions and will be led by a Senate-confirmed assistant administrator. The office will oversee the delivery of a $3 billion climate and environmental justice block grant program created by the recently passed Inflation Reduction Act, which includes $60 billion for environmental justice initiatives. EPA Administrator Michael Regan made the announcement on Saturday in Warren County, North Carolina, a predominantly Black community that protested the operation of a hazardous waste landfill four decades ago and consequently ignited the environmental justice movement. "With the launch of a new national program office, we are embedding environmental justice and civil rights into the DNA of EPA and ensuring that people who've struggled to have their concerns addressed see action to solve the problems they've been facing for generations," Regan said in a statement. Early in his presidency, Biden vowed that environmental justice would be a core component of his climate agenda and signed an executive order that launched the Justice40 Initiative, which requires federal agencies to deliver at least 40% of benefits from specific funding to disadvantaged communities overburdened by pollution. Research published in the journal Environmental Science and Technology Letters found that communities of color are systematically exposed to higher levels of air pollution than white communities due to a federal housing discrimination practice called redlining. Black Americans are also 75% more likely than white Americans to live near facilities that produce hazardous waste, according to the Clean Air Task Force, and are three times more likely to die from exposure to air pollutants. "For decades, communities of color and low-income communities have faced disproportionate impacts from environmental contamination," said Robert Bullard, a professor of urban planning and environmental policy at Texas Southern University. "EPA's efforts under this new office will deliver progress for the communities that need action now." The office will enforce civil rights laws and deliver new grants and technical assistance in affected communities. It will also work with other EPA offices to incorporate environmental justice into the agency's programs.
Environmental Science
Forget what the sleek ships of Star Trek would have you believe: it turns out humanity's most famous spacecraft is even dustier than the average home. The International Space Station (ISS) may not be a hunk of junk, but - 25 years after its initial launch - it's become chock-full of potentially harmful chemicals. In a first of its kind study, UK and US researchers teamed up to analyse dust samples from air filters on board and compared them to organic contaminants found in our Earthly homes. Among the recognisable materials were those used in building and window sealant, stain removers, furniture fabrics, and electronic equipment. Some are even classed as persistent organic pollutants under the Stockholm Convention, a global treaty aiming to eliminate their production and use due to their impact on human health and the environment. Researchers believe they may have found their way aboard via astronauts' cameras, music players, tablets, and clothing brought up from our home planet. High levels of radiation can speed up the ageing of materials, including the breakdown of goods into micro and nano plastics that can become airborne in the microgravity setting of the ISS. These particles settle across the station, and must be vacuumed to ensure the onboard air filters perform efficiently. While air inside the ISS is constantly recirculated, with eight to 10 changes per hour, the extent to which this removes these harmful chemicals is unknown. Some vacuum bags were returned to Earth for the study, with one shipped to the University of Birmingham. Professor Stuart Harrad said concentrations of organic contaminants in the ISS's dust "often exceeded" the average amount found in homes and other indoor settings across the US and Western Europe, though they were generally within the range found on Earth. Prof Harrad said the findings could guide the design of future space stations, with several hoping to launch by 2030. These include private ventures; a joint project by the US, European, Canadian, and Japanese space agencies; and Russia's own hub for when it leaves the ISS programme in 2024. "Our findings have implications for future space stations and habitats, where it may be possible to exclude many contaminant sources by careful material choices in the early stages of design and construction," Prof Harrad said. The research is published in the Environmental Science and Technology Letters journal.
Environmental Science
The expansion of the universe could be a mirage, a potentially controversial new study suggests. This rethinking of the cosmos also suggests solutions for the puzzles of dark energy and dark matter, which scientists believe account for around 95% of the universe's total energy and matter but remain shrouded in mystery. Scientists know the universe is expanding because of redshift, the stretching of light's wavelength towards the redder end of the spectrum as the object emitting it moves away from us. Distant galaxies have a higher redshift than those nearer to us, suggesting those galaxies are moving ever further from Earth. More recently, scientists have found evidence that the universe's expansion isn't fixed, but is actually accelerating faster and faster. This accelerating expansion is captured by a term known as the cosmological constant, or lambda. The cosmological constant has been a headache for cosmologists because predictions of its value made by particle physics differ from actual observations by 120 orders of magnitude. The cosmological constant has therefore been described as "the worst prediction in the history of physics." Cosmologists often try to resolve the discrepancy between the different values of lambda by proposing new particles or physical forces, but study author Lukas Lombriser, a theoretical physicist at the University of Geneva, tackles it by reconceptualizing what's already there. "In this work, we put on a new pair of glasses to look at the cosmos and its unsolved puzzles by performing a mathematical transformation of the physical laws that govern it," Lombriser told Live Science via email. In Lombriser's mathematical interpretation, the universe isn't expanding but is flat and static, as Einstein once believed. The effects we observe that point to expansion are instead explained by the evolution of the masses of particles — such as protons and electrons — over time. In this picture, these particles arise from a field that permeates space-time. The cosmological constant is set by the field's mass, and because this field fluctuates, the masses of the particles it gives birth to also fluctuate. The cosmological constant still varies with time, but in this model that variation is due to changing particle mass over time, not the expansion of the universe. In the model, these field fluctuations result in larger redshifts for distant galaxy clusters than traditional cosmological models predict. And so, the cosmological constant remains true to the model's predictions. "I was surprised that the cosmological constant problem simply seems to disappear in this new perspective on the cosmos," Lombriser said. A recipe for the dark universe Lombriser's new framework also tackles some of cosmology's other pressing problems, including the nature of dark matter. This invisible material outweighs ordinary matter by roughly 5 to 1, but remains mysterious because it doesn't interact with light. Lombriser suggested that fluctuations in the field could also behave like a so-called axion field, with axions being hypothetical particles that are one of the suggested candidates for dark matter. These fluctuations could also do away with dark energy, the hypothetical force stretching the fabric of space and thus driving galaxies apart faster and faster. In this model, the effect of dark energy, according to Lombriser, would be explained by particle masses taking a different evolutionary path at later times in the universe. In this picture "there is, in principle, no need for dark energy," Lombriser added.
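For reference, the redshift and expansion rate discussed above are conventionally defined as follows (standard textbook relations, not equations drawn from Lombriser's paper):

\[ z \equiv \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}}, \qquad v \simeq H_0\, d \ \ \text{(for relatively nearby galaxies)}, \]

where z is the redshift, the lambdas are the observed and emitted wavelengths, v is a galaxy's apparent recession velocity, d its distance, and H_0 the Hubble constant. In the standard picture the growth of z with distance is read as expansion; in Lombriser's reinterpretation the same measured redshifts are attributed to particle masses evolving over time.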
Luz Ángela García, a postdoctoral researcher at the Universidad ECCI in Bogotá, Colombia, was impressed with Lombriser's new interpretation and how many problems it resolves. "The paper is pretty interesting, and it provides an unusual outcome for multiple problems in cosmology," García, who was not involved in the research, told Live Science. "The theory provides an outlet for the current tensions in cosmology." However, García urged caution in assessing the paper's findings, saying it contains elements in its theoretical model that likely can't be tested observationally, at least in the near future. Editor's note: This article was corrected at 1:30 p.m. ET on June 20, to reflect that redshift is evidence of cosmic expansion, but not evidence of accelerated cosmic expansion. Robert Lea is a science journalist in the U.K. who specializes in science, space, physics, astronomy, astrophysics, cosmology, quantum mechanics and technology. Rob's articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.'s Open University.
Cosmology & The Universe
Nasa has released an image of far-flung galaxies as they were 13bn years ago, the first glimpse from the most powerful telescope ever launched into space that promises to reshape our understanding of the dawn of the universe itself. The small slice of the universe, called SMACS 0723, has been captured in sharp detail by the James Webb space telescope (JWST), showing the light from many different twinkling galaxies, among the oldest in the universe. Joe Biden, who unveiled the image at a White House event, called the moment “historic” and said it provides “a new window into the history of our universe”. “It’s hard to even fathom,” said the US president. “It’s astounding. It’s an historic moment for science and technology, for America and all of humanity.” Bill Nelson, administrator of Nasa, said the image showed the light of galaxies bending around other galaxies, traveling for billions of years before reaching the telescope. “We are looking back more than 13 billion years,” he said, adding that more images to be released by the space agency will reach back further, to around 13.5 billion years, close to the estimated start point of the universe itself. “We are going back almost to the beginning,” he said. The release of the image is a preview of a series of high-resolution color pictures from the James Webb space telescope that will be shown off by Nasa on Tuesday. They will include “the deepest image of our universe that has ever been taken” according to Nelson. Experts have said the telescope, three decades in the making and launched last year, could revolutionize our understanding of the cosmos by providing detailed infrared images of the universe, detailing galaxies as they appeared 13 billion years in the past. The $10bn telescope is able to peek inside the atmospheres of exoplanets and observe some of the oldest galaxies in the universe by using a system of lenses, filters, prisms to detect signals in the infrared spectrum, which is invisible to the human eye. The system has so far "performed flawlessly", according to Marcia Rieke, professor of astronomy at University of Arizona. “Webb can see backwards in time just after the big bang by looking for galaxies that are so far away, the light has taken many billions of years to get from those galaxies to ourselves,” said Jonathan Gardner, deputy senior project scientist at Nasa, during a recent news conference. "Webb is bigger than Hubble so that it can see fainter galaxies that are further away." The telescope, which is a joint endeavor with the European Space Agency, has been in development since the mid-1990s and was finally propelled into space in December. It is described as the most powerful telescope ever to be sent into space and is currently around one million miles from Earth, performing its task of scanning ancient galaxies. The initial goal of the project was to see the first stars and galaxies formed following the big bang, watching “the universe turn the lights on for the first time”, as Eric Smith, Webb program scientist and Nasa scientist, put it. The telescope should be considered "one of humanity’s great engineering achievements," said Kamala Harris, the US vice-president. “The whole observatory is performing stunningly well,” said Gillian Wright, director of the UK Astronomy Technology Centre in Edinburgh, also principal investigator for the mid infrared (MIRI) instrument on JWST. “It’s hard to take in how fantastic it has turned out to be.
It is utterly amazing.” Nasa said Webb has five initial cosmic targets for observation, including the Carina nebula, a sort of celestial nursery where stars form. The nebula is around 7,600 light years away and is home to many enormous stars, several times larger than the sun. Other areas of focus include WASP-96 b, a giant planet outside our solar system that is made mainly of gas; the southern ring nebula, an expanding cloud of gas surrounding a dying star that’s 2,000 light years from Earth; and Stephan’s quintet, notable for being the first compact galaxy group discovered in 1877. Images from these targets will be unveiled by Nasa on Tuesday. “It’s exhilarating to see the fantastic James Webb Space Telescope image released today,” said Richard Ellis, professor of astrophysics at University College London who was part of the committee that first conceived the telescope. “As we are ourselves made of the material synthesized in stars over the past 13 billion years, JWST has the unique ability to trace back to our own origins in this remarkable universe. Everyone can take part in this amazing adventure.”
Cosmology & The Universe
The James Webb Space Telescope’s first peek at the distant universe unveiled galaxies that appear too big to exist. Six galaxies that formed in the universe’s first 700 million years seem to be up to 100 times more massive than standard cosmological theories predict, astronomer Ivo Labbé and colleagues report February 22 in Nature. “Adding up the stars in those galaxies, it would exceed the total amount of mass available in the universe at that time,” says Labbé, of the Swinburne University of Technology in Melbourne, Australia. “So you know that something is afoot.” The telescope, also called JWST, released its first view of the early cosmos in July 2022 (SN: 7/11/22). Within days, Labbé and his colleagues had spotted about a dozen objects that looked particularly bright and red, a sign that they could be massive and far away. “They stand out immediately, you see them as soon as you look at these images,” says astrophysicist Erica Nelson of the University of Colorado Boulder. Measuring the amount of light each object emits in various wavelengths can give astronomers an idea of how far away each galaxy is, and how many stars it must have to emit all that light. Six of the objects that Nelson, Labbé and colleagues identified look like their light comes from no later than about 700 million years after the Big Bang. Those galaxies appear to hold up to 10 billion times the mass of our sun in stars. One of them might contain the mass of 100 billion suns. “You shouldn’t have had time to make things that have as many stars as the Milky Way that fast,” Nelson says. Our galaxy contains about 60 billion suns’ worth of stars — and it’s had more than 13 billion years to grow them. “It’s just crazy that these things seem to exist.” In the standard theories of cosmology, matter in the universe clumped together slowly, with small structures gradually merging to form larger ones. “If there are all these massive galaxies at early times, that’s just not happening,” Nelson says. One possible explanation is that there’s another, unknown way to form galaxies, Labbé says. “It seems like there’s a channel that’s a fast track, and the fast track creates monsters.” But it could also be that some of these galaxies host supermassive black holes in their cores, says astronomer Emma Curtis-Lake of the University of Hertfordshire in England, who was not part of the new study. What looks like starlight could instead be light from the gas and dust those black holes are devouring. JWST has already seen a candidate for an active supermassive black hole even earlier in the universe’s history than these galaxies are, she says, so it’s not impossible. Finding a lot of supermassive black holes at such an early era would also be challenging to explain (SN: 3/16/18). But it wouldn’t require rewriting the standard model of cosmology the way extra-massive galaxies would. “The formation and growth of black holes at these early times is really not well understood,” she says.
“There’s not a tension with cosmology there, just new physics to be understood of how they can form and grow, and we just never had the data before.” To know for sure what these distant objects are, Curtis-Lake says, astronomers need to confirm the galaxies’ distances and masses using spectra, more precise measurements of the galaxies’ light across many wavelengths (SN: 12/16/22). JWST has taken spectra for a few of these galaxies already, and more should be coming, Labbé says. “With luck, a year from now, we’ll know a lot more.”
Cosmology & The Universe
Back in the mid-1990s, cosmologists—who study the origin, composition and structure of the universe—were beginning to worry that they were facing a crisis. For starters, two astronomers had observed that a huge swath of the cosmos, a billion light-years or so across, was moving in a direction inconsistent with the general expansion of the universe. Worse, astrophysicists using the Hubble Space Telescope, then relatively new, had determined that the cosmos was between eight billion and 12 billion years old. The problem: even the high end of that range couldn’t account for stars known to be closer to 14 billion years old, leading to the nonsensical implication that the stars existed before the universe did. “If you ask me,” astrophysicist Michael Turner told Time magazine at the time, “either we’re close to a breakthrough or we’re at our wits’ end.” But the first observation was never confirmed. And the impossibly old stars were explained a few years later with the discovery that a mysterious, and still unknown, dark energy had turbocharged the expansion of the universe, making it look younger than it actually is. Now, however, cosmologists are facing a brand-new problem—or rather a couple of problems. The Hubble constant (named, as the telescope is, for Edwin Hubble, who discovered the expansion of the universe in the 1920s) is the number that shows how fast the cosmos is expanding; it’s been measured with greater and greater accuracy over the past few decades. Yet there’s still some uncertainty because two independent methods of calculating it have come up with different answers, giving rise to what’s called the “Hubble tension.” Although the numbers aren’t dramatically different, they’re enough at odds to worry theorists. “In particle physics,” said David Gross of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, at a conference in 2019, “we wouldn’t call it a tension or a problem but rather a crisis.” Another issue is that the tendency of matter to clump together in the early universe is inconsistent with how it clumps together today. Known as the sigma-eight, or S8, tension, it is like a “little brother or sister of the Hubble tension.... So [it is] less significant but worth keeping an eye on,” says Adam Riess of the Space Telescope Science Institute, who shared the 2011 Nobel Prize in Physics for his co-discovery of dark energy. Both problems could signal that scientists are misunderstanding something big about physics, and a recent paper in the journal Physical Review Letters adds to the suspicion that this might be the case—for the S8 tension, at least. In the so-called standard model of cosmology, the universe started off almost but not quite uniformly dense. We know that because the oldest light we can see, known as the cosmic microwave background, shows only tiny variations in temperature from one point on the sky to the next, reflecting variations in the density of energy and matter in the cosmos. As the universe expanded, gravity, as described by Einstein’s general theory of relativity, amplified those variations to create the huge variations we see today in the form of clusters and superclusters of galaxies. That process is somewhat suppressed, however, by dark energy—the still mysterious force causing the expansion of the universe to accelerate rather than slow down—which pushes matter apart before the density variations can get even greater.
In the new paper, scientists argue that this suppression of clustering is too large to explain with the standard model. Not only that, says Robert Caldwell, a cosmologist at Dartmouth College, who did not participate in the new study, “it seems like the timing of whatever’s causing the acceleration is not in synchrony with the effect on the clumpiness,” he explains. That is to say, the suppression of the growth of the so-called large-scale structure of the universe—the web of galaxies, clusters and other cosmic structures that are bound by gravity—begins to kick in later than you’d expect to see from dark energy alone. This observation suggests that some theory of gravity other than general relativity might conceivably be at play, the authors argue. “It’s a thought-provoking analysis,” says Benjamin Wandelt of the Lagrange Institute in France, who also wasn’t involved in the study. “Exciting if true—but changing general relativity is a high price to pay.” So is it true? The answer so far is that nobody knows for sure. “It’s an interesting paper,” says David Weinberg, chair of the astronomy department at the Ohio State University, who wasn’t involved in the study, “but I wouldn’t say it’s a big deal on its own.” The investigation does, however, “fit into a larger set of papers that are maybe finding a discrepancy between the level of matter clustering in the present-day universe, compared to what we would predict based on what we observe in the cosmic microwave background,” he says. These discrepancies would be small enough to make theorists wary that they might not be significant at all, except that they all tend to point in the same direction, with modern-day density variations below what you’d expect, based on the standard model. “If they’re real,” Weinberg says, “the implications are very profound because you would probably have to modify the theory of gravity on cosmological scales in order to explain it.” And, he adds, “that’s not easy to do.” (To be clear, this kind of change would be different from “modified Newtonian dynamics,” or MOND, a theory of modified gravity proposed to explain away dark matter. Here, too, the idea of tinkering with general relativity has been tough for astrophysicists to entertain.) What might be different in this case is that the authors—Nhat-Minh Nguyen, Dragan Huterer and Yuewei Wen, all at the University of Michigan—didn’t set out to solve the problem of the S8 tension. They were interested in whether the history of the universe’s expansion was consistent with the history of structure growth. “We expected,” says Nguyen, lead author of the paper, “that they would, in fact, be consistent.” When the researchers found this wasn’t the case, he adds, they went back and rechecked their analysis to make sure they weren’t missing something. “But we found that we weren’t,” Nguyen says. The inconsistency, it turned out, might be explained by some additional force layered on top of gravity and dark energy—a force that would add to the tendency of dark energy to tamp down structure formation. Or it could suggest that dark energy itself became stronger at some point, Caldwell says. “That’s what excited me about the paper,” he adds. Caldwell doesn’t consider the paper definitive, though. Jo Dunkley, a physicist at Princeton University, who also wasn’t involved with the work, agrees. “This is interesting,” she says, “but to me, it is too soon to say that this shows significant evidence of a problem” with the standard model of cosmology. 
And a few scientists, including David Spergel, former chair of astrophysics at Princeton and now president of the Simons Foundation, think the argument isn’t very convincing. “[The authors] ignore recent measurements that are consistent with standard theory,” says Spergel, who wasn’t part of the study. “And as this paper argues, analyses of large-scale structure at [nearby distances] are probably underestimating the important role that galaxy winds play in driving gas out of galaxies. I’m not sure I would have published this paper.” On Spergel’s first point, Nguyen agrees that he and his colleagues need to do more research. “We’re looking into more datasets from new, presumably independent experiments of the same observables,” he says. But Nguyen also points out that in the “recent measurements” that Spergel cites, the latter’s team actually references Nguyen and his colleagues’ latest work and the idea of tweaking general relativity as a possible solution to the S8 tension. And, Nguyen argues, “the community is still divided over the role of [winds] in reconciling S8.” In short, everyone, including Nguyen and his co-authors, agrees that their results are not definitive. “It’s useful to play these exercises,” says Nico Hamaus of the Ludwig Maximilian University of Munich in Germany. “That’s exactly how you find loopholes in the models, and if we can really substantiate such things, that really means there’s something going on that we don’t understand.” But even if definitive confirmation comes, the Hubble tension remains, and almost everyone agrees that problem is a much bigger deal. And “tensions” aren’t even the only things that keep cosmologists up at night. In a recent op-ed in the New York Times entitled “The Story of Our Universe May Be Starting to Unravel,” astrophysicist Adam Frank of the University of Rochester and Marcelo Gleiser of Dartmouth College cite the thorniest issues facing cosmology. They focus primarily on the Hubble tension (but, interestingly, not the S8 tension) and also point to discoveries by the James Webb Space Telescope of surprisingly large galaxies that formed surprisingly soon after the big bang. “We may be at a point,” they write, “where we need a radical departure from the standard model, one that may even require us to change how we think of the elemental components of the universe, possibly even the nature of space and time.” In other words, stay tuned.
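For readers who want the numbers behind the Hubble tension, the two headline measurements are roughly as follows (values quoted from the published Planck and SH0ES analyses, not from this article, and rounded):

\[ H_0^{\mathrm{CMB}} \approx 67.4 \pm 0.5 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \qquad H_0^{\mathrm{local}} \approx 73.0 \pm 1.0 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \]

where the first value is inferred from the cosmic microwave background assuming the standard model and the second comes from Cepheid-calibrated supernovae in the nearby universe. The gap is only about 8 percent, but because each measurement is now so precise it amounts to roughly a five-sigma disagreement, which is why physicists like Gross reach for the word "crisis" rather than "tension."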
Cosmology & The Universe
If you've been following the astronomy community on Twitter or, perhaps, Captain America himself, you've likely come across a story about the James Webb Space Telescope's latest find: The "oldest galaxy we've ever seen." This is exactly what we were promised from the James Webb Space Telescope. Only a week ago, the world's collective jaw hit the floor when the first stunning images were revealed. Now, the telescope is getting a proper start on its myriad science programs, but researchers have already had access to a ton of data collected during JWST's commissioning phase and released early to researchers across the globe. That's how we ended up finding "the oldest galaxy" so quickly. Scientists pored through a particular dataset looking for far-off galaxies and found a candidate they've dubbed GL-z13, a call-back to the current confirmed record holder, GNz11. There's more work to be done to confirm GL-z13 is actually the new record holder -- some of which will require more time pointing Webb at the galaxy -- but even so, several publications have already crowned this galaxy the universal champion. So how did we get here? And is this "the oldest galaxy" ever seen? Over the last 24 hours, two different research groups uploaded papers (one here, the other here) to arXiv detailing their search for very distant galaxies in the James Webb data. The website "arXiv" (I pronounce it ARK-SIV because I am a heathen, but others assure me it's pronounced "archive") is a pre-print repository, a place for scientists to drop studies so they can be quickly disseminated to peers. It's a great place to quickly get new research out into the world, particularly for astronomy and astrophysics, with the caveat being the findings have not typically been peer-reviewed -- an important checkpoint for validating the study and its methods. I don't want to poop the party for GL-z13, but I do want to exercise just a teensy bit of caution. In communicating findings with such certainty, there's potential for readers to lose trust in scientists if it turns out GL-z13 is something else entirely. Several astronomers I spoke with believe the data is quite compelling and the galaxy likely does reside a long (loooong) way away, but until there's confirmation, GL-z13 can't take the title of "oldest galaxy." And to some, even that title itself is a bit misleading. You see, GL-z13 isn't really "the oldest galaxy ever" -- it comes from a time when the universe was barely 330 million years old. That means, if confirmed, this is probably the youngest galaxy ever seen, according to Nick Seymour, an astrophysicist at Curtin University in Western Australia. "At 330 million years after the Big Bang, it can't be more than 100 million years old at best," Seymour said. "Hence, this really is a baby galaxy at the dawn of time." Getting excited about record-breaking space feats is a given. As a science journalist, I do this practically every day. But in reporting on new discoveries, it's important to convey uncertainty. In headlines, in social posts, in the way we discuss scientific progress. Uncertainty is key -- and should be celebrated in the sciences. The tale of GL-z13 is a wonderful one and it's only just beginning.
Astronomers now have to study it a whole lot more to make sure the distances are correct. "There's obviously a lot of follow-up work to do, but it really is sort of a glimpse of where things are going with James Webb," said Michael Brown, an astrophysicist at Monash University. It was only in April this year, before Webb was scouring the cosmos, that astronomers announced they may have discovered the most distant galaxy yet, HD1. That galaxy is believed to be from a time when the universe was about 330 million years old. Brown noted at the time it was worth being cautious about handing over the title to HD1 because the data might point to a galaxy billions of light-years closer to Earth. To confirm its distance, just like with GL-z13, we need more observations. Know what telescope might be able to do that? You guessed it, JWST. We're fascinated with records being broken but perhaps the most interesting point from all of this is that if Webb works as well as expected (and it seems to work better than even scientists dreamed), the title for "oldest galaxy" will change hands as much as WWE's 24/7 Hardcore Championship. We'll be finding new galaxies from even further back in time at a pace we couldn't dream of. If that's the case, I expect it won't be too long before the record tumbles.
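For context on where the "330 million years" figure comes from, here is a minimal sketch of the redshift-to-age conversion using astropy's bundled Planck 2018 cosmology. The candidate redshift of roughly 13 is an illustrative assumption for this sketch; the published analyses report photometric estimates with sizable uncertainties and may adopt slightly different cosmological parameters.

```python
# Convert a (photometric) redshift into a cosmic age, the calculation behind
# statements like "when the universe was barely 330 million years old".
from astropy.cosmology import Planck18

z_candidate = 13.0  # illustrative redshift for the GL-z13 candidate, not a confirmed value

age_at_z = Planck18.age(z_candidate)            # age of the universe at that redshift
lookback = Planck18.lookback_time(z_candidate)  # how long the light has traveled to reach us

print(f"Age of universe at z={z_candidate}: {age_at_z.to('Myr'):.0f}")  # roughly 330 Myr
print(f"Lookback time: {lookback:.2f}")                                 # roughly 13.5 Gyr
```

A spectroscopic follow-up, the kind of confirmation the piece calls for, can shift the measured redshift and therefore both of these numbers substantially.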
Cosmology & The Universe
A first-of-its-kind experiment simulating the cosmos with ultracold potassium atoms suggests that in a curved, expanding universe pairs of particles pop up out of empty space. An analogue of a tiny, expanding universe has been created out of extremely cold potassium atoms. It could be used to help us understand cosmic phenomena that are exceedingly difficult to directly detect, such as pairs of particles that may be created out of empty space as the universe expands. Markus Oberthaler at Heidelberg University in Germany and his colleagues cooled more than 20,000 potassium atoms in a vacuum, using lasers to slow them down and lower their temperature to about 60 nanokelvin, or 60 billionths of a degree kelvin above absolute zero. At this temperature, the atoms formed a cloud about the width of a human hair and, instead of freezing, they became a quantum, fluid-like phase of matter called a Bose-Einstein condensate. Atoms in this phase can be controlled by shining light on them – using a tiny projector, the researchers precisely set the atoms’ density, arrangement in space and the forces they exert on each other. By changing these properties, the team made the atoms follow an equation called a space-time metric, which, in an actual, full-scale universe, determines how curved it is, how fast light travels and how much light must bend near very massive objects. This is the first experiment that has used cold atoms to simulate a curved and expanding universe, says Oberthaler. When the researchers used their projector to make atoms mimic an expanding universe, the atoms moved in exactly the kind of ripple pattern that would be expected if pairs of particles were popping into existence – a phenomenon called particle pair production. The researchers say this suggests that the particle pairs can be produced in an expanding universe, like our own. Alessio Celi at the Autonomous University of Barcelona in Spain says that the new experiment is a very precise playground for putting together quantum effects and gravity. Physicists don’t quite know how the two combine in the universe we live in, but experiments with ultracold atoms may let them try out some ideas, and they could inspire new targets for observations in our much larger and more complex cosmos, he says. Stefan Floerchinger at the University of Jena in Germany, who was part of the research team, says that future experiments with the same system could lead to a better understanding of quantum properties of our universe. Journal reference: Nature, DOI: 10.1038/s41586-022-05313-9
Cosmology & The Universe
The print that artist Erika Blumenfeld shows me is an expanse of deep blue, a rich color that speaks of romance and night. It’s stippled with gold marks, some as lines, some as arrows, some as dots. Her art is formed by ink on paper—but it’s rooted in century-old artifacts, inspired by unsung astronomy pioneers, and animated by a quest to understand light. At the Harvard College Observatory in Cambridge, Massachusetts, three floors of metal cabinets house more than 550,000 glass plates, most of them eight by 10 inches, a photographic negative format dating from the mid-19th century. These plates recorded astronomical data from telescopes trained on celestial regions and objects. One side bears the print of light from distant stars; the other side had been marked with equations, arrows, circles, letters, and other notations by women who were hired to interpret the data. From 1885 until the 1950s, hundreds of so-called women computers studied the plates. They discovered how variations in brightness of specific stars revealed their energy output, a relationship that provided a way to measure great distances. They examined a star’s light spectrum and determined that the intensities of the star’s colors indicated its chemical composition. They counted and cataloged galaxies. With such discoveries, these women laid the foundation for modern astrophysics. They left marks of many kinds: on some of the plates, only a few arrows or characters; on others, notes from conversations between women across decades, each striving to better understand the universe. Then the marks were removed from roughly 470,000 plates. So that the world’s researchers could access the plates’ historic trove of astronomical data, the collection needed to be digitized. In the early 2000s, Harvard astrophysicist Jonathan Grindlay began a project that’s now nearly complete: Digital Access to a Sky Century @ Harvard, or DASCH, an archive of digital scans from the bulk of the collection. According to Grindlay, getting the clearest image of a glass plate’s astronomical data requires eliminating all marks on the other side of the plate before scanning. This is the process: Each plate is placed on a table with its nonastronomical-data side—that is, the area where the women computers had written their observations, measurements, and notes—facing up. After an overhead camera photographs that side, the plate is moved to an area where all marks are erased by scrubbing with an ethanol-water mixture and, if necessary, scraping with a razor blade. By the time Blumenfeld heard about this process in 2019, more than 400,000 plates had been scanned. “When I learned that they were actually wiping the plates clean of the marks, I was deeply saddened,” she tells me. She set out to preserve the beauty and meaning of the marks, if only on a handful of plates. In honor of the women whose work inspired her own, Blumenfeld calls her art “Tracing Luminaries.” Blumenfeld’s father says the first word she spoke—standing in her crib, pointing at the fixture overhead—was “light.” Her fascination with light “was there somehow from the beginning,” she says. That affinity led her to take up photography in high school in the late 1980s and later earn a degree in it from Parsons School of Design. In the years between, Blumenfeld created art inspired by light and ways to capture it. But she draws a distinction: Unlike some artists and photographers, she’s not interested in capturing the way light reflects off landscapes or people.
She aims to capture the light itself. Starting in the late 1990s, Blumenfeld began building novel lensless cameras uniquely geared to collecting celestial light, from the faint to the squintingly bright. As she made art from the beams of lunar phases, sun cycles, and solstices, Blumenfeld launched what would become continuing engagements with scientific researchers and data. “I’m always looking for connections,” Blumenfeld says, like the shared traits that she believes connect scientists and artists: an inquisitive nature, strong powers of observation, a gift for thinking deeply about the natural world. She sees those attributes clearly in the words and drawings on the glass plates. “The marks are the material evidence of the women’s passion for and devotion to their research,” she says, and to “the stars themselves.” Art that would preserve the women computers’ contributions—that’s what Blumenfeld wanted to create. By 2019 she had devised a plan to transfer the marks themselves—the ink laid by the women’s hands—from the plates onto another material. Harvard gave Blumenfeld permission to try that approach with 50 select plates, starting in mid-March 2020. But before the artist could begin, the COVID-19 pandemic forced Harvard to close its facilities to visitors. The DASCH project’s in-house work continued: photographing a plate’s hand-inscribed side, wiping it clean, then scanning its astronomical-data side. As the pandemic stretched on, plates that Blumenfeld had hoped to use in her art slipped out of reach. Some of the plates with distinctive markings or historical value were permanently secured in a special archive (named the Williamina Fleming Collection, after one of Harvard’s groundbreaking women computers and astronomers). Other plates that Blumenfeld had hoped to protect were run through the scanning process, all of their marks removed. Without access to the physical plates, Blumenfeld resorted to working virtually. She spent weeks looking through thousands of plate photographs in the DASCH digital-image portal. Eventually, she chose images of six plates: observations made from 1892 through 1923, including views of both the Small and Large Magellanic Clouds, the Taurus and Pegasus constellations, and Jupiter with its eighth moon. Blumenfeld shared the digital images of the plates with printmaking collaborators at the design and visual arts school of Washington University in St. Louis. The team mapped a creation process from this first step: “We basically did the inverse of what DASCH did,” Blumenfeld says. “They wiped the marks; I wiped the stars,” so the marks could stand alone. Based on those marks, each piece took shape through a combination of historical art techniques, new technology, and materials that struck Blumenfeld as the stuff of stars. The result: “Tracing Luminaries,” a portfolio of six gold leaf prints. It tells the story of women who studied light to understand the universe, who saw the stars in a way others of their time did not; of a love language to those stars across generations; and an effort to honor that language and those women. From the beginning, Blumenfeld tells me, "my whole idea was to return their marks to the stars somehow." She may have, in her way: by bringing those who left the marks—and their discoveries and achievements—out from the shadows into the light. Liz Kruesi is a science journalist focused on cosmology and astronomy.
Her last essay for the magazine was about the Fermi Gamma-ray Space Telescope. This story appears in the August 2022 issue of National Geographic magazine.
Cosmology & The Universe
Three scientists have won $100,000 for their work on new ways to study the large-scale structure of the universe — the enormous tendrils of criss-crossing matter which hide evidence of our universe's fundamental forces. Mikhail Ivanov, of MIT, Oliver Philcox, of Columbia University and the Simons Foundation, and Marko Simonović, of the University of Florence, won the New Horizons Prize in Physics "for contributions to our understanding of the large-scale structure of the universe and the development of new tools to extract fundamental physics from galaxy surveys." The New Horizons award is given each year to early career researchers by the Breakthrough Prize Foundation, and the prize money is donated by tech billionaires Sergey Brin, Priscilla Chan and Mark Zuckerberg, Yuri and Julia Milner, and Anne Wojcicki. A second prize was also awarded this year to Alexandru Lupsasca, of Vanderbilt University, and Michael Johnson, of Harvard University, for their work chasing mysterious black hole photon spheres. Inside the cosmological collider. According to the standard model of cosmology, the universe began taking shape after the Big Bang, when the young cosmos swarmed with particles of both matter and antimatter, which popped into existence only to annihilate each other upon contact. Most of the universe's building blocks wiped themselves out this way. If they had done so completely, no galaxies, stars, or planets would have formed. Yet the universe was saved by tiny perturbations in the rapidly expanding fabric of space-time, which enabled some pockets of the plasma to survive. As the roiling particle-antiparticle broth of the young cosmos expanded, its molten filaments moved outwards to form an interconnected soap-sud structure of thin films surrounding countless, mostly empty voids. Today, the universe exists as a map of those earliest particle interactions, which are frozen in time along strands and structures of an enormous cosmic web (today the birthing grounds of galaxies such as our own). This web's form hints at the mysterious, primordial forces that shaped it. "If you imagine taking the Large Hadron Collider at CERN and scaling it up by a factor of a trillion or a trillion trillions, this is the sort of particle collider that you actually have operating in the early Universe," Oliver Philcox told Live Science. "And anything weird that happens, it's going to affect the distribution of matter." Detecting where matter was just after the Big Bang can reveal early particle interactions that occurred during the inflation that followed, a moment when the universe expanded exponentially fast for a mere fraction of a second. If we view the galaxies as the petrified remains of these earliest moments, we can search for hints of particle physics in the super early universe, Philcox said. "So it is sometimes called the 'cosmological collider' — like a particle collider on the scale of the whole universe," Philcox added. Until recently, owing to both theoretical as well as experimental limitations, physicists studying how our universe evolved mainly focused on the Cosmic Microwave Background (CMB) — the leftover radiation from the Big Bang that exists as a 2D image burned into every corner of the sky. This can be described by a simple theory that includes only linear terms, called cosmological perturbation theory.
However, a growing ability to map the universe's cosmic web and a desire to understand mysterious phenomena such as dark matter and dark energy (neither of which are explained by current cosmology) has driven physicists to look at the large scale structures of the web directly. Dot-mapping a cosmic hurricane. Yet astronomical cartography on structures this enormous is hard. Galaxies are produced by complicated astrophysical processes sculpted by the universe's expansion and the collapse of its matter. For instance, when large structures get close to each other, non-linear effects such as virialization (when gravitational objects spiral into a stable orbit) take hold. When they are far away, relativistic effects from the expansion of the universe warp space-time, also disrupting linear equations. "A good analogy could be water waves. If our universe is an ocean, the CMB fluctuations are tiny ripples on its surface. A galaxy then would be a tsunami, or a hurricane," Mikhail Ivanov told Live Science. "Water ripples can be easily described within basic fluid dynamics developed centuries ago. This is, in essence, cosmological perturbation theory. A hurricane is impossible to describe with pen and paper, we can run some expensive computer simulations for it, but they are highly uncertain." To skirt these mathematical headwinds, the researchers have been contributing to a theory called effective field theory (EFT) for large scale structures, as well as building several statistical tools that will help them analyze how galaxies interact. As linear equations to describe the early universe break down at both ends of the cosmic scale, EFT smooths out the picture by simplifying galaxies as dots, and viewing their positions in the cosmos at just the right distance for our two best descriptions of gravity (Newtonian mechanics and general relativity) to be applicable with only minor adjustments. Theorists working on EFT have compared this to viewing a Pointillist painting: set the order of magnitude we view the universe at and we see it clearly — not too close for its small-scale chaos, nor too far for relativistic warping. This has given physicists a powerful new tool with which to view the cosmos, enabling them to make testable predictions about its very earliest beginnings. "These new ideas can generate new science cases for future galaxy surveys," Marko Simonović told Live Science. "As the new data start arriving in the coming years, it will certainly be very exciting to see what we can learn about our universe beyond what we already know and what surprises are waiting for us along the way."
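For orientation on what "only linear terms" means here, the textbook starting point of cosmological perturbation theory is the equation governing the growth of a small density contrast in an expanding universe (this is the generic linearized form, quoted as background rather than taken from the prize-winning papers themselves):

\[ \ddot{\delta} + 2H\dot{\delta} - 4\pi G \bar{\rho}\,\delta = 0, \qquad \delta \equiv \frac{\rho - \bar{\rho}}{\bar{\rho}} \ll 1 . \]

The approximation holds only while the density contrast stays small; once \(\delta\) approaches unity, as it does around galaxies and clusters, non-linear effects such as virialization take over, which is the regime the effective-field-theory corrections described in the article are designed to handle.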
Cosmology & The Universe
New map of the universe’s cosmic growth supports Einstein’s theory of gravity. For millennia, humans have been fascinated by the mysteries of the cosmos. Unlike ancient philosophers imagining the universe’s origins, modern cosmologists use quantitative tools to gain insights into its evolution and structure. Modern cosmology dates back to the early 20th century, with the development of Albert Einstein’s theory of general relativity. Now, researchers from the Atacama Cosmology Telescope (ACT) collaboration have submitted a set of papers to The Astrophysical Journal featuring a groundbreaking new map of dark matter distributed across a quarter of the sky, extending deep into the cosmos, that confirms Einstein’s theory of how massive structures grow and bend light over the 14-billion-year life span of the universe. The new map uses light from the cosmic microwave background (CMB) essentially as a backlight to silhouette all the matter between us and the Big Bang. “It’s a bit like silhouetting, but instead of just having black in the silhouette, you have texture and lumps of dark matter, as if the light were streaming through a fabric curtain that had lots of knots and bumps in it,” said Suzanne Staggs, director of ACT and Henry DeWolf Smyth Professor of Physics at Princeton University. “The famous blue and yellow CMB image [from 2003] is a snapshot of what the universe was like in a single epoch, about 13 billion years ago, and now this is giving us the information about all the epochs since.” “It’s a thrill to be able to see the invisible, to uncover this scaffold of dark matter that holds our visible star-filled galaxies,” said Jo Dunkley, a professor of physics and astrophysical sciences, who leads the analysis for ACT. “In this new image, we can see directly the invisible cosmic web of dark matter that surrounds and connects galaxies.” “Usually, astronomers can only measure light, so we see how galaxies are distributed across the universe; these observations reveal the distribution of mass, so primarily show how the dark matter is distributed through our universe,” said David Spergel, Princeton’s Charles A. Young Professor of Astronomy on the Class of 1897 Foundation, Emeritus, and the president of the Simons Foundation. “We have mapped the invisible dark matter distribution across the sky, and it is just as our theories predict,” said co-author Blake Sherwin, a 2013 Ph.D. alumnus of Princeton and a professor of cosmology at the University of Cambridge, where he leads a large group of ACT researchers. “This is stunning evidence that we understand the story of how structure in our universe formed over billions of years, from just after the Big Bang to today.” He added: “Remarkably, 80% of the mass in the universe is invisible. By mapping the dark matter distribution across the sky to the largest distances, our ACT lensing measurements allow us to clearly see this invisible world.” “When we proposed this experiment in 2003, we had no idea the full extent of information that could be extracted from our telescope,” said Mark Devlin, the Reese Flower Professor of Astronomy at the University of Pennsylvania and the deputy director of ACT, who was a Princeton postdoc from 1994-1995. “We owe this to the cleverness of the theorists, the many people who built new instruments to make our telescope more sensitive, and the new analysis techniques our team came up with.” This includes a sophisticated new model of ACT's instrument noise by Princeton graduate student Zach Atkins.
Despite making up most of the matter in the universe, dark matter has been hard to detect because it doesn’t interact with light or other forms of electromagnetic radiation. As far as we know, dark matter only interacts with gravity. To track it down, the more than 160 collaborators who have built and gathered data from the National Science Foundation’s Atacama Cosmology Telescope in the high Chilean Andes observed light emanating from just after the dawn of the universe’s formation, the Big Bang — when the universe was only 380,000 years old. Cosmologists often refer to this diffuse CMB light that fills our entire universe as the “baby picture of the universe.” The team tracked how the gravitational pull of massive dark matter structures can warp the CMB on its 14-billion-year journey to us, just as antique, lumpy windows bend and distort what we can see through them. “We’ve made a new mass map using distortions of light left over from the Big Bang,” said Mathew Madhavacheril, a 2016-2018 Princeton postdoc who is the lead author of one of the papers and an assistant professor in physics and astronomy at the University of Pennsylvania. “Remarkably, it provides measurements that show that both the ‘lumpiness’ of the universe, and the rate at which it is growing after 14 billion years of evolution, are just what you’d expect from our standard model of cosmology based on Einstein’s theory of gravity.” Sherwin added, “Our results also provide new insights into an ongoing debate some have called ‘The Crisis in Cosmology.’” This “crisis” stems from recent measurements that use a different background light, one emitted from stars in galaxies rather than the CMB. These have produced results that suggest the dark matter was not lumpy enough under the standard model of cosmology and led to concerns that the model may be broken. However, the ACT team’s latest results precisely measured the vast lumps seen in this image and found they are exactly the right size. “While earlier studies pointed to cracks in the standard cosmological model, our findings provide new reassurance that our fundamental theory of the universe holds true,” said Frank Qu, lead author of one of the papers and a Cambridge graduate student as well as a former Princeton visiting researcher. “The CMB is famous already for its unparalleled measurements of the primordial state of the universe, so these lensing maps, describing its subsequent evolution, are almost an embarrassment of riches,” said Staggs, whose team built the detectors that gathered this data over the past five years. “We now have a second, very primordial map of the universe. Instead of a ‘crisis,’ I think we have an extraordinary opportunity to use these different data sets together. Our map includes all of the dark matter, going back to the Big Bang, and the other maps are looking back about 9 billion years, giving us a layer that is much closer to us. We can compare the two to learn about the growth of structures in the universe. I think it is going to turn out to be really interesting. That the two approaches are getting different measurements is fascinating.” ACT, which operated for 15 years, was decommissioned in September 2022. Nevertheless, more papers presenting results from the final set of observations are expected to be submitted soon, and the Simons Observatory will conduct future observations at the same site, with a new telescope slated to begin operations in 2024. This new instrument will be capable of mapping the sky almost 10 times faster than ACT.
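For readers wondering how "lumpiness" is quantified in comparisons like the one above: analyses of this kind usually report the parameter S8, which combines the amplitude of matter fluctuations with the mean matter density. The definition below is the standard one used across lensing and galaxy surveys, quoted as general background rather than from the ACT papers themselves:

\[ S_8 \equiv \sigma_8 \sqrt{\Omega_m / 0.3} , \]

where \(\sigma_8\) is the typical strength of matter fluctuations on a scale of about 8 megaparsecs (divided by the dimensionless Hubble parameter) and \(\Omega_m\) is the fraction of the universe's energy density in matter. Galaxy-lensing surveys have tended to report slightly lower values of this parameter than CMB-based predictions, which is the tension the article refers to as the "crisis."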
Of the co-authors on the ACT team’s series of papers, 56 are or have been Princeton researchers. More than 20 scientists who were junior researchers on ACT while at Princeton are now faculty or staff scientists themselves. Lyman Page, Princeton’s James S. McDonnell Distinguished University Professor in Physics, is a former principal investigator of ACT. The pre-print articles highlighted in this release are available on act.princeton.edu and will appear on the open-access arXiv.org. They have been submitted to The Astrophysical Journal. This work was supported by the U.S. National Science Foundation (AST-0408698, AST-0965625 and AST-1440226 for the ACT project, as well as awards PHY-0355328, PHY-0855887 and PHY-1214379), Princeton University, the University of Pennsylvania, and a Canada Foundation for Innovation award. Team members at the University of Cambridge were supported by the European Research Council. Nathi Magubane from the University of Pennsylvania contributed to this story.
Cosmology & The Universe
Published July 12, 2022, 6:12 PM. North Texans eager to learn more about space with new James Webb Telescope images. The Hubble Telescope was highly successful, but it is aging. Its replacement is providing the deepest view of the cosmos we've ever seen. FORT WORTH, Texas - NASA has released more stunning images taken by the new James Webb Telescope. They are the sharpest infrared photos of the universe ever seen. The first images captured by the James Webb Telescope reveal the oldest documented light, which has traveled through space for more than 13 billion years. The Webb Telescope rocketed to space in December and replaces the aging Hubble scope. Nick Baczewski at the Fort Worth Museum of Science and History breaks it down in a show-and-tell conversation. "Hubble, when it launched, was operational back in 1990. This changed how the public viewed space. It changed the way we actually look at space because we can see these amazingly detailed structures and this is just Hubble times ten." "We saw the news articles this morning talking about the first images coming back from the Webb satellite and thought it was interesting," said parent Chalie Galligan. "We came to the museum today for just other reasons to enjoy the museum and saw the sign and thought what a great opportunity." The plan is to use the telescope to peer back so far that scientists get a glimpse of the early days of the universe and a closer look at cosmic objects, even in our own solar system. The Webb Telescope is more than one million miles from Earth and is sending home images so detailed it feels as if we could reach out and touch them.
Cosmology & The Universe
According to the BBC, the Atacama Cosmology Telescope (ACT) in Chile has traced the distribution of dark matter "on a quarter of the sky and across almost 14 billion years of time." From the report: In the image [here], the colored areas are the portions of the sky studied by the telescope. Orange regions show where there is more mass, or matter, along the line of sight; purple where there is less. Typical features are hundreds of millions of light-years across. The grey/white areas show where contaminating light from dust in our Milky Way galaxy has obscured a deeper view. The distribution of matter agrees very well with scientific predictions. ACT observations indicate that the "lumpiness" of the Universe and the rate at which it has been expanding after 14 billion years of evolution are just what you'd expect from the standard model of cosmology, which has Einstein's theory of gravity (general relativity) at its foundation. Recent measurements that used an alternative background light, one emitted from stars in galaxies rather than the CMB, had suggested the Universe lacked sufficient lumpiness. Another tension concerns the rate at which the Universe is expanding - a number called the Hubble constant. When [the European Space Agency's Planck observatory] looked at temperature fluctuations across the CMB, it determined the rate to be about 67 kilometres per second per megaparsec (a megaparsec is 3.26 million light-years). Or put another way - the expansion increases by 67km per second for every 3.26 million light-years we look further out into space. A tension arises because measurements of the expansion in the nearby Universe, made using the recession from us of variable stars, clock in at about 73km/s per megaparsec. It's a difference that can't easily be explained. ACT, employing its lensing technique to nail down the expansion rate, outputs a number similar to Planck's. "It's very close - about 68km/s per megaparsec," said Dr Mathew Madhavacheril from the University of Pennsylvania. ACT team-member Prof Blake Sherwin from Cambridge University, UK, added: "We and Planck and several other probes are coming in on the lower side. Obviously, you could have a scenario where both the measurements are right and there's some new physics that explains the discrepancy. But we're using independent techniques, and I think we're now starting to close the loophole where we could all be riding this new physics and one of the measurements has to be wrong." Papers describing the new results have been submitted to The Astrophysical Journal and posted on the ACT website.
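To put the two quoted values of the Hubble constant side by side: for relatively nearby galaxies, Hubble's law says the recession velocity is roughly v = H0 × d, with d the distance in megaparsecs. The short Python sketch below simply evaluates that relation for both values; it is an arithmetic illustration, not code from any of the teams mentioned.

# Recession velocity v = H0 * d for the two Hubble-constant values quoted above.
H0_CMB = 67.0    # km/s per megaparsec (CMB-based value, Planck / ACT)
H0_LOCAL = 73.0  # km/s per megaparsec (value from nearby variable stars)

for distance_mpc in (10, 100, 1000):
    v_cmb = H0_CMB * distance_mpc
    v_local = H0_LOCAL * distance_mpc
    print(f"{distance_mpc:>4} Mpc: {v_cmb:>6.0f} km/s vs {v_local:>6.0f} km/s "
          f"(difference {v_local - v_cmb:>5.0f} km/s)")

At 1,000 megaparsecs the two values imply recession speeds that differ by 6,000 km/s, which is why the disagreement between the measurement techniques is considered too large to wave away.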
Cosmology & The Universe
After fears that Europe's space scope was toast, its first images look mighty fine. Here's looking at Euclid. Astronomers are breathing a sigh of relief that the 600-megapixel Euclid wide-angle space telescope's instruments appear to be working well, despite discovering a gap in the orbiter's hull that allowed sunlight to leak through and contaminate some images. Launched a month ago, Euclid will snap billions of galaxies to help astronomers piece together the largest three-dimensional map of the universe. The telescope hasn't yet started its official observations – studying the impacts of dark matter and dark energy – but the European Space Agency has successfully tested the spacecraft's instruments. The latest images, captured by its VISible instrument (VIS) and Near-Infrared Spectrometer and Photometer (NISP), show the shapes of faraway galaxies covering a small region of the sky. They prove that Euclid is working well, according to Carole Mundell, ESA's director of science. "Our teams have worked tirelessly since the launch of Euclid on July 1 and these first engineering images give a tantalising glimpse of the remarkable data we can expect from Euclid," she declared in a statement. Getting over the dazzle problem. The space agency wasn't always so confident about the observatory's abilities – especially after it found an odd light pattern affecting some of its images. The issue was later found to be from sunlight streaming into the spacecraft through a tiny gap. The issue only appeared when Euclid was at specific angles to the Sun, and scientists realized they could snap clear images by avoiding orienting the telescope in certain directions that allowed sunlight to shine through its cracks. "After more than 11 years of designing and developing Euclid, it's exhilarating and enormously emotional to see these first images," enthused Euclid project manager Giuseppe Racca. "It's even more incredible when we think that we see just a few galaxies here, produced with minimum system tuning. The fully calibrated Euclid will ultimately observe billions of galaxies to create the biggest ever 3D map of the sky." Euclid's VIS instrument snaps detailed images of distant galaxies at visible wavelengths to capture their individual shapes. Astronomers can study each object more carefully by analyzing infrared data from its NISP device, which splits light from each galaxy and star to calculate its distance from Earth. Astroboffins using the scope will then use this information to build a 3D map of the universe dating back ten billion years and covering over a third of the sky. Scientists hope that effort will reveal secrets of how dark matter and dark energy work. Little is known about these two mysterious components, but scientists estimate that they make up 95 percent of the universe and drive its expansion. "We don't know what dark energy is," Mike Seiffert, a project scientist working at NASA's Jet Propulsion Laboratory who contributed to the Euclid mission, previously told The Register. "We know so little about it because its effect on Earth – or the Solar System, or of our own galaxy – is extremely small. It is only by looking at the largest scales in the universe that we can detect it at all."
Scientists will examine the distribution of matter in the universe and how it behaves across different distances. Across shorter distances, gravity is attractive and brings matter together – but at longer distances, dark energy takes over and drives matter further apart. Yannick Mellier, an astronomer at the Institut d'Astrophysique de Paris – part of the Euclid Consortium backing the project – declared "The outstanding first images obtained using Euclid's visible and near-infrared instruments open a new era to observational cosmology and statistical astronomy. They mark the beginning of the quest for the very nature of dark energy, to be undertaken by the Euclid Consortium." ®
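For background on how splitting a galaxy's light yields a distance, as described above: the expansion of the universe stretches spectral lines to longer wavelengths, and in the standard flat Lambda-CDM picture that redshift converts to a comoving distance through the textbook relations below (quoted for orientation; Euclid's actual pipeline is more involved).

\[ 1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{rest}}}, \qquad d_C(z) = c \int_0^z \frac{dz'}{H(z')}, \qquad H(z) = H_0 \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda} . \]

Repeating this for many galaxies, combined with their positions on the sky, is what turns a catalogue of spectra into the three-dimensional map the mission is designed to build.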
Cosmology & The Universe
The problem with studying the universe around us is that it is simply too big. The stars overhead remain too far away to interact with directly, so we are relegated to testing our theories on the formation of the galaxies based on observable data. Simulating these celestial bodies on computers has proven an immensely useful aid in wrapping our heads around the nature of reality and, as Andrew Pontzen explains in his new book, The Universe in a Box: Simulations and the Quest to Code the Cosmos, recent advances in supercomputing technology are further revolutionizing our capability to model the complexities of the cosmos (not to mention myriad Earth-based challenges) on a smaller scale. In the excerpt below, Pontzen looks at the recent emergence of astronomy-focused AI systems, what they're capable of accomplishing in the field and why he's not too worried about losing his job to one. Adapted from THE UNIVERSE IN A BOX: Simulations and the Quest to Code the Cosmos by Andrew Pontzen published on June 13, 2023 by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2023 Andrew Pontzen. As a cosmologist, I spend a large fraction of my time working with supercomputers, generating simulations of the universe to compare with data from real telescopes. The goal is to understand the effect of mysterious substances like dark matter, but no human can digest all the data held on the universe, nor all the results from simulations. For that reason, artificial intelligence and machine learning are a key part of cosmologists’ work. Consider the Vera Rubin Observatory, a giant telescope built atop a Chilean mountain and designed to repeatedly photograph the sky over the coming decade. It will not just build a static picture: it will particularly be searching for objects that move (asteroids and comets), or change brightness (flickering stars, quasars and supernovae), as part of our ongoing campaign to understand the ever-changing cosmos. Machine learning can be trained to spot these objects, allowing them to be studied with other, more specialized telescopes. Similar techniques can even help sift through the changing brightness of vast numbers of stars to find telltale signs of which ones host planets, contributing to the search for life in the universe. Beyond astronomy there is no shortage of scientific applications: Google’s artificial intelligence subsidiary DeepMind, for instance, has built a network that can outperform all known techniques for predicting the shapes of proteins starting from their molecular structure, a crucial and difficult step in understanding many biological processes. These examples illustrate why scientific excitement around machine learning has built during this century, and there have been strong claims that we are witnessing a scientific revolution. As far back as 2008, Chris Anderson wrote an article for Wired magazine that declared the scientific method, in which humans propose and test specific hypotheses, obsolete: ‘We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.’ I think this is taking things too far. Machine learning can simplify and improve certain aspects of traditional scientific approaches, especially where processing of complex information is required.
Or it can digest text and answer factual questions, as illustrated by systems like ChatGPT. But it cannot entirely supplant scientific reasoning, because that is about the search for an improved understanding of the universe around us. Finding new patterns in data or restating existing facts are only narrow aspects of that search. There is a long way to go before machines can do meaningful science without any human oversight. To understand the importance of context and understanding in science, consider the case of the OPERA experiment which in 2011 seemingly determined that neutrinos travel faster than the speed of light. The claim is close to a physics blasphemy, because relativity would have to be rewritten; the speed limit is integral to its formulation. Given the enormous weight of experimental evidence that supports relativity, casting doubt on its foundations is not a step to be taken lightly. Knowing this, theoretical physicists queued up to dismiss the result, suspecting the neutrinos must actually be traveling slower than the measurements indicated. Yet, no problem with the measurement could be found – until, six months later, OPERA announced that a cable had been loose during their experiment, accounting for the discrepancy. Neutrinos travelled no faster than light; the data suggesting otherwise had been wrong. Surprising data can lead to revelations under the right circumstances. The planet Neptune was discovered when astronomers noticed something awry with the orbits of the other planets. But where a claim is discrepant with existing theories, it is much more likely that there is a fault with the data; this was the gut feeling that physicists trusted when seeing the OPERA results. It is hard to formalize such a reaction into a simple rule for programming into a computer intelligence, because it is midway between the knowledge-recall and pattern-searching worlds. The human elements of science will not be replicated by machines unless they can integrate their flexible data processing with a broader corpus of knowledge. There is an explosion of different approaches toward this goal, driven in part by the commercial need for computer intelligences to explain their decisions. In Europe, if a machine makes a decision that impacts you personally – declining your application for a mortgage, maybe, or increasing your insurance premiums, or pulling you aside at an airport – you have a legal right to ask for an explanation. That explanation must necessarily reach outside the narrow world of data in order to connect to a human sense of what is reasonable or unreasonable. Problematically, it is often not possible to generate a full account of how machine-learning systems reach a particular decision. They use many different pieces of information, combining them in complex ways; the only truly accurate description is to write down the computer code and show the way the machine was trained. That is accurate but not very explanatory. At the other extreme, one might point to an obvious factor that dominated a machine’s decision: you are a lifelong smoker, perhaps, and other lifelong smokers died young, so you have been declined for life insurance. That is a more useful explanation, but might not be very accurate: other smokers with a different employment history and medical record have been accepted, so what precisely is the difference? Explaining decisions in a fruitful way requires a balance between accuracy and comprehensibility. 
In the case of physics, using machines to create digestible, accurate explanations which are anchored in existing laws and frameworks is an approach in its infancy. It starts with the same demands as commercial artificial intelligence: the machine must not just point to its decision (that it has found a new supernova, say) but also give a small, digestible amount of information about why it has reached that decision. That way, you can start to understand what it is in the data that has prompted a particular conclusion, and see whether it agrees with your existing ideas and theories of cause and effect. This approach has started to bear fruit, producing simple but useful insights into quantum mechanics, string theory, and (from my own collaborations) cosmology. These applications are still all framed and interpreted by humans. Could we imagine instead having the computer framing its own scientific hypotheses, balancing new data with the weight of existing theories, and going on to explain its discoveries by writing a scholarly paper without any human assistance? This is not Anderson’s vision of the theory-free future of science, but a more exciting, more disruptive and much harder goal: for machines to build and test new theories atop hundreds of years of human insight.
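As a toy illustration of the transient-spotting machine learning described at the start of this excerpt, the Python sketch below trains an off-the-shelf classifier to separate steady sources from ones whose brightness changes, using crude summary statistics of simulated light curves. Every detail here (the simulated data, the chosen features, the classifier) is a hypothetical stand-in chosen for brevity; real survey pipelines such as the Rubin Observatory's are far more sophisticated.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_light_curve(is_variable):
    """Simulate 50 brightness measurements; variable sources get a short flare added."""
    flux = 1.0 + 0.02 * rng.standard_normal(50)          # quiet baseline plus noise
    if is_variable:
        start = rng.integers(10, 40)
        flux[start:start + 5] += rng.uniform(0.2, 1.0)   # brief brightening event
    return flux

def summarise(flux):
    """Reduce a light curve to a few numbers the classifier can work with."""
    return [flux.std(), flux.max() - flux.min(), np.abs(np.diff(flux)).max()]

labels = rng.integers(0, 2, size=2000)                   # 0 = constant, 1 = variable
X = np.array([summarise(make_light_curve(bool(lab))) for lab in labels])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

In a real pipeline the classifier only flags candidates for follow-up with other telescopes; deciding what those candidates mean is still the kind of scientific judgement the excerpt argues machines cannot yet replace.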
Cosmology & The Universe
Aug. 23, 2022, 5:07 PM UTC. In between spotting distant galaxy clusters, busy star-forming regions and never-before-seen cosmic features, NASA's James Webb Space Telescope has trained its eyes on a subject closer to home, capturing spectacular new views of auroras, giant storms and swirling clouds on Jupiter. The images, released Monday by NASA, show our solar system's largest planet in stunning detail, providing a valuable glimpse of the inner workings of the gas giant. Imke de Pater, an emeritus professor of astronomy at the University of California, Berkeley, said she was surprised by Webb's observations of Jupiter. "We hadn’t really expected it to be this good, to be honest," de Pater, who led the observations of Jupiter with colleagues from the Paris Observatory, said in a statement. The planet's signature Great Red Spot, a roiling storm so large it could engulf Earth, appears white in the images because it is reflecting sunlight. Also visible in the wide-field view are Jupiter's faint rings, which scientists say are a million times fainter than the planet itself, and two tiny moons named Amalthea and Adrastea. The images show auroras extending to high altitudes at Jupiter’s northern and southern poles, the researchers said, adding that different infrared filters also captured clouds and hazy features on the planet. In the wide-field view, background galaxies appear as fuzzy smudges in the lower left of the frame, the scientists said; the scientists described these as galaxies "photobombing" the Jovian view. "It's really remarkable that we can see details on Jupiter together with its rings, tiny satellites, and even galaxies in one image," de Pater said in the statement. The views were captured by the Webb telescope's Near-Infrared Camera, which uses three infrared filters to see details that are undetectable to the human eye in visible light. The researchers worked with a California-based citizen scientist named Judy Schmidt to process data from the Webb observatory and turn those observations into images, complete with artificial coloring to highlight the planet's many features. The $10 billion Webb telescope is designed to study the earliest stars and galaxies in the universe. Researchers have said that Webb could unlock mysteries from as far back as 100 million years after the Big Bang — observations that could help astronomers understand how the modern universe came to be. (Images: NASA, ESA, CSA, Jupiter ERS Team; image processing by Ricardo Hueso (UPV/EHU) and Judy Schmidt.) Denise Chow is a reporter for NBC News Science focused on general science and climate change.
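A quick note on what "a million times fainter" means on the astronomers' logarithmic magnitude scale (a general unit conversion, not a figure supplied by the Webb team): a brightness ratio of one million corresponds to

\[ \Delta m = 2.5 \log_{10}\!\left(10^{6}\right) = 15 \ \text{magnitudes}, \]

a contrast large enough that capturing the rings and the planet's disk cleanly in a single exposure is itself notable, which is the point of de Pater's remark.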
Cosmology & The Universe
Using supernovae to study neutrinos’ strange properties: new study offers hope for a long-standing scientific problem. In a new study, researchers have taken an important step toward understanding how exploding stars can help reveal how neutrinos, mysterious subatomic particles, secretly interact with themselves. One of the less well-understood elementary particles, neutrinos rarely interact with normal matter, and instead travel invisibly through it at almost the speed of light. These ghostly particles outnumber all the atoms in the universe and are always passing harmlessly through our bodies, but due to their low mass and lack of an electric charge they can be incredibly difficult to find and study. But in a study published today in the journal Physical Review Letters, researchers at The Ohio State University have established a new framework detailing how supernovae – massive explosions that herald the death of collapsing stars – could be used as powerful tools to study how neutrino self-interactions can cause vast cosmological changes in the universe. “Neutrinos only have very small rates of interaction with typical matter, so it’s difficult to detect them and test any of their properties,” said Po-Wen Chang, lead author of the study and a graduate student in physics at Ohio State. “That’s why we have to use astrophysics and cosmology to discover interesting phenomena about them.” Thought to have been important to the formation of the early universe, neutrinos are still puzzling to scientists, despite having learned that they originate from a number of sources, such as in nuclear reactors or the insides of dying stars. But by calculating how self-interactions would affect the neutrino signal from Supernova 1987A, the nearest supernova observed in modern times, researchers found that when neutrinos do interact with themselves, they form a tightly coupled fluid that expands under relativistic hydrodynamics – the branch of physics that describes fluids moving at speeds close to that of light – in one of two different ways. In the case of what’s called a “burst outflow,” the team theorizes that much like popping a highly pressurized balloon in the vacuum of space would push energy outward, a burst produces a neutrino fluid that moves in all directions. The second case, described as a “wind outflow,” imagines a highly pressurized balloon with many nozzles, wherein neutrinos escape at a more constant flow rate, similar to a jet of steady wind. While the wind-outflow theory is more likely to take place in nature, said Chang, if the burst case is realized, scientists could see new observable neutrino signatures emitted from supernovae, allowing unprecedented sensitivity to neutrino self-interactions. One of the reasons it’s so vital to understand these mechanisms is that if neutrinos are acting as a fluid, that means they are acting together, as a collective. And if the properties of neutrinos are different as a collective than individually, then the physics of supernovae could experience changes too. But whether these changes are due solely to the burst case or the outflow case remains to be seen. “The dynamics of supernovae are complicated, but this result is promising because with relativistic hydrodynamics we know there’s a fork in the road in understanding how they work now,” said Chang. Still, further research needs to be done before scientists can cross off the possibility of the burst case happening inside supernovae as well.
Despite these uncertainties, the study is a huge milestone in answering the decades-old astrophysical issue of how neutrinos actually scatter when ejected from supernovae, said John Beacom, co-author of the study and a professor of physics and astronomy at Ohio State. This study found that in the burst case, unprecedented sensitivity to neutrino self-interactions is possible even with sparse neutrino data from SN 1987A and conservative analysis assumptions. “This problem has lain basically untouched for 35 years,” said Beacom. “So even though we were not able to completely solve how neutrinos affect supernovae, what we’re excited about is that we were able to make a substantial step forward.” Down the road, the team hopes their work will be used as a stepping stone to further investigate neutrino self-interactions. Yet because only about two or three supernovae happen per century in the Milky Way, it’s likely researchers will have to wait decades more to collect enough new neutrino data to prove their ideas. “We’re always praying for another galactic supernova to happen somewhere and soon, but the best we can do is try to build on what we know as much as possible before it happens,” said Chang. Other co-authors were Ivan Esteban, Todd Thompson and Christopher M. Hirata, all of Ohio State. This work was supported by the National Science Foundation, NASA, and the David & Lucile Packard Foundation.
Cosmology & The Universe
The James Webb Space Telescope (JWST) has captured its first image of the solar system ice giant Neptune, revealing the planet in a whole new light. The image gives astronomers their best look at Neptune's icy rings for 32 years, since the Voyager 2 spacecraft flew past the planet on its way out of the solar system. "It has been three decades since we last saw those faint, dusty bands, and this is the first time we've seen them in the infrared," Heidi Hammel, a planetary scientist at the Association of Universities for Research in Astronomy (AURA), said in a statement. Excitingly, in addition to the previously known bright, narrow Neptunian rings, the new James Webb Space Telescope image also shows some fainter dust rings around Neptune that even Voyager 2's up-close-and-personal visit to the planet in 1989 couldn't reveal — rings that scientists have never seen before. Webb's Near-Infrared Camera (NIRCam) image of Neptune, taken on 12 July 2022, brings the planet's rings into full focus for the first time in more than three decades. (Image credit: NASA, ESA, CSA, and STScI) What appears to be missing from the JWST Neptune image is the characteristic blue color that has come to be associated with the ice giant from photos taken by the Hubble Space Telescope. This blue color, which is caused by methane in the planet's atmosphere, is absent because the JWST sees Neptune in near-infrared light. Because methane in the planet's icy clouds absorbs light strongly at these wavelengths, the planet appears fairly dark to the JWST in regions not covered by bright, high-altitude clouds. Another prominent feature seen in the JWST image is a series of bright patches in Neptune's southern hemisphere. These represent high-altitude ice clouds in the ice giant's atmosphere reflecting sunlight before the methane in the clouds absorbs it. JWST's image also highlights a continuous band of high-latitude clouds surrounding a previously known vortex located at Neptune's southern pole. A thin and faint line of brightness can also be spotted circling the planet's equator, which may indicate the global circulation of Neptune's atmosphere driving winds and storms across the ice giant. The image also shows something intriguing at Neptune's northern pole. At this point in Neptune's 164-Earth-years-long orbit around the sun, its northern pole is just out of view from the JWST's position almost 1 million miles (1.5 million kilometers) from Earth. Yet, the most powerful space telescope ever created has still managed to spot an intriguing brightness in the region of Neptune's north pole. The JWST images also provide scientists with a look at seven of Neptune's moons. In particular, just above the ice giant in the zoomed-out version of its view of Neptune is a bright point of light that represents the moon Triton. This Neptunian moon is coated by a frozen layer of condensed nitrogen and appears so bright, outshining the methane-darkened Neptune, because it reflects around 70% of the sunlight that falls on it. In this version of Webb's Near-Infrared Camera (NIRCam) image of Neptune, the planet's visible moons are labeled. Neptune has 14 known satellites, and seven of them are visible in this image. (Image credit: NASA, ESA, CSA, and STScI) At a distance from the sun that is 30 times the distance between Earth and our star, Neptune may seem distant.
But this is a cosmic stone's throw in comparison to the galaxies and stars billions of light-years away that the JWST has been tailored to observe. The Neptune image further demonstrates that even though the JWST was created to view extremely distant cosmic objects, looking back in time to the universe as it existed billions of years ago, it is still delivering important and ground-breaking results from inside the solar system. Robert Lea is a science journalist in the U.K. whose articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space, Newsweek and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.’s Open University. Follow him on Twitter @sciencef1rst.
Cosmology & The Universe
Galactic explosion offers astrophysicists new insight into the cosmos. Using data from the James Webb Space Telescope's first year of interstellar observation, an international team of researchers was able to serendipitously view an exploding supernova in a faraway spiral galaxy. The study, published recently in the Astrophysical Journal Letters, provides new infrared measurements of one of the brightest galaxies in our cosmic neighborhood, NGC 1566, also known as the Spanish Dancer. Located about 40 million light-years away from Earth, the galaxy's extremely active center has led it to become especially popular with scientists aiming to learn more about how star-forming nebulae form and evolve. In this case, scientists were able to survey a Type 1a supernova—the explosion of a carbon-oxygen white dwarf star, which Michael Tucker, a fellow at the Center for Cosmology and AstroParticle Physics at The Ohio State University and a co-author of the study, said researchers caught by mere chance while studying NGC 1566. "White dwarf explosions are important to the field of cosmology, as astronomers often use them as indicators of distance," said Tucker. "They also produce a huge chunk of the iron group elements in the universe, such as iron, cobalt and nickel." The research was made possible thanks to the PHANGS-JWST Survey, which, due to its vast inventory of star cluster measurements, was used to create a reference dataset for studies of nearby galaxies. By analyzing images taken of the supernova's core, Tucker and co-author Ness Mayker Chen, a graduate student in astronomy at Ohio State who led the study, aimed to investigate how certain chemical elements are emitted into the surrounding cosmos after an explosion. For instance, light elements like hydrogen and helium were formed during the big bang, but heavier elements can be created only through the thermonuclear reactions that happen inside supernovas. Understanding how these stellar reactions affect the distribution of iron elements around the cosmos could give researchers deeper insight into the chemical formation of the universe, said Tucker. "As a supernova explodes, it expands, and as it does so, we can essentially see different layers of the ejecta, which allows us to probe the nebula's core," he said. Powered by a process called radioactive decay—wherein an unstable atom releases energy to become more stable—supernovas produce unstable isotopes whose decay releases high-energy photons. In this instance, the study specifically focused on how the isotope cobalt-56 decays into iron-56. Using data from JWST's near-infrared and mid-infrared camera instruments to investigate the evolution of these emissions, researchers found that more than 200 days after the initial event, supernova ejecta was still visible at infrared wavelengths that would have been impossible to image from the ground. "This is one of those studies where if our results weren't what we expected, it would have been really concerning," he said. "We've always made the assumption that energy doesn't escape the ejecta, but until JWST, it was only a theory." For many years, it was unclear whether fast-moving particles produced when cobalt-56 decays into iron-56 seeped into the surrounding environment, or were held back by the magnetic fields supernovas create. Yet by providing new insight into the cooling properties of supernova ejecta, the study confirms that in most circumstances, ejecta doesn't escape the confines of the explosion.
This reaffirms many of the assumptions scientists have made in the past about how these complex entities work, Tucker said. "This study validates almost 20 years' worth of science," he said. "It doesn't answer every question, but it does a good job of at least showing that our assumptions haven't been catastrophically wrong." Future JWST observations will continue to help scientists develop their theories about star formation and evolution, but Tucker said that further access to other types of imaging filters could help test them as well, creating more opportunities to understand wonders far beyond the edges of our own galaxy. "The power of JWST is really unparalleled," said Tucker. "It's really promising that we're accomplishing this kind of science and with JWST, there's a good chance we'll not only be able to do the same for different kinds of supernovas, but do it even better." More information: Ness Mayker Chen et al, Serendipitous Nebular-phase JWST Imaging of SN Ia SN 2021aefx: Testing the Confinement of 56Co Decay Energy, The Astrophysical Journal Letters (2023). DOI: 10.3847/2041-8213/acb6d8 Journal information: Astrophysical Journal Letters Provided by The Ohio State University
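As background to the cobalt-56 timeline discussed above: radioactive decay follows an exponential law, with the surviving fraction after time t equal to 0.5 raised to the power of t divided by the half-life. The Python snippet below evaluates this for cobalt-56's roughly 77-day half-life to show how much of the isotope is still decaying, and hence still heating the ejecta, around the 200-day mark of the observations; the half-life is a standard nuclear-physics value, and the calculation is an illustration rather than a figure taken from the study.

# Fraction of cobalt-56 remaining after t days, given its ~77-day half-life.
HALF_LIFE_DAYS = 77.2

for t_days in (0, 77, 200, 300):
    remaining = 0.5 ** (t_days / HALF_LIFE_DAYS)
    print(f"day {t_days:3d}: {remaining * 100:5.1f}% of the cobalt-56 remains")

Roughly a sixth of the original cobalt-56 is still present at day 200, which is why the ejecta remained bright enough at infrared wavelengths for JWST to follow.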
Cosmology & The Universe
By Ivan Baldry - professor of Astrophysics, Liverpool John Moores University
Scientists have long tried to explain the origin of a mysterious, large and anomalously cold region of the sky. In 2015, they came close to figuring it out as a study showed it to be a “supervoid” in which the density of galaxies is much lower than it is in the rest of the universe. However, other studies haven’t managed to replicate the result. More recently, research led by Durham University, submitted for publication in the Monthly Notices of the Royal Astronomical Society, suggests the supervoid theory doesn’t hold up. Intriguingly, that leaves open a pretty wild possibility – the cold spot might be evidence of a collision with a parallel universe. But before you get too excited, let’s look at how likely that would actually be.
The cold spot can be seen in maps of the “cosmic microwave background” (CMB), which is the radiation left over from the birth of the universe. The CMB is like a photograph of what the universe looked like when it was 380,000 years old and had a temperature of 3,000 kelvin. What we find is that it is very smooth with temperature deviations of less than one part in 10,000. These deviations can be explained pretty well by our models of how the hot universe evolved up to an age of 380,000 years. However, the cold spot is harder to work out. It is an area of the sky about five degrees across that is colder by one part in 18,000. This is readily expected for some areas covering about one degree – but not five. The CMB should look much smoother on such large scales.
The power of galaxy data
So what caused it? There are two main possibilities. One is that it could be caused by a supervoid that the light has travelled through. But it could also be a genuine cold region from the early universe. The authors of the new research tried to find out by comparing new data on galaxies around the cold spot with data from a different region of the sky. The new data was obtained by the Anglo-Australian Telescope, the other by the GAMA survey.
The GAMA survey, and other surveys like it, take the “spectra” of thousands of galaxies. Spectra are images of light captured from a galaxy and spread out according to its wavelengths. This provides a pattern of lines emitted by the different elements in the galaxy. The further away the galaxy is, the more the expansion of the universe shifts these lines to appear at longer wavelengths than they would appear on Earth. The size of this so-called “redshift” therefore gives the distance to the galaxy. Spectra coupled with positions on the sky can give us 3D maps of galaxy distributions.
But the researchers concluded that there simply isn’t a large enough void of galaxies to explain the cold spot – there was nothing too special about the galaxy distribution in front of the cold spot compared to elsewhere. So if the cold spot is not caused by a supervoid, it must be that there was a genuinely large cold region that the CMB light came from. But what could that be? One of the more exotic explanations is that there was a collision between universes in a very early phase.
Controversial interpretation
The idea that we live in a “multiverse” made up of an infinite number of parallel universes has long been considered a possibility. But physicists still disagree about whether it could represent a physical reality or whether it’s just a mathematical quirk. 
It is a consequence of important theories like quantum mechanics, string theory and inflation.
Quantum mechanics oddly states that any particle can exist in “superposition” – which means it can be in many different states simultaneously (such as locations). This sounds bizarre but it has been observed in laboratories. For example, electrons can travel through two slits at the same time – when we are not watching. But the minute we observe each slit to catch this behaviour, the particle chooses just one. That is why, in the famous “Schrödinger’s cat” thought experiment, an animal can be alive and dead at the same time.
But how can we live with such strange implications? One way to interpret it is to choose to accept that all possibilities are true, but that they exist in different universes. So, if there is mathematical backing for the existence of parallel universes, is it so crazy to think that the cold spot is an imprint of a colliding universe? Actually, it is extremely unlikely.
There is no particular reason why we should just now be seeing the imprint of a colliding universe. From what we know about how the universe formed so far, it seems likely that it is much larger than what we can observe. So even if there are parallel universes and we had collided with one of them – unlikely in itself – the chances that we’d be able to see it in the part of the universe that we happen to be able to observe on the sky are staggeringly small.
The paper also notes that a cold region of this size could occur by chance within our standard model of cosmology – with a 1%-2% likelihood. While that does make it unlikely, too, it is based on a model that has been well tested so we cannot rule it out just yet. Another potential explanation lies in the natural fluctuations in mass density which give rise to the CMB temperature fluctuations. We know these exist on all scales but they tend to get smaller toward large scales, which means they may not be able to create a cold region as big as the cold spot. But this may simply mean that we have to rethink how such fluctuations are created.
It seems that the cold spot in the sky will continue to be a mystery for some time. Although many of the explanations out there seem unlikely, we don’t necessarily have to dismiss them as pure fantasy. And even if it takes time to find out, we should still revel in how far cosmology has come in the last 20 years. There’s now a detailed theory explaining, for the most part, the glorious temperature maps of the CMB and the cosmic web of galaxies which spans billions of light years.
Source: The Conversation
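The redshift-to-distance step described above reduces, for nearby galaxies, to two short formulas. Here is a minimal sketch, assuming the low-redshift Hubble-law approximation and an illustrative Hubble constant of 70 km/s/Mpc; real surveys such as GAMA fit a full cosmological model instead.

```python
# Redshift from a spectral line, then a low-z Hubble-law distance.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per Mpc (assumed round value)

def redshift(observed_nm: float, rest_nm: float) -> float:
    """z = (lambda_observed - lambda_rest) / lambda_rest."""
    return (observed_nm - rest_nm) / rest_nm

def distance_mpc(z: float) -> float:
    """Approximate distance d = c*z / H0, valid only for small z."""
    return C_KM_S * z / H0

# Example: the hydrogen H-alpha line (656.3 nm at rest) observed at 689.1 nm.
z = redshift(689.1, 656.3)
print(f"z = {z:.3f}, distance ~ {distance_mpc(z):.0f} Mpc")  # ~0.050, ~214 Mpc
```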
Cosmology & The Universe
The first full-color image from NASA's James Webb Space Telescope, a revolutionary apparatus designed to peer through the cosmos to the dawn of the universe, shows the galaxy cluster SMACS 0723, known as Webb’s First Deep Field, in a composite made from images at different wavelengths taken with a Near-Infrared Camera and released July 11, 2022. NASA, ESA, CSA, STScI, Webb ERO Production Team/Handout via REUTERS
WASHINGTON, July 11 (Reuters) - U.S. President Joe Biden, pausing from political pressures to bask in the glow of the cosmos, on Monday released the debut photo from NASA's James Webb Space Telescope - an image of a galaxy cluster revealing the most detailed glimpse of the early universe ever seen. The White House sneak peek of Webb's first high-resolution, full-color image came on the eve of a larger unveiling of photos and spectrographic data that NASA plans to showcase on Tuesday at the Goddard Space Flight Center in suburban Maryland. The $9 billion Webb observatory, the largest and most powerful space science telescope ever launched, was designed to peer through the cosmos to the dawn of the known universe, ushering in a revolutionary era of astronomical discovery. The image showcased by Biden and NASA chief Bill Nelson showed the galaxy cluster SMACS 0723 as it appeared 4.6 billion years ago; its combined mass acts as a "gravitational lens," distorting space to greatly magnify the light coming from more distant galaxies behind it. At least one of the faint, older specks of light appearing in the "background" of the photo - a composite of images of different wavelengths of light - dates back more than 13 billion years, Nelson said. That places its origin within about 800 million years of the Big Bang, the theoretical flashpoint that set the expansion of the known universe in motion some 13.8 billion years ago. "It's a new window into the history of our universe," Biden said before the picture was unveiled. "And today we're going to get a glimpse of the first light to shine through that window: light from other worlds, orbiting stars far beyond our own. It's astounding to me." He was joined at the Old Executive Office Building of the White House complex by Vice President Kamala Harris, who chairs the U.S. National Space Council.
FROM GRAIN OF SAND IN THE SKY
On Friday, the space agency posted a list of five celestial subjects chosen for its showcase debut of Webb. These include SMACS 0723, a bejeweled sliver of the distant cosmos that according to NASA offers "the most detailed view of the early universe to date." It also constitutes the deepest and sharpest infrared image of the distant cosmos ever taken. The thousands of galaxies were captured in a tiny patch of the sky roughly the size of a grain of sand held at arm's length by someone standing on Earth, Nelson said. Webb was constructed under contract by aerospace giant Northrop Grumman Corp. 
It was launched to space for NASA and its European and Canadian counterparts on Christmas Day 2021 from French Guiana, on the northeastern coast of South America. The highly anticipated release of its first imagery follows six months of remotely unfurling Webb's various components, aligning its mirrors and calibrating instruments. With Webb now finely tuned and fully focused, scientists will embark on a competitively selected list of missions exploring the evolution of galaxies, the life cycles of stars, the atmospheres of distant exoplanets and the moons of our outer solar system. Built to view its subjects chiefly in the infrared spectrum, Webb is about 100 times more sensitive than its 30-year-old predecessor, the Hubble Space Telescope, which operates mainly at optical and ultraviolet wavelengths. The much larger light-collecting surface of Webb's primary mirror - an array of 18 hexagonal segments of gold-coated beryllium metal - enables it to observe objects at greater distances, thus further back in time, than Hubble or any other telescope. All five of Webb's introductory targets were previously known to scientists. Among them are two enormous clouds of gas and dust blasted into space by stellar explosions to form incubators for new stars - the Carina Nebula and the Southern Ring Nebula, each thousands of light years away from Earth. The collection also includes a galaxy cluster known as Stephan's Quintet, which was first discovered in 1877 and encompasses several galaxies described by NASA as "locked in a cosmic dance of repeated close encounters." NASA will also present Webb's first spectrographic analysis of an exoplanet - one roughly half the mass of Jupiter that lies more than 1,100 light years away - revealing the molecular signatures of filtered light passing through its atmosphere. Reporting by Jeff Mason in Washington; Writing and additional reporting by Steve Gorman in Los Angeles; Additional reporting by Joey Roulette in Washington; Editing by Richard Chang
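The "gravitational lens" described above has a characteristic angular scale, the Einstein radius, set by the lens mass and the lens-source geometry. The sketch below evaluates the standard point-lens formula theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)); the cluster mass and distances are illustrative placeholders, not measured values for SMACS 0723.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
GPC = 3.086e25       # metres per gigaparsec

M = 1e14 * M_SUN                                   # assumed cluster mass
D_l, D_s, D_ls = 1.6 * GPC, 3.2 * GPC, 2.4 * GPC   # assumed lens/source distances

theta_rad = math.sqrt(4 * G * M / C**2 * D_ls / (D_l * D_s))
print(f"Einstein radius ~ {theta_rad * 206265:.0f} arcseconds")  # tens of arcsec
```

Background galaxies near that ring-shaped region appear stretched and magnified, which is exactly the arc pattern visible in the deep-field image.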
Cosmology & The Universe
Scientists could soon test Einstein's theory of general relativity by measuring the distortion of time. According to new research published June 22 in the journal Nature Astronomy, the newly proposed method turns the edge of space and time into a vast cosmic lab to investigate if general relativity can account for dark matter — a mysterious, invisible form of matter that can only be inferred by its gravitational influence on the universe's visible matter and energy — as well as the accelerating expansion of the universe due to dark energy. The method is ready to be tested on future surveys of the deep universe, according to the study authors. General relativity states that gravity is the result of mass warping the fabric of space and time, which Einstein lumped into a four-dimensional entity called space-time. According to relativity, time passes more slowly close to a massive object than it does in a mass-less vacuum. This change in the passing of time is called time distortion. Since its introduction in 1915, general relativity has been tested extensively and has become our best description of gravity on tremendous scales. But scientists aren't yet sure if it can explain invisible dark matter and dark energy, which together account for around 95% of the energy and matter in the universe. "Time distortion predicted by general relativity has already been measured very precisely at small distances," Camille Bonvin, lead study author and an associate professor at the University of Geneva, told Live Science via email. "It has been measured for planes flying around the Earth, for stars in our galaxy, and also for clusters of galaxies. We propose a method to measure the distortion of time at very large distances." The method suggests testing time distortion by measuring redshift, the change in the frequency of light an object emits as it moves away from us. Bonvin said the difference here is that this technique measures redshift caused as light attempts to climb out of a gravitational well, a "dent" in space-time created by a massive object. "This climb changes the frequency of the light because time passes at different rates inside and outside of the gravitational well," she said. "As a consequence, the color of the light is changed; it is shifted to red. … By measuring gravitational redshift, we obtain a measurement of the distortion of time."
Time to test general relativity
Time distortion suggests that time is not absolute in our universe but rather passes at varying rates depending on gravitational fields. This idea is not exclusive to general relativity. "Time distortion exists in all modern theories of gravity," Bonvin said. "However, the amplitude of the time distortion — how much the presence of a massive object slows down time — varies from theory to theory." In general relativity, the distortions of time and space are predicted to be the same; in other theories of gravity, this is not always the case. That means that by measuring the distortion of time and comparing it to the distortion of space, physicists can test the validity of general relativity. The team's new method could also test another pillar of standard cosmology: Euler's equation, which astronomers use to calculate the movement of galaxies. Specifically, the team's proposed measurement of time distortion could prove whether dark matter obeys Euler's equation, as prior studies of time distortion have presumed. "We have never observed a particle of dark matter directly. 
We have only felt its presence gravitationally," Bonvin said. "As a consequence, we don't know if dark matter obeys the Euler equation. It may very well be that dark matter is affected by additional forces or interactions in our universe besides gravity. If this is the case, then dark matter will not obey the Euler equation." The team's method could be employed by future missions, including the European Space Agency's Euclid telescope, which is set to launch in July, and the Dark Energy Spectroscopic Instrument, which is three years into its five-year survey of the universe. "It will be possible to measure the distortion of time with the data delivered by these surveys," Bonvin said. "This is very interesting because, for the first time, we will be able to compare the distortion of time with that of space, to test if general relativity is valid, and we will also be able to compare the distortion of time with the velocity of galaxies, to see if Euler's equation is valid. With one new measurement, we will be able to test two fundamental laws." Robert Lea is a science journalist in the U.K. who specializes in science, space, physics, astronomy, astrophysics, cosmology, quantum mechanics and technology. Rob's articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.’s Open University
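The weak-field limit of the gravitational redshift discussed above is simple enough to evaluate directly. A minimal sketch, assuming z ≈ GM/(rc^2) (valid only when the shift is tiny) and textbook values for the Sun:

```python
# Weak-field gravitational redshift: z ~ G*M / (r * c^2) for z << 1.
G = 6.674e-11   # m^3 kg^-1 s^-2
C = 2.998e8     # m/s

def grav_redshift(mass_kg: float, radius_m: float) -> float:
    """Approximate redshift of light emitted at radius_m from mass_kg."""
    return G * mass_kg / (radius_m * C**2)

# Light escaping the Sun's surface (M ~ 1.989e30 kg, R ~ 6.96e8 m):
print(f"solar gravitational redshift ~ {grav_redshift(1.989e30, 6.96e8):.1e}")
# ~2e-6: clocks at the surface also run slower by the same tiny fraction,
# which is the 'distortion of time' the proposed surveys would measure.
```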
Cosmology & The Universe
NASA released the latest image from its James Webb Space Telescope on Tuesday, showing tens of thousands of young stars in a stellar nursery dubbed the "Cosmic Tarantula." The nebula, located 161,000 light-years away, is the largest star-forming region of all galaxies close to the Milky Way. Radiation from young stars, which glow pale blue, has hollowed out a cavity in the nebula that can be seen in the center of the image.
NASA released an image of a ‘Cosmic Tarantula’ captured with the James Webb Space Telescope's Near-Infrared Camera. (NASA, ESA, CSA, STScI, Webb ERO Production Team)
"Only the densest surrounding areas of the nebula resist erosion by these stars’ powerful stellar winds, forming pillars that appear to point back toward the cluster," NASA explained. "These pillars contain forming protostars, which will eventually emerge from their dusty cocoons and take their turn shaping the nebula." Astronomers have long studied the Tarantula Nebula, which got its name from its resemblance to a burrowing tarantula's home, but Webb's Near-Infrared Camera brought it into clearer focus than ever before. It's the latest revelation about the universe from Webb, which is orbiting 1 million miles from Earth. The telescope can capture infrared radiation to get a new view of planets, stars and galaxies, as well as the first-ever direct image of a planet outside our solar system. Paul Best is a reporter for Fox News Digital. Story tips can be sent to Paul.best@fox.com and on Twitter: @KincaidBest.
Cosmology & The Universe
September 1, 2022 / 8:19 PM / CBS News
NASA's James Webb Space Telescope has captured its first direct image of a planet located outside of our solar system. NASA on Thursday revealed images of the exoplanet, dubbed HIP 65426 b, as seen through four different light filters. "This is a transformative moment, not only for Webb but also for astronomy generally," Sasha Hinkley, the associate professor of physics and astronomy at the University of Exeter in the United Kingdom, said, according to NASA. "It was really impressive how well the Webb coronagraphs worked to suppress the light of the host star." Hinkley led the observations of HIP 65426 b with an international team that included members of the European Space Agency and the Canadian Space Agency, NASA said. Located 355 light-years from Earth, the exoplanet is about six to twelve times the mass of Jupiter, according to NASA. It's only about 15 to 20 million years old, which is relatively young for a planet. Earth, by comparison, is 4.5 billion years old, NASA said. The exoplanet is a gas giant with no rocky surface, meaning the planet is uninhabitable, NASA said. HIP 65426 b was first discovered in 2017, but the Webb Telescope was able to capture the clearest images of the exoplanet to date. According to NASA, taking direct images of exoplanets is challenging because of the brightness of the stars they orbit. But because HIP 65426 b is about 100 times farther from its host star than Earth is from the Sun, Webb was able to capture the planet separate from the star, NASA said. "Obtaining this image felt like digging for space treasure," Aarynn Carter, a postdoctoral researcher at the University of California, Santa Cruz, who led the analysis of the images, said. "At first all I could see was light from the star, but with careful image processing I was able to remove that light and uncover the planet." The Webb Telescope, the most expensive science probe ever built, launched in December 2021, with the goal of studying the origins of the universe. Webb has already beamed back the most detailed images of space seen to date, and scientists are eager to combine its findings with past revelations to continue piecing together our universe's history. "I think what's most exciting is that we've only just begun," Carter added. "There are many more images of exoplanets to come that will shape our overall understanding of their physics, chemistry, and formation. We may even discover previously unknown planets, too." Sophie Lewis contributed reporting.
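The geometry behind that "100 times farther" point is easy to check. A separation of s astronomical units seen from d parsecs subtends s/d arcseconds, so, using the article's round numbers (which are approximations):

```python
# Apparent star-planet separation of HIP 65426 b as seen from Earth.
LY_PER_PC = 3.2616

dist_pc = 355 / LY_PER_PC   # 355 light-years expressed in parsecs (~109 pc)
sep_au = 100.0              # ~100x the Earth-Sun distance, per NASA

theta_arcsec = sep_au / dist_pc
print(f"apparent separation ~ {theta_arcsec:.2f} arcseconds")  # ~0.9"
```

Roughly 0.9 arcseconds is a wide gap by space-telescope standards, well above Webb's near-infrared resolution of under a tenth of an arcsecond, which is why the coronagraphs could blot out the star and leave the planet visible.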
Cosmology & The Universe
Egyptian-American astrophysicist Sarafina El-Badry Nance’s debut memoir, Starstruck, offers a window on what it is like growing up to be a scientist today as a woman of colour. Nance, 30, is a passionate communicator of cosmology, and an advocate for women’s health, after a preventive double mastectomy. The book intertwines her personal story with explanations of what we know about the universe. Nance is completing her PhD at the University of California, Berkeley, where she is studying exploding stars or supernovae. Isn’t this a young age to be writing a memoir? You still have so much of your personal and professional life ahead of you. It is, but I don’t think it means it isn’t the right time. It is immensely challenging and scarring to push through educational systems and institutions built for straight white men. There is a value in sharing my experience now. My hope is the book resonates with other young women, but also anyone who has felt othered or sought to belong. It is also for anyone curious about the cosmos. Where does your passion for astronomy come from? I fell in love with the night sky when I was four or five. I would listen to StarDate [a US National Public Radio show], drawn in by the ethereal voice of its then presenter. But ultimately, it was the way these glimmering objects I was seeing contextualised everything. From a young age I felt a lot of anxiety. I was sensitive to my parents’ dynamic [they argued a lot] and I felt pressure to succeed at school. The vastness of the night sky gave me a sense of reprieve because I felt so small. That feeling has never left me and it continues to act as a ballast when I get overwhelmed. Women and minorities remain underrepresented in Stem (science, technology, engineering and mathematics) and it is perhaps little wonder given your experiences. They range from being told as a 10-year-old by an astronomer visiting your science camp: “Astronomy isn’t for you”, to a physics professor showering your class with jokes about sex workers and infidelity. How did this affect you? Those type of comments, compounded over time, created an insidious belief that I didn’t belong and never would. It’s hard to identify the difference between your worth and what somebody else tells you your worth is. And so much happens subconsciously. How can we move the needle on representation? First and foremost, we need to stop thinking women and people of colour aren’t interested in Stem. That’s just not the case and they are being pushed out of Stem fields or never end up pursuing them because of it. Then we need more allies to support people throughout their journey. I was lucky to have some incredible mentors, who did happen to be white men. They used their privilege and power to support me in accessing opportunities. A necessary ingredient in dismantling systems of oppression is those with privilege and power stepping up. You underwent a preventive double mastectomy and breast reconstruction in 2019, when you were 26. The procedure is somewhat controversial because its benefit isn’t guaranteed… When I was 23, my dad was diagnosed with highly aggressive metastatic prostate cancer (he is still here and doing well, considering). Genetic testing revealed both he and I carried the BRCA2 genetic mutation, which is inherited and increases the risk of many different cancers, including breast, ovarian and prostate. I started the recommended monitoring protocol, which is getting a breast MRI every year, when I entered graduate school. 
My first one came back with a suspicious mass. Thankfully, it was benign, but I knew I didn’t want to have to go through a lifetime with this anxiety. Through my preventive double mastectomy, I have reclaimed some agency: there is no guarantee I’ll never get breast cancer, but it has drastically reduced my odds from 87% to less than 5%. It is such an individual decision – everybody has different risk factors, family histories and ways that they want to mitigate their risk – but, for me, it was absolutely right and I have no regrets. You did a swimsuit photoshoot for Sports Illustrated in 2022. How did that come about and aren’t you bothered by the objectification of women the magazine promotes? There was an open call and a friend who knew what I had gone through recovering from my surgery encouraged me to apply. The application process was submitting a video about my passions and my surgery decision – not bikini photos! Of course, I chafe against the way society tends to think that a woman’s worth is in her body. But I did this for me, to re-establish a relationship with my body, not for anyone else. Impostor syndrome – this feeling of non-belonging that particularly crops up for women and minorities – is something that has deeply affected you, resulting in anxiety and panic attacks. How do you combat it? Finding communities of people who look like you helps, as do supportive mentors. But the way I think about impostor syndrome has also evolved. I used to think I had generated it. But the reality is, it is my body recognising I am in a place that is not created or maintained for someone like me. And that is not imaginary: our broader systems and institutions inform these feelings of non-belonging. I will probably always live with [impostor syndrome]. But rather than self-flagellating internal narratives about not being smart enough or good enough (which I then reprimand myself for), I am trying to turn things back on the system. What is the focus of your PhD and what comes next? I am using explosions of massive, single-star systems [Type IIP supernovae] to try to work out the current rate of the expansion of the universe. We know the universe is expanding, and this expansion is accelerating due to this unseeable force we call dark energy, but we don’t know exactly how fast. Type Ia supernovae, which have been used historically because they all explode with the same brightness, give a rate that disagrees with estimates based on the early universe, a discrepancy known as the Hubble tension. I am using these other types of supernovae to try to resolve the tension. I am planning to graduate within the next year. I’m not sure what’s next, but I’m excited to combine my love for science, space and communication in hopefully unique ways. What advice would you give young women who want an astronomy career? Don’t let anybody tell you that you’re not cut out for something. Nobody gets to determine what you love or how you love it. Systems of privilege will inevitably show up in different ways that make it difficult, but as long as you feel safe and it is rewarding, keep at it.
Cosmology & The Universe
In late 2021, Salvatore Torquato, on sabbatical from Princeton's Department of Chemistry, reached across the aisle as it were and invited a young astrophysicist at the Institute for Advanced Study to apply the tools of statistical mechanics to his own work on the distribution of galaxies. The astrophysicist, Oliver Philcox, now a postdoc at the Simons Foundation, was intrigued. A year-long collaboration ensued. The questions at the heart of their unusual partnership were straightforward: can the statistical descriptors Torquato has worked with throughout his career find application in unlikely places like cosmology, and can they accurately characterize the complexity in the distribution of galaxies? The answer to both questions: yes, indeed. Their collaboration came to fruition this week with a paper in Physical Review X, "The Disordered Heterogeneous Universe: Galaxy Distribution and Clustering Across Length Scales." In it, the researchers demonstrate they can uncover useful information about the spatial distribution of galaxies from a few descriptors more commonly used to classify the microstructure of materials. Astrophysicists have long explored questions about the large-scale structure of the Universe through standard tools of physical cosmology. What Torquato and Philcox did was offer proof that a new array of descriptors can be used to characterize structural data across length scales, from the atomic scale up to the largest scale there is: the Universe itself. Torquato uses the word "zoology" to capture the array of theoretical and computational techniques he uses in his work. What he means is: applying statistical descriptors that describe complex materials microstructures to determine their physical and chemical properties at the macroscale. Applying these techniques on the largest scale to locate similarities, Torquato and Philcox treated galaxies as a cloud of individual points akin to particles in a material. "So, okay, I have two regions of space: it can be the galaxies and then everything outside the galaxies. Among other things, you can study the holes between the galaxies similarly to the way you would study the structure of materials," said Torquato, a theoretical chemist and the Lewis Bernard Professor of Natural Sciences, Professor of Chemistry and the Princeton Materials Institute. "If I say, I want to put a ball between the galaxies that doesn't touch any of the galaxies, how big a ball do I need? You could apply that statistical question to any complicated structure, whether it's the distribution of galaxies or the distribution of atoms. That's the beauty of it. "Interestingly," he added, "the unique structure of the Universe provides new challenges to ascertain even better descriptors for describing terrestrial materials." Philcox, formerly a graduate student in Princeton's Department of Astrophysical Sciences, embraced this "zoology" to enlarge his own toolbox. A key example was his use of the pair-connectedness function, which Philcox defines as a particular way to characterize materials by looking at the distribution of pairs of points. "Delving into the zoology with Sal certainly led to some interesting discoveries of statistics used in materials science that could be used in cosmology, but hadn't yet, the pair-connectedness function being the most notable one," said Philcox. "Conventional cosmological statistics answer the question: if I pick two points at random, what is their separation, probabilistically? 
"The pair-connectedness function does a similar thing but includes topological information. Essentially, it groups the particles in a material into connected structures, then looks at the distribution of separations between two points within that structure, rather than globally." Using this and other functions, researchers were able to generate tables of numbers that served as a measure of order or disorder across length scales. When applied to questions of spatial relationships between galaxies, the tools underscored a kind of correlated disorder -- a complex structural property that is "definitely" not random. "We're asking exactly the same questions about large-scale structure that cosmologists have always asked using more standard descriptors: how do we describe this structure; how do we characterize it; how do we quantify it; what can we get from it in terms of the physics," Torquato said. "We're just using some new theoretical tools to do so." Added Philcox: "I think it's an important message that there are some conceptually very simple tools that can allow us to extract new information about the Universe, particularly with regard to its clustering, that are quite orthogonal to what's already used. We're excited to see how these can be used in practice." Story Source: Journal Reference: Cite This Page:
Cosmology & The Universe
By Janie Hoormann - The University of Queensland
Black holes can form when a massive star dies. Stars have a lot of mass which means there is a lot of gravity pulling in on the star. Gravity is the same force that keeps you on Earth so you don’t float into space! These stars are also made up of very hot gas which lets off a lot of heat. This creates a force which pushes on the star from the inside out. Normally the pull from gravity and the push from the heat balance each other out. But, as the star gets older, it burns up all of the fuel and there isn’t anything left to push out anymore. Now gravity takes over and all of the mass of the star falls in on itself into a single point. This is what we call a black hole.
You will never be able to escape a black hole
Because black holes are made up of a lot of mass squished into a very small area of space (in science speak we say black holes are very dense) they create a lot of gravity. This pulls in anything that gets too close. The pull they create is so strong that if you get too close to a black hole – even if you are travelling away from it at the fastest speed it is possible to go – you will never be able to escape. The boundary where this happens is what astronomers call the event horizon. Once you are inside the event horizon of the black hole you will never be able to leave. Black holes were given that name because if you were to take a picture of one, you wouldn’t be able to see anything. No light would be able to escape the black hole and make it to the camera (and after all, all a camera does is record light). You would just see a picture of the universe with a dark circle around where the black hole sits. Sadly, it is really hard to get a camera good enough to take pictures like that. Instead, astronomers study black holes by looking at the stuff that is getting sucked into the black holes, before it gets too close and goes past the event horizon. There is no way for us to see what happens once you get inside.
So, where do they lead to?
Now to the big question: what happens once you go into a black hole and past the event horizon? The answer is that we don’t actually know yet. We are still trying to figure that out! One idea is that black holes form things called wormholes. You can read this Curious Kids article to find out all about wormholes. These wormholes act as tunnels between two different parts of space. This means that you could step into a black hole and end up in a completely different part of our universe. You might even end up in a different universe! Astronomers have spent a lot of time trying to describe how wormholes could form and work. We won’t know for sure if that is really what happens once you go through a black hole though until we figure out a way to see it happen. Maybe one day you will become a scientist and help us find these answers. Your excellent question shows you are on the right track.
Source: The Conversation
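For readers who want a number to go with the article above: the event horizon it describes sits at the Schwarzschild radius, r_s = 2GM/c^2. A minimal sketch with textbook constants:

```python
# Schwarzschild radius: the size of the event horizon for a given mass.
G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s
M_SUN = 1.989e30  # kg

def schwarzschild_radius_km(mass_kg: float) -> float:
    """Event-horizon radius in kilometres, r_s = 2GM/c^2."""
    return 2 * G * mass_kg / C**2 / 1000.0

print(f"1 solar mass   : {schwarzschild_radius_km(M_SUN):.1f} km")       # ~3 km
print(f"10 solar masses: {schwarzschild_radius_km(10 * M_SUN):.0f} km")  # ~30 km
```

Squeeze the Sun's mass inside about 3 kilometres and not even light gets back out, which is exactly the point of no return the article describes.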
Cosmology & The Universe
Earth has been hit by a blast of energy from a dead star so powerful that scientists can't fully explain it. The intense gamma rays – detected using a vast system of telescopes in Namibia – would sizzle humans to a crisp if we were exposed to them. They originate from the Vela Pulsar around 1,000 light years from Earth, which has already been compared in appearance to the mask from the Phantom of the Opera. The pulsar is the remains of a massive star that blew up an estimated 10,000 years ago as a supernova, then collapsed in on itself. British astronomer Dame Jocelyn Bell Burnell was the first person to discover a pulsar in 1967, but this study marks the highest energy rays from a pulsar yet seen. Sadly, it doesn't mean that aliens are trying to contact us, according to study author Arache Djannati-Atai from the Astroparticle & Cosmology (APC) laboratory in France. 'It is true that when they were first discovered back in 1967, the sources were named LGM1 and LGM2 for little green men, but that was almost a joke,' he told MailOnline. 'We know for sure pulsars are corpses of massive stars and there is no need for any alien intelligence to produce the signals that we see on Earth.' Pulsars are described as left-overs of stars that spectacularly exploded in a supernova, the largest explosion that takes place in space. These pulsars emit rotating beams of electromagnetic radiation, somewhat like cosmic lighthouses. If their beam sweeps across our solar system, we see flashes of radiation at regular time intervals. These flashes, also called pulses of radiation, can be searched for in different energy bands of the electromagnetic spectrum. 'These dead stars are almost entirely made up of neutrons and are incredibly dense,' said HESS scientist and study author Emma de Oña Wilhelmi. 'A teaspoon of their material has a mass of more than five billion tonnes, or about 900 times the mass of the Great Pyramid of Giza.' One particular pulsar that's long been of interest to scientists is the Vela Pulsar, located about 1,000 light years away in the Southern sky in the constellation Vela. The Vela Pulsar is only about 12 miles in diameter and makes over 11 complete rotations every second, faster than a helicopter rotor. As the pulsar whips around, it spews out a jet of charged particles that race out along the pulsar’s rotation axis at about 70 per cent of the speed of light. Using the High Energy Stereoscopic System (HESS) telescope observatory in Namibia, the scientists studied gamma rays – which have the smallest wavelengths but the most energy of any wave in the electromagnetic spectrum – being emitted from the Vela Pulsar. The energy of these gamma rays clocked in at 20 tera-electronvolts, or about 10 trillion times the energy of visible light. This is an order of magnitude larger than in the case of the Crab pulsar, the only other pulsar detected in the tera-electronvolt energy range. Scientists think that the source of this radiation may be fast electrons produced and accelerated in the pulsar's magnetosphere – its system of magnetic fields. Much like planets including Earth, pulsars have a magnetosphere, an invisible forcefield that funnels jets of particles out along the two magnetic poles. The magnetosphere is made up of plasma and electromagnetic fields that surround and co-rotate with the star. According to the study authors, the Vela Pulsar now officially holds the record as the pulsar with the highest-energy gamma rays discovered to date, which could revise existing models of astronomy. 
'This discovery is important as we have made significant progress in probing pulsars at their extreme energy limit,' Djannati-Atai told MailOnline. 'Within the zoo of cosmic beasts pulsars are indeed fantastic objects – as neutron stars, they are extremely dense states of matter and have very intense magnetic fields. 'Probing at their energy limit the phenomena taking place in pulsars and their environment helps us to improve or even to revise our theoretical models of the processes and physical conditions there. 'It also provides for a better understanding of other very dense and highly magnetised objects which act as cosmic accelerators, e.g. the magnetospheres of black holes.' The new study has been published in the journal Nature Astronomy.
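Two of the numbers in the article are easy to sanity-check. A short sketch (the 2 eV visible-photon energy is a typical assumed value, not a figure from the study):

```python
import math

# (1) A 20 TeV gamma ray versus a typical ~2 eV visible photon:
gamma_ev, visible_ev = 20e12, 2.0
print(f"energy ratio ~ {gamma_ev / visible_ev:.0e}")  # 1e13, i.e. 10 trillion

# (2) Equatorial speed of a ~12-mile (~19 km) star spinning 11 times a second:
radius_m = 19_000 / 2
v_eq = 2 * math.pi * radius_m * 11   # circumference x rotation rate
print(f"equatorial speed ~ {v_eq / 1000:.0f} km/s")   # hundreds of km/s
```

Both checks line up with the article's claims, and the second shows why only something as compact as a neutron star can spin that fast without flying apart.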
Cosmology & The Universe
The simulation shows the moon forming from the shattered remains of Theia and parts of Earth's ejected mantle. (Image credit: Dr Jacob Kegerreis)
The moon could have formed immediately after a cataclysmic impact that tore off a chunk of Earth and hurled it into space, a new study has suggested. Since the mid-1970s, astronomers have thought that the moon could have been made by a collision between Earth and an ancient Mars-size protoplanet called Theia; the colossal impact would have created an enormous debris field from which our lunar companion slowly formed over thousands of years. But a new hypothesis, based on supercomputer simulations made at a higher resolution than ever before, suggests that the moon's formation might not have been a slow and gradual process after all, but one that instead took place within just a few hours. The scientists published their findings October 4 in the journal The Astrophysical Journal Letters. "What we have learnt is that it is very hard to predict how much resolution you need to simulate these violent and complex collisions reliably — you simply have to keep testing until you find that increasing the resolution even further stops making a difference to the answer you get," Jacob Kegerreis, a computational cosmologist at Durham University in England, told Live Science. Scientists got their first clues about the moon's creation after the return of the Apollo 11 mission in July 1969, when NASA astronauts Neil Armstrong and Buzz Aldrin brought 47.6 pounds (21.6 kilograms) of lunar rock and dust back to Earth. The samples dated to around 4.5 billion years ago, placing the moon's creation in the turbulent period roughly 150 million years after the formation of the solar system. Other clues point to our largest natural satellite being birthed by a violent collision between Earth and a hypothetical planet, which scientists named after the mythic Greek titan Theia — the mother of Selene, goddess of the moon. This evidence includes similarities in the composition of lunar and Earth rocks; Earth's spin and the moon's orbit having similar orientations; the high combined angular momentum of the two bodies; and the existence of debris disks elsewhere in our solar system. But exactly how the cosmic collision played out is up for debate. The conventional hypothesis suggests that as Theia crashed into Earth, the planet-busting impact shattered Theia into millions of pieces, reducing it to floating rubble. Theia's broken remains, along with some vaporized rocks and gas ripped from our young planet's mantle, slowly mingled into a disk around which the molten sphere of the moon coalesced and cooled over millions of years. Yet some parts of the picture remain elusive. One outstanding question is why, if the moon is mostly made out of Theia, many of its rocks bear striking similarities to those found on Earth. Some scientists have suggested that more of Earth's vaporized rocks went into creating the moon than Theia's pulverized remnants did, but this idea presents its own problems, such as why other models suggest that a moon made mostly of disintegrated Earth rocks would have a vastly different orbit than the one we see today. 
To investigate different possible scenarios for moon formation following the collision, the new study's authors turned to a computer program called SPH With Inter-dependent Fine-grained Tasking (SWIFT), which is designed to closely simulate the complex and ever-changing web of gravitational and hydrodynamic forces that act upon large amounts of matter. Doing so accurately is no simple computational task, so the scientists used a supercomputer to run the program: a system nicknamed COSMA (short for "cosmology machine") at Durham University's Distributed Research Utilising Advanced Computing facility (DiRAC). By using COSMA to simulate hundreds of Earth-Theia collisions with different angles, spins and speeds, the lunar sleuths were able to model the aftermath of the astronomical crack-up at higher resolutions than ever before. Resolutions in these simulations are set by the number of particles the simulation uses. According to Kegerreis, for gigantic impacts the standard simulation resolution is usually between 100,000 and 1 million particles, but in the new study he and his fellow researchers were able to model up to 100 million particles (a toy sketch of how such particle methods estimate density follows the article below). "With a higher resolution we can study more detail — much like how a larger telescope lets you take higher resolution images of distant planets or galaxies to discover new details," Kegerreis said. "Secondly, perhaps even more importantly, using too low a resolution in a simulation can give you misleading or even simply wrong answers," he added. "You might imagine that if you build a model car out of toy blocks to simulate how the car might break in a crash, then if you use only a few dozen blocks, it might just split perfectly down the middle. But with a few thousand or million, then you might start to get it crumpling and breaking in a more realistic way." The higher-resolution simulation left the researchers with a moon which formed in a matter of hours from the ejected chunks of Earth and the shattered pieces of Theia, offering a single-stage formation theory that provides a clean and elegant answer to the moon's visible properties, such as its wide, tilted orbit; its partially molten interior; and its thin crust. However, the researchers will have to examine rock and dust samples excavated from deep beneath the moon's surface — an objective of NASA's future Artemis missions — before they can confirm how mixed its mantle could be. "Even more samples from the surface of the moon could be extremely helpful for making new and more confident discoveries about the moon's composition and evolution, which we can then trace back to model simulations like ours," Kegerreis said. "Missions and studies like these and many others steadily help us to rule out more possibilities and narrow in on the actual history of both the moon and Earth, and to learn more about how planets form throughout and beyond our solar system." Such investigations could also shed light on how Earth took shape and became a life-harboring planet. "The more we learn about how the Moon came to be, the more we discover about the evolution of our own Earth," study co-author Vincent Eke, an associate professor of physics at Durham University, said in a statement. "Their histories are intertwined — and could be echoed in the stories of other planets changed by similar or very different collisions." Ben Turner is a U.K. based staff writer at Live Science. He covers physics and astronomy, among other topics like tech and climate change. 
He graduated from University College London with a degree in particle physics before training as a journalist. When he's not writing, Ben enjoys reading literature, playing the guitar and embarrassing himself with chess.
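As flagged above, here is a toy illustration of what the "particles" in simulations like SWIFT represent. Each particle carries mass, and the local density is estimated by smoothing over neighbours with a kernel; this pedagogical sketch uses the standard cubic-spline SPH kernel and a brute-force O(N^2) sum, nothing like SWIFT's optimised neighbour search.

```python
import numpy as np

def w_cubic_spline(r, h):
    """Standard 3D cubic-spline SPH kernel, normalised by 1/(pi*h^3)."""
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def sph_density(positions, masses, h):
    """Density at each particle: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return (masses[None, :] * w_cubic_spline(r, h)).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.normal(0.0, 1.0, size=(500, 3))    # a toy blob of 500 particles
rho = sph_density(pos, np.ones(500), h=0.4)
print(f"density peaks in the blob's centre: max rho = {rho.max():.1f}")
```

More particles allow a smaller smoothing length, which resolves smaller clumps and filaments; that is the resolution effect Kegerreis describes with the toy-block car.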
Cosmology & The Universe
Scientists leading the European Space Agency’s Euclid space telescope mission have just released its breathtaking first science images, taken only four months after launch. These new space photos reveal spectacular snapshots of the vast structure of the cosmos, including a massive galaxy cluster in the Perseus constellation, an object nicknamed the “Hidden Galaxy,” an irregularly structured galaxy, a globular cluster packed with myriad stars, and the gorgeous Horsehead Nebula. Euclid mission leaders announced the first images today at an event at ESA Space Operations Centre in Darmstadt, Germany. Carole Mundell, head of ESA’s science program, introduced the images. “Today is an iconic day. We’ve reached all of the engineering milestones of our mission and we’re finally able to enter into our science mission,” she said. Mundell and her colleagues emphasized the space telescope’s potential for studying the large-scale structure of the universe. “I’m looking forward to the insights Euclid will give us, especially to understand what dark matter and dark energy really are,” said Josef Aschbacher, ESA Director General. “It’s a big achievement. The first images are wonderful. They are up to expectations in terms of quality and precision, so we are very hopeful for the rest of the mission,” said Francis Bernardeau, the Euclid Consortium deputy lead and an astrophysicist at CEA Paris-Saclay, speaking to WIRED the day before the event. These images are just the beginning of Euclid’s mission: By the end of this decade, the telescope will survey billions of galaxies like these, parsing over 15,000 square degrees—about one third of the sky—and looking back through 10 billion years of cosmic time. Together, these images will create unprecedented three-dimensional views spanning most of the life of the universe. This new generation of space photos will also demonstrate the sensitivity of Euclid’s two instruments, which simultaneously photograph objects at optical and near-infrared wavelengths. They also measure objects’ spectra, or graphs showing the intensity of light emitted at a range of wavelengths. These measurements indicate an object’s distance and chemical composition, among other things.
Galaxies in the Perseus cluster, with tens of thousands of galaxies visible behind them. Photograph: ESA
The details in the Perseus cluster image (shown above) demonstrate Euclid’s power and potential. The cluster’s gravity—and that of invisible dark matter particles—binds about 200 galaxies together. It’s also part of a larger network, a supercluster of around 1,000 galaxies swirling around its outskirts, sort of an extended galactic family. Tens of thousands of additional galaxies lurk in the background of the image, showing how Euclid can survey many objects at once.
A large spiral galaxy known as IC 342, hidden behind the dusty disk of the Milky Way. Photograph: ESA
Euclid can capture details of individual objects, like the spiral galaxy IC 342 (shown above), also known as Caldwell 5 or the “Hidden Galaxy,” because it’s difficult to see with optical telescopes. The fact that this galaxy’s stars and dust are so clear to Euclid shows the benefits of an infrared view: It allows the faint galaxy to be seen even though it’s hiding behind the equator of the Milky Way, whose dust blocks visible light.
An image of the edge of a small, irregularly shaped galaxy called NGC 6822. 
Photograph: ESA
While that galaxy and our own display fancy spiral arms, most galaxies are actually much smaller and irregularly structured, including the one in the image above, known as NGC 6822. Over billions of years, dwarf galaxies like this densely packed one can become the building blocks of larger galaxies.
The globular cluster NGC 6397 contains hundreds of thousands of stars, young and old. Photograph: ESA
Globular clusters like the one in the image above, called NGC 6397, are typically groups of hundreds of thousands of stars bound by gravity. But unlike galaxies, they lack dark matter. This is the second-closest globular cluster to Earth, about 7,800 light-years away. Everyone loves the Horsehead Nebula, also known as Barnard 33, which is part of the constellation Orion. (NASA’s James Webb Space Telescope and Hubble have been used to image the exquisite stellar nursery as well.) Two images from Euclid are shown below.
The iconic Horsehead Nebula, a stellar nursery. Photograph: ESA
A closeup of the cloud of gas and dust from which the Horsehead Nebula emerges. Photograph: ESA
As Euclid’s science mission gets underway, more images like these will aid astrophysicists’ efforts to better understand how galaxies form and evolve, to study how fast the universe is expanding, and to investigate the mysterious nature of dark matter and dark energy, which can only be probed indirectly through their gravitational and cosmological effects on celestial bodies. Euclid’s wide field of view sets it apart from the JWST, whose strengths lie in capturing deeper and more focused images of individual objects rather than of huge swaths of the sky. Euclid will also enable astrophysicists to develop larger and higher-resolution maps of dark matter structures than the ESA’s Planck space telescope. Astrophysicists will study dark matter with Euclid’s galaxy catalogs using statistical tools and a phenomenon known as weak gravitational lensing. That involves investigating how massive clumps of foreground dark matter deflect the light we see from background galaxies—slightly, but predictably, distorting their shapes. Michael Seiffert, a Jet Propulsion Laboratory astrophysicist and project scientist for NASA’s contribution to the Euclid mission, looks forward to examining those galaxies lensed by dark matter. Most of those distant, distorted galaxies merely appear as tiny smudges, less clear than today’s new image of IC 342. But together their impact on dark matter physics will be important, he says: “We’re overwhelmed by the sheer scale of the data, having the fine angular resolution and also the wide field of view. I think we’re going to be drowning in data for years to come.” That resolution is three to five times sharper than what can be achieved by telescopes on the ground, he points out. And while the image resolution is lower than the JWST’s, Euclid surveys large areas 100 times faster. Like the JWST, Euclid glimpses celestial objects from a spot called the L2 Lagrange point at a distance of about 1.5 million kilometers beyond Earth’s orbit. After the probe reached its destination in late July, engineering teams at ESA’s mission control conducted a long list of tests, calibrating the instruments and ensuring that they will work as planned. On July 31, they released raw test images of fields of galaxies, which hinted at what’s to come. Then, in August, they encountered issues with the telescope’s fine guidance sensors, which are designed to deliver a precise and stable pointing direction. 
Those optical sensors are meant to image the sky on the sides of the visible-wavelength instrument’s field of view, but when cosmic rays hit the detectors, they intermittently lost track of guide stars, celestial landmarks used for imaging and navigating. After updating and uploading new flight software, the engineering team determined that they had the problem under control. The issue slightly delayed the team’s progress, but they do not anticipate any further effects on the mission, Bernardeau says. For now, the Euclid team is continuing their instrument calibration work, and then the telescope’s science mission will begin in earnest in January. Next year they’ll release data from the first 50 square degrees of surveying, followed by the first year’s data. By that time, they’ll finally have scanned enough of the sky to release not just images, but new cosmology research.
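The weak-lensing idea in the article comes down to averaging: each galaxy's intrinsic shape is large random noise on top of a tiny coherent shear, so the signal only emerges from enormous samples. A toy sketch with illustrative numbers (a true shear of 0.01 and an intrinsic ellipticity scatter of 0.3; Euclid's actual pipeline is far more involved):

```python
import numpy as np

rng = np.random.default_rng(42)
true_shear = 0.01          # tiny coherent distortion from dark matter
shape_noise = 0.3          # intrinsic galaxy ellipticity scatter

for n_gal in (100, 10_000, 1_000_000):
    shapes = true_shear + rng.normal(0.0, shape_noise, n_gal)
    print(f"N = {n_gal:>9,}: estimated shear = {shapes.mean():+.4f}")
```

The error on the mean shrinks as 1/sqrt(N), which is why a percent-level shear measurement needs the billions of galaxy shapes Euclid is built to collect.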
Cosmology & The Universe
The image, known as “Webb’s First Deep Field,” will be the deepest and highest-resolution view of the universe ever captured. Biden is scheduled to release it on Monday.
July 10, 2022, 10:00 PM UTC
President Joe Biden will unveil the much-anticipated first full-color image from NASA's James Webb Space Telescope on Monday, agency officials confirmed. The image, known as "Webb's First Deep Field," will be the deepest and highest-resolution view of the universe ever captured, showing myriad galaxies as they appeared up to 13 billion years in the past, according to NASA. The agency and its partners, the European Space Agency and the Canadian Space Agency, are set to release a separate batch of full-color images from the Webb telescope on Tuesday, but Biden, Vice President Kamala Harris and the public will get a sneak peek a day early. NASA will brief the president and the vice president on Monday, agency officials said, and the first image will be revealed at an event at 5 p.m. ET at the White House. The $10 billion James Webb Space Telescope is humanity’s largest and most powerful space telescope, and experts have said it could revolutionize our understanding of the cosmos. After the White House event, NASA will unveil more images in an event streamed live Tuesday at 10:30 a.m. ET. NASA officials said that batch will include the Webb telescope’s first spectrum of an exoplanet, showing light emitted at different wavelengths from a planet in another star system. The images could offer new insights into the atmospheres and chemical makeups of other exoplanets in the cosmos. Some images included in the Tuesday release will show how galaxies interact and grow, and others will depict the life cycle of stars, from the emergence of new ones to violent stellar deaths. The Webb telescope launched into space on Dec. 25. The tennis-court-size observatory is able to peer deeper into the cosmos and in greater detail than any telescope that has come before it. Denise Chow is a reporter for NBC News Science focused on general science and climate change.
Cosmology & The Universe
Paul Sutter is an astrophysicist at The Ohio State University and the chief scientist at COSI Science Center. Sutter is also host of Ask a Spaceman, We Don't Planet and COSI Science Now. Sutter contributed this article to Space.com's Expert Voices: Op-Ed & Insights. The Earth is mediocre, but not in the way you might think (the food is too bland, pop music is soulless, architecture is boring, etc.). In the cosmological sense, the Earth does not enjoy a special vantage point in the universe. There's simply nothing special about our particular location. We orbit a typical star, in a typical spiral galaxy, in a typical branch of a typical supercluster. Our planet and its evolutionary leap to intelligent beings could've occurred in any old spot around the cosmos and we ought to have come to the same conclusions about the big science questions: the Big Bang, dark energy, the cosmic web, the works. [The Universe: Big Bang to Now in 10 Easy Steps] Look this way This "principle of mediocrity" (also sometimes dubbed the Copernican Principle in an effort to salvage our ego) is baked right into the very mathematics that cosmologists use to understand and model the universe: general relativity. The equations of GR are…complex, to say the least, so to make any headway at all physicists must make some simplifying assumptions. When it comes to cosmology, two assumptions make the difference between a world of math-pain and math-paradise: that our universe is homogenous and isotropic. Homogenous means that, at big enough scales, one patch of the universe is roughly like any other patch. Obviously you have to go to beefy enough scales to make this work (for example, the Earth is very different than the sun), and for our universe that happens at around 200 million light-years. Isotropic means that the cosmos looks pretty much the same no matter what direction you look in — again, over sufficiently large distances. With these two assumptions in place, the math of GR is slightly less torturous and progress (i.e., generating testable hypotheses and confirming/refuting them with experiment) can be made. But being good little scientists, not only do astronomers test their hypotheses, they also test their assumptions. Is the universe really homogenous and isotropic? A ball of light The cosmic microwave background (CMB) is a relic of the very early universe. Released when the cosmos was barely 300,000 years old, that light has been traveling the great expanses of our universe for 13.8 billion years — until it collided with our telescopes. That light surrounds us on all sides, giving us an unprecedented look at the state of the infant universe, equivalent to a picture of you when you were a mere handful of hours old. What a wonderful testbed for our cosmological assumptions. We can ask, "Is the universe isotropic?" to the CMB, and it can answer. The response: Yes! The CMB is perfectly uniform all across the sky to one part in 10,000. Mission accomplished. But the devil, as they say, is in the details. [Cosmic Microwave Background: Big Bang Relic Explained (Infographic)] Pole position There are subtle, tiny differences in the temperature of the CMB light from place to place. Those minuscule variations offer a wealth of information on many pressing cosmological questions, like the geometry of the universe, the amount of dark matter and the growth of the largest structures. 
To mathematically quantify the statistical properties of the CMB's characteristic bumps and wiggles, cosmologists turn to a technique called multipole expansion. This expansion looks at ever-smaller patches of the sky and analyzes the variations at only that scale. The first scale is the average CMB temperature across the whole sky — the monopole. The second is the dipole, or two hemispheres. The next is a quadrupole (the sky cut into quadrants). Then octupole (eighths), hexadecapole (sixteenths), dotriacontapole (thirty-seconds) and so on all the way up to…. Well, we kinda run out of names, so just go with 4,000-pole. The total CMB signal is the sum of all these contributions at various sizes, and in an isotropic universe these should all have random orientations. The dipole's hot patch may be over here, but the octupole should be directed over there, with no connection between them. And it turns out they are! From the 2-pole to the 4,000-pole, the European Space Agency's Planck mission (which observed the CMB from 2009 to 2013) verified that all the multipoles point in all sorts of random directions. Except…. A curious coincidence Except for the quadrupole and octupole, which are just a few degrees away from each other. This coincidence was first noted by NASA's early WMAP mission, but many dismissed it as a statistical fluke that would surely go away with better measurements. It didn't go away with better measurements. And it gets worse. It seems that the CMB is slightly cooler when viewed through the "top half" of our solar system, and slightly warmer on the opposite side. I'm not talking much; just a handful of microKelvin difference, but it's measurable and definitely there. Plus, this peculiar relationship to our solar system is aligned with the quadrupole and octupole. Around the axis That's odd. It's one thing for two of the multipoles to be aligned — maybe that's just random coincidence — but it's another for them to be associated with our solar system. Hence the nickname "Axis of Evil," a tongue-in-cheek reference to President George W. Bush's labeling of Iran, Iraq, and North Korea in 2002. What's going on? The CMB shouldn't give two photons about our solar system — it was generated before the sun was a twinkle in the Milky Way's eye. And we can't find any simple astrophysical explanation, like a random cloud of dust in our southern end, that might interfere with the pristine cosmological signal in this odd way. Is it really just coincidence? A chance alignment that we're conditioned to find because of our pattern-loving brains? Or does it seductively point the way to new and revolutionary physics? Or maybe we just screwed something up with the measurements? At this early stage, it's tough to say. There aren't a lot of data, and it's easy to get excited. We'll just have to wait and see; eventually the universe will….wait for it…point us in the right direction. Learn more by listening to the episode "The Axis of Evil" on the Ask A Spaceman podcast, available on iTunes and on the web at http://www.askaspaceman.com. Thanks to @Censored_No_More, Peter B., Hayward Z. for the questions that led to this piece! Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and facebook.com/PaulMattSutter.
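For readers who want to see what a multipole decomposition looks like in practice, here is a minimal Python sketch using the healpy package (an assumption; the package is not mentioned in the piece). It simulates one isotropic sky with an illustrative flat spectrum and pulls out the quadrupole and octupole coefficients, the two terms whose alignment is at issue.

```python
import numpy as np
import healpy as hp   # assumes the healpy package is installed

lmax, nside = 16, 32
cl = np.full(lmax + 1, 1e-10)   # flat, illustrative spectrum (not real CMB data)
cl[0] = cl[1] = 0.0             # drop the monopole and dipole

sky = hp.synfast(cl, nside, lmax=lmax)   # one random, isotropic realisation of the sky
alm = hp.map2alm(sky, lmax=lmax)         # spherical-harmonic coefficients a_lm
cl_hat = hp.anafast(sky, lmax=lmax)      # measured power at each multipole l

# The quadrupole (l=2) and octupole (l=3) are the terms whose alignment is at issue
for ell in (2, 3):
    idx = [hp.Alm.getidx(lmax, ell, m) for m in range(ell + 1)]
    print(f"l={ell}: C_l = {cl_hat[ell]:.3e}, |a_lm| = {np.round(np.abs(alm[idx]), 6)}")
```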
Cosmology & The Universe
Some stellar objects spin hundreds of times per second. What mechanism initializes this phenomenon and how is it maintained with such precision over long periods of time? Steve Weirich Portland, Oregon The first part of your question — how stellar objects reach millisecond spin periods — involves two concepts. First, the object needs to be small, since no material can move faster than the speed of light. Consider a point on the equator of a rapidly spinning object. In a millisecond, it travels once around, or about 6.3 times the radius. For that point to move at sub-light-speed, the radius must be less than 30 miles (48 kilometers). Second, the object must be dense with strong surface gravity, since gravity generally holds astrophysical objects together. Without strong equatorial gravity, the object would simply fly apart if it spins too fast. Of stellar-mass objects, only neutron stars and black holes can satisfy those requirements. Among white dwarfs, which are less dense than a neutron star or black hole, the fastest spin period is about a second. Other hypothesized objects (e.g., quark stars) might satisfy these constraints, but there is little evidence they exist. How do astrophysical objects start spinning so fast? On paper, it can happen as a slowly spinning star collapses to a neutron star or black hole; its spin speeds up due to conservation of angular momentum, like an ice skater pulling in their arms during a spin to twirl faster. But in practice, it seems that it usually takes accretion of additional matter to reach millisecond periods. Imagine a star orbiting a neutron star or black hole, transferring matter through an accretion disk. The disk material rotates faster and faster as it heads in toward the neutron star/black hole. As the material drops down the last bit, it adds its angular momentum to the neutron star/black hole, spinning it up. Adding a total of about a tenth of a solar mass can bring a neutron star to millisecond periods. Finally, how is the spin maintained? Well, we now have a great flywheel (the spinning neutron star/black hole) on an essentially frictionless bearing (the vacuum of space). However, millisecond pulsars (which are neutron stars) connect to the universe via a relatively weak magnetic field, which produces the lighthouselike pulsed emission we see. The magnetic field generates a small amount of “friction” that slows the pulsar’s spin in a highly predictable way. In the best case, one can predict the change in pulse arrival time a decade in advance with sub-microsecond precision. Such precision enables all sorts of neat experiments, from the measurements of pulsar orbits to searches for ultra-long-wavelength gravitational waves. Roger W. Romani Professor, Department of Physics/Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, California
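The two back-of-envelope limits in that answer are easy to reproduce. The Python sketch below checks the light-speed bound on the radius for a one-millisecond spin and the idealized spin-up from angular-momentum conservation; the stellar radius and rotation period used in the collapse estimate are illustrative placeholders, and, as noted above, real systems usually need accretion as well.

```python
import numpy as np

c = 2.998e8                 # speed of light, m/s
P = 1e-3                    # target spin period: one millisecond

# Light-speed bound: the equatorial speed 2*pi*R/P must stay below c,
# so R < c*P/(2*pi) -- the "about 6.3 times the radius per turn" argument above.
R_max = c * P / (2 * np.pi)
print(f"maximum radius for a {P*1e3:.0f} ms spin: {R_max/1e3:.0f} km")   # ~48 km

# Spin-up by collapse, via conservation of angular momentum (I ~ M R^2):
# P_new ~ P_old * (R_new / R_old)^2 for a fixed mass.
R_star, P_star = 7.0e8, 30 * 86400   # Sun-like radius (m) and a ~30-day rotation, illustrative
R_ns = 1.2e4                         # ~12 km neutron star, illustrative
P_ns = P_star * (R_ns / R_star) ** 2
print(f"idealised collapse alone gives P ~ {P_ns*1e3:.1f} ms")
```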
Cosmology & The Universe
The expansion of the universe could be a mirage, a potentially controversial new study suggests. This rethinking of the cosmos also suggests solutions for the puzzles of dark energy and dark matter, which scientists believe account for around 95% of the universe's total energy and matter but remain shrouded in mystery. Scientists know the universe is expanding ever faster because of redshift, the stretching of light's wavelength towards the redder end of the spectrum as the object emitting it moves away from us. Distant galaxies have a higher redshift than those nearer to us, suggesting accelerating expansion. This accelerating expansion is captured by a term known as the cosmological constant, or lambda, a number that Einstein inserted into his theory of relativity as a "fudge factor" in order to keep the universe flat and constant. Einstein later called it his "greatest mistake" after astronomer Edwin Hubble found the universe was, in fact, expanding, negating the need for such a constant. Years later, however, when scientists found the universe's expansion wasn't constant, but was actually accelerating, they resurrected the cosmological constant to define this acceleration. The cosmological constant has been a headache for cosmologists because predictions of its value made by particle physics differ from actual observations by 120 orders of magnitude. The cosmological constant has therefore been described as "the worst prediction in the history of physics." Cosmologists often try to resolve the discrepancy between the different values of lambda by proposing new particles or physical forces, but Lucas Lombriser, a theoretical physicist at the University of Geneva and the author of the new study, tackles it by reconceptualizing what's already there. "In this work, we put on a new pair of glasses to look at the cosmos and its unsolved puzzles by performing a mathematical transformation of the physical laws that govern it," Lombriser told Live Science via email. In Lombriser's mathematical interpretation, the universe isn't expanding but is flat and static, as Einstein once believed. The effects we observe that point to expansion are instead explained by the evolution of particle masses — such as protons and electrons — over time. In this picture, these particles arise from a field that permeates space-time. The cosmological constant is set by the field's mass and assumes a natural value that is of the same scale as the physical constants in the theory, rather than being several orders of magnitude different as is the case in conventional models. And because this field fluctuates, the masses of the particles it gives birth to also fluctuate. The cosmological constant still varies with time, but in this model that variation is due to changing particle mass over time, not the expansion of the universe. In the model, these field fluctuations result in larger redshifts for distant galaxy clusters than traditional cosmological models predict. "I was surprised that the cosmological constant problem simply seems to disappear in this new perspective on the cosmos," Lombriser said. A recipe for the dark universe Lombriser's new framework also tackles some of cosmology's other pressing problems, including the nature of dark matter. This invisible material outweighs ordinary matter by a ratio of about 5 to 1, but remains mysterious because it doesn't interact with light. Lombriser suggested that fluctuations in the field could also behave like a so-called axion field, with axions being hypothetical particles that are one of the suggested candidates for dark matter.
These fluctuations could also do away with dark energy, the hypothetical force stretching the fabric of space and thus driving galaxies apart faster and faster. In this model, the effect of dark energy, according to Lombriser, would be explained by particle masses taking a different evolutionary path at later times in the universe. In this picture "there is, in principle, no need for dark energy," Lombriser added. Post-doctoral researcher at the Universidad ECCI, Bogotá, Colombia, Luz Ángela García, was impressed with Lombriser's new interpretation and how many problems it resolves. "The paper is pretty interesting, and it provides an unusual outcome for multiple problems in cosmology," García, who was not involved in the research, told Live Science. "The theory provides an outlet for the current tensions in cosmology." However, García urged caution in assessing the paper's findings, saying it contains elements in its theoretical model that likely can't be tested observationally, at least in the near future. Live Science newsletter Stay up to date on the latest science news by signing up for our Essentials newsletter. Robert Lea is a science journalist in the U.K. who specializes in science, space, physics, astronomy, astrophysics, cosmology, quantum mechanics and technology. Rob's articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.’s Open University
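As a reminder of the redshift bookkeeping that the reinterpretation starts from, here is a minimal Python sketch; the observed wavelength is an illustrative value, not a measurement from the study.

```python
# The standard redshift bookkeeping that the study reinterprets.
# Example: the hydrogen-alpha line (rest wavelength 656.3 nm) observed at 800 nm
# in a distant galaxy's spectrum (an illustrative observed value, not a real target).
lam_emit = 656.3   # nm, laboratory (rest-frame) wavelength
lam_obs = 800.0    # nm, observed wavelength

z = (lam_obs - lam_emit) / lam_emit
print(f"redshift z = {z:.3f}")

# In the conventional expanding-universe reading, 1 + z is the factor by which
# lengths in the universe have stretched since the light was emitted.
print(f"scale factor then / now = {1 / (1 + z):.3f}")
```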
Cosmology & The Universe
Black holes potentially have an even larger influence on the galaxies around them than we thought. And the Stratospheric Observatory for Infrared Astronomy (SOFIA) provided a new way to look at their impact. Researchers used SOFIA to measure the magnetic fields of nine active galactic nuclei to study the origin of their radio loudness (brightness). They found that with the help of magnetic fields, black holes can work through larger distances and have a larger influence on their galaxy than previously thought. Active galactic nuclei (AGN) — the central region of a galaxy, which houses its supermassive black hole — are classified by how strong of a jet they produce, shooting matter away at near light speed. Since the jets are mostly visible at radio wavelengths, they are described as either radio loud or radio quiet. "We see that some AGN have very powerful radio jets and some don't, even though all AGN are intrinsically the same — they all have a supermassive black hole in the center and accrete mass," said Enrique Lopez-Rodriguez, a research scientist at Stanford University's Kavli Institute for Particle Astrophysics and Cosmology and lead author on the new SOFIA finding. "We don't understand why some of them are so powerful, and some of them are not." Now, using SOFIA, Lopez-Rodriguez and his team have found that the polarization of infrared light from AGN also increases with their radio loudness, providing a new way to study black hole characteristics. Motivated by the 2018 SOFIA discovery that the infrared light from the strongest known radio-loud AGN, Cygnus A, was highly polarized, the researchers developed a follow-up observation program with SOFIA to determine whether there's a relationship between infrared polarization and radio loudness, and if so, why. They looked at the magnetic fields of a total of nine AGN, four of them radio loud and five radio quiet. From SOFIA observations of light polarization, astronomers can deduce the structure of the magnetic field in the region. In the AGN sample Lopez-Rodriguez and his team studied, these polarizations show that in radio-loud AGN — AGN with strong jets — there's a donut-shaped magnetic field perpendicular to the jets, along the equator of the AGN. That only radio-loud AGN have such a strong toroidal magnetic field indicates that the field is helping to transfer energy inward, feeding the black hole with matter coming from the host galaxy. The stronger the jets, the stronger the magnetic field, and the more energy there is in the system. The researchers were surprised by the strength of the result. "We were hoping for it, but we weren't expecting such a nice correlation," Lopez-Rodriguez said. "There's so much physics behind it that we don't understand, and future hydromagnetic models are required." Though a lot of science behind these objects remains unexplained, the result implies that black holes are potentially affecting galaxy evolution and jet production quite a bit more than astronomers previously realized. While astronomers typically consider gravity as the only force influencing supermassive black holes, this work shows that magnetic fields can aid in bridging the interface between black holes and matter in their host galaxy. With the help of these magnetic fields, black holes can impact not only the matter immediately around them, but can also work at even larger distances within the galaxy. ABOUT SOFIA SOFIA was a joint project of NASA and the German Space Agency at DLR.
DLR provided the telescope, scheduled aircraft maintenance, and other support for the mission. NASA's Ames Research Center in California's Silicon Valley managed the SOFIA program, science, and mission operations in cooperation with the Universities Space Research Association, headquartered in Columbia, Maryland, and the German SOFIA Institute at the University of Stuttgart. The aircraft was maintained and operated by NASA's Armstrong Flight Research Center Building 703, in Palmdale, California. SOFIA achieved full operational capability in 2014 and concluded its final science flight on Sept. 29, 2022. About USRA Founded in 1969, under the auspices of the National Academy of Sciences at the request of the U.S. Government, the Universities Space Research Association (USRA) is a nonprofit corporation chartered to advance space-related science, technology and engineering. USRA operates scientific institutes and facilities and conducts other major research and educational programs. USRA engages the university community and employs in-house scientific leadership, innovative research and development, and project management expertise.
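The step from polarization measurements to magnetic-field geometry follows standard Stokes-parameter relations. The Python sketch below shows that conversion for a single pixel with made-up values; the 90-degree rotation assumes thermal emission from magnetically aligned dust grains, which is the usual reading for far-infrared polarimetry of this kind rather than something stated in the release.

```python
import numpy as np

# Toy Stokes parameters for a single sky pixel (made-up values, not SOFIA data)
stokes_I, stokes_Q, stokes_U = 1.00, 0.012, -0.020

p = np.hypot(stokes_Q, stokes_U) / stokes_I              # polarization fraction
chi = 0.5 * np.degrees(np.arctan2(stokes_U, stokes_Q))   # polarization angle, degrees

# For thermal emission from magnetically aligned dust grains, the plane-of-sky
# magnetic field is usually taken to be perpendicular to the measured polarization.
field_angle = (chi + 90.0) % 180.0

print(f"p = {p:.1%}, polarization angle = {chi:.1f} deg, inferred field angle = {field_angle:.1f} deg")
```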
Cosmology & The Universe
NASA's highly sensitive James Webb Space Telescope has captured an extremely detailed image of thousands of never-before-seen young stars in a region known as the Tarantula Nebula.Located in the Large Magellanic Cloud, which is around 160,000 light years from Earth, the nebula, also known as stellar nursery 30 Doradus, is a region of very active star formation, according to NASA's Jet Propulsion Laboratory.NASA's mosaic image of the nebula covers an area of 340 light-years. Viewed with Webb's Near-Infrared Camera (NIRCam), the region resembles a burrowing tarantula's home. But it was actually named the Tarantula Nebula for its dusty filaments captured in previous telescope images.In this mosaic image stretching 340 light-years across, Webb's Near-Infrared Camera displays the Tarantula Nebula star-forming region in a new light, including tens of thousands of never-before-seen young stars that were previously shrouded in cosmic dust. / Credit: Credit: NASA, ESA, CSA, STScI, Webb ERO Production TeamThe nebula is home to the hottest, most massive stars known to exist. And it's of major interest to astronomers because, unlike in our Milky Way, it is producing new stars at a "furious rate."Studying the nebula also offers astronomers a unique insight into our universe's past and how stars formed in the deep cosmic past. Though close to us, the chemical make-up of the nebula is similar to the gigantic, star-forming regions from when the universe was only a few billion years old, and star formation was at its peak — a period known as "cosmic noon."The sparkling blue stars seen in the image are responsible for creating the nebula's cavity — located right at the center of the NIRCam image — with their own radiation."Only the densest surrounding areas of the nebula resist erosion by these stars' powerful stellar winds, forming pillars that appear to point back toward the cluster," said NASA. These pillars contain young stars called "protostars," which form in cocoons of dust.Webb's NIRCam caught one very young star still gathering mass in a cloud of dust and gas."Astronomers previously thought this star might be a bit older and already in the process of clearing out a bubble around itself," NASA said. "However, NIRSpec showed that the star was only just beginning to emerge from its pillar and still maintained an insulating cloud of dust around itself. Without Webb's high-resolution spectra at infrared wavelengths, this episode of star formation in action could not have been revealed."The heart of the Tarantula Nebula as seen in mid-infrared light by the James Webb Space Telescope. / Credit: NASA, ESA, CSA, STScI, Webb ERO Production TeamNASA also used its Mid-Infrared Instrument (MIRI), which is capable of penetrating deeper into the cosmos than a telescope using visible light, to look at the nebula. The MIRI revealed a very different side of the celestial structure and a "previously unseen cosmic environment," NASA said."The hot stars fade, and the cooler gas and dust glow," NASA said. "Within the stellar nursery clouds, points of light indicate embedded protostars, still gaining mass."  
Webb, a joint project from NASA, the European Space Agency and the Canadian Space Agency, launched on Christmas Day last year, after more than 20 years of development, and by July it had begun delivering stunning new images of the cosmos. "Webb has already begun revealing a universe never seen before, and is only getting started on rewriting the stellar creation story," NASA said. Correction: This story has been updated to note that Webb launched on Christmas Day but took several more months to begin sending images.
Cosmology & The Universe
This week the James Webb Space Telescope made history, proving itself to be the most powerful space-based observatory humanity has ever built and revealing a tiny sliver of the vast universe around us in breathtaking detail. Astronomers the world over have been shown cheering, in floods of tears and lost for words. Astrobiologists like myself, who study the origins, evolution, distribution and future of life in the universe, are getting pretty excited too. By revealing images of galaxies from the dawn of time and chemical data of planetary atmospheres, the JWST has the power to help us answer one of humanity’s oldest questions: are we alone in the universe?The first spectacular image released was of the galaxy cluster SMACS 0723, known as Webb’s First Deep Field. This image covers just a patch of sky approximately the size of a grain of sand held at arm’s length by someone on the ground – and yet it is crowded with galaxies, literally thousands of them. Within each galaxy, there could be on average 100 billion stars, each with its own family of planets and moons orbiting them.Given the fact that in our solar system alone we have multiple habitable (Earth) or potentially habitable (Mars, Europa, Enceladus, Titan) worlds, then the odds of finding other planets or moons out there with the potential for hosting life as we know it have increased exponentially. The universe is probably littered with them.Using a different instrument called MIRI (Mid-Infrared Instrument) on the same view reveals even more about the character of these stars and galaxies . Some appear blue because of not having much dust and older stars, while other objects, probably galaxies, appear red because they are shrouded in dust. For me, the most exciting are the galaxies now coloured green. The green indicates that the dust in these galaxies includes a mix of hydrocarbons and other chemical compounds – the chemical building blocks of life. ]Galaxy cluster SMACS 0723 taken from Webb’s First Deep Field, the first infrared image from NASA’s James Webb Space Telescope, shows dust levels in galaxies indicated by the colours blue, red and green. Photograph: NASA/ReutersThe team has also released an infrared spectrum taken with the Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph (FGS-NIRISS) instrument, which analysed starlight as it passed through the atmosphere of Wasp-96b, a hot, Jupiter-like planet 1,150 light years away, orbiting closer to its star than Mercury does to our Sun. This bunch of wavy lines revealed to us the presence of water vapour in its atmosphere (the planet is way too hot for liquid water). This is a sensational result, and now the detective work really begins as we search the smaller, rocky planets in the hope of finding worlds where conditions are suitable for life.So how will we do this? We look for Earth-like atmospheres, ones dominated by nitrogen, carbon dioxide and water, as an Earth-like atmosphere is, by definition, our gold standard of habitability. But Earth’s atmosphere over the history of life hasn’t always been composed this way, and we are sure other atmospheric mixtures can create habitable worlds. We call these “habitability markers”, and they also include the glint of light reflecting off of oceans and the effects of vegetation.Astrobiologists are also looking to find biosignature gases in these distant exoplanetary atmospheres – that is, gases indicative of biological activity. 
For example, oxygen is a dominant gas in Earth’s modern atmosphere, and most of it is produced from photosynthesis. Also, the dominant source of methane in our atmosphere is produced via methanogenesis, an ancient form of metabolism for some micro-organisms. I should say here that identifying unambiguous signatures of life isn’t going to be easy. Many have abiotic (non-life) sources as well as biological ones; they can be produced by volcanoes, water-rock interactions or even human activity.At least for now, only those biosignatures with a global, planetary impact will probably be detectable. However, the detection of these habitability markers or biosignature gases using the JWST will be enough of an enticement to make us pause and more deeply explore the worlds in question. And that is more than exciting enough for now.The JWST has already, in just a few days, transformed the way we look at the universe and will in the future open our eyes to the chemical and, if we are lucky, biological makeup of other worlds in it. Perhaps, we will finally get the proof that life in one form or another is universal, and, as I have always believed, that we have never actually been alone.
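The WASP-96b water detection described above comes from transmission spectroscopy, where the planet's atmosphere adds a small, wavelength-dependent extra depth to its transit. The Python sketch below runs the rough numbers for a generic hot Jupiter; every value is a representative placeholder, not the published WASP-96b solution.

```python
import numpy as np

# Rough transmission-spectroscopy arithmetic for a generic hot Jupiter.
# All values are representative placeholders, not a fit to WASP-96 b.
R_sun, R_jup, M_jup = 6.957e8, 7.149e7, 1.898e27   # SI units
k_B, m_H, G = 1.381e-23, 1.673e-27, 6.674e-11

R_s = 1.0 * R_sun        # host-star radius
R_p = 1.2 * R_jup        # planet radius
M_p = 0.5 * M_jup        # planet mass
T   = 1300.0             # equilibrium temperature, K
mu  = 2.3 * m_H          # mean molecular mass for a hydrogen-dominated atmosphere

g = G * M_p / R_p**2                 # surface gravity
depth = (R_p / R_s) ** 2             # geometric transit depth
H = k_B * T / (mu * g)               # atmospheric scale height
extra = 2 * R_p * H / R_s**2         # extra depth from ~1 scale height of atmosphere

print(f"transit depth ~ {depth:.2%}, scale height ~ {H/1e3:.0f} km")
print(f"one scale height adds ~ {extra*1e6:.0f} ppm to the transit depth")
```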
Cosmology & The Universe
The James Webb Space Telescope cost taxpayers $10 billion. For that considerable sum, we’ve recently been treated to some spectacular photos of the cosmos. Judging by the comments of my friends, the reaction to these views has been wonder, even if few know their scientific significance. But are the beautiful photos worth it? After all, this instrument cost 10 times the estimated value of the Mona Lisa. Or if you’re not an art fan, imagine the bill for 50,000 Lamborghinis. Surely someone could make a reasonable argument that — with all the troubles in the world today — we shouldn’t spend such a sum just to add new illustrations to college textbooks or to beautify the flanks of city buses. To which I would respond that future generations will be glad we spent it. It’s not that new astronomical knowledge will, by itself, improve the lives of our kids and grandkids. When, in 1543, Nicolaus Copernicus finally dismissed the 2,000-year-old idea that the Earth was the center of the solar system, the standard of living in Poland didn’t suddenly make a great leap forward. And yet Copernicus’ work demonstrated something of profound philosophical importance: Humans, as special as we are, occupy an unremarkable speck of dirt and dust in the universe. Our location is humdrum, and it’s likely that our talents are likewise unexceptional. We have good reason to be humble. Thanks to its special talents, the James Webb telescope could very well have a similar effect. Unlike the Hubble Space Telescope, the James Webb doesn’t see in visible light — the range of wavelengths to which our eyes are sensitive. The James Webb is an infrared telescope, which makes it useful in two important areas of astronomy. The first is understanding the history of the universe. How did the convulsion of the Big Bang 13 billion to 14 billion years ago lead to stars, galaxies, life and us? We are curious about this story in the same way that we want to know our family history. Because the universe is expanding, the light reaching us from distant objects is reddened. Consequently, if we want to see how the cosmos looked when it was still young — to understand the cosmos’ life history — it’s best to do so by using a telescope that is sensitive to deep red light. And that was much of the motivation behind building the James Webb telescope. Cosmology — unraveling the history of our universe — is astronomy’s No. 1 job. But if understanding how we came to be is the telescope’s “big picture” challenge, a second major interest is equally compelling: Does life exist beyond Earth? James Webb may answer this question by examining the atmospheres of planets orbiting distant stars and possibly finding tell-tale signs of biology such as oxygen or methane. Although a longer shot, it’s also possible that the James Webb could discover evidence of sophisticated beings. It might uncover massive constructions assembled by a highly advanced society: artifacts on the scale of solar systems that could be the handiwork of an intelligence thousands or millions of years more advanced than our own. This would, once again, show that humans are not the only game in town, and certainly not the best game. Are such things essential to know? Of course not.
For 10,000 generations humans had no idea how their species came to be. Yet they stumbled through their lives, finding occasional joy despite an ignorance as vast as the Canadian prairies. But anyone who can look at the stars on a summer night and not question what they are, or wonder if some of those heavenly lights illuminate populated worlds, well, that individual has lost the ability to marvel and to experience the satisfaction of understanding something about how nature works. Ignorance is not bliss. The Webb telescope's view of the Carina Nebula reveals previously invisible areas of star birth.NASA, ESA, CSA, STScIAnd while studying the sky doesn’t cure disease or generate significant economic activity, it has both inspired physics and served as a check on its theories. And physics very much does affect your lifestyle.At the beginning of the 20th century, astronomers noticed that the orbit of the planet Mercury differed slightly from what Newton’s physics predicted. A small discrepancy might seem to be of no consequence, like making an error of a dime on your taxes. But while classical physics didn’t precisely agree with Mercury’s behavior, Albert Einstein’s physics did. And his relativity theory — which may have seemed abstract and useless a century ago — is now an essential part of our lives, incorporated into everything from the Global Positioning System (GPS) to cellphones. For a half millennium, astronomy has been the muse of physics.We don’t know what mysteries the James Webb telescope will unveil. If we did, we wouldn’t have had to build it. But history tells us that every time we peer skyward with a new telescope, we find novel and frequently dramatic phenomena. And even aside from the eventual spinoff from these discoveries, learning the universe’s history and better knowing our place within it are gifts that no fleet of Lamborghinis can match.Seth ShostakDr. Seth Shostak is senior astronomer at the SETI Institute in Mountain View, California, and the host of the “Big Picture Science” podcast.
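The Mercury discrepancy mentioned above is the classic example: general relativity predicts an extra perihelion advance of roughly 43 arcseconds per century. A quick Python check of that number, using standard orbital values, is below.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
a = 5.791e10               # Mercury's semi-major axis, m
e = 0.2056                 # orbital eccentricity
P_orb = 87.969 * 86400     # orbital period, s

# General-relativistic perihelion advance per orbit, in radians
dphi = 6 * np.pi * G * M_sun / (a * (1 - e**2) * c**2)

orbits_per_century = 100 * 365.25 * 86400 / P_orb
arcsec_per_century = np.degrees(dphi) * 3600 * orbits_per_century
print(f"GR perihelion advance ~ {arcsec_per_century:.0f} arcseconds per century")   # ~43
```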
Cosmology & The Universe
ANU astrophysicist and cosmologist Dr Brad Tucker says the new images released by NASA from the James Webb Space Telescope provide a “snapshot of the true history of the universe”. “So, when we saw that first image released of all of these clusters in a galaxy, we are literally seeing it billions of years ago – it has taken billions of years to travel through the universe and land on the camera of the James Webb Space Telescope,” he told Sky News host Chris Smith. “The further back we look, the older we get, the different parts of the universe we can see and then hopefully we can see more of the universe that we haven’t been able to see previously.”
Cosmology & The Universe
Sign up for CNN’s Wonder Theory science newsletter. Explore the universe with news on fascinating discoveries, scientific advancements and more. CNN  —  For the first time, the Hubble Space Telescope has detected a lone object drifting through our Milky Way galaxy – the invisible, ghostly remains of a once radiant star. When stars massive enough to dwarf our sun die, they explode in a supernova and the remaining core is crushed by its own gravity, forming a black hole. Sometimes, the explosion may send the black hole into motion, hurtling across the galaxy like a pinball. By rights, there should be a lot of roving black holes known to scientists, but they are practically invisible in space and therefore very difficult to uncover. Astronomers believe that 100 million free-floating black holes roam our galaxy. Now, researchers believe they have detected such an object. The detection was made after dedicating six years to observations – and astronomers were even able to make a precise mass measurement of the extreme cosmic object. The black hole is 5,000 light years away, located in a spiral arm of the Milky Way galaxy called Carina-Sagittarius. This observation allowed the research team to estimate that the nearest isolated black hole in relation to Earth could be only 80 light-years away. But if black holes are essentially indistinguishable from the void of space, how did Hubble spot this one? The extremely strong gravitational field of black holes warp the space around them, creating conditions that can deflect and amplify starlight that aligns behind them. This phenomenon is known as gravitational lensing. Ground-based telescopes peer at the millions of stars dotting the center of the Milky Way and seek out this ephemeral brightening, signifying that a large object has passed between us and the star. Hubble is perfectly poised to follow up on these observations. Two different teams of researchers studied the observations to determine the mass of the object. Both studies have been accepted for publication in The Astrophysical Journal. One team, led by astronomer Kailash Sahu, a Hubble instrument scientist at the Space Telescope Science Institute in Baltimore, determined that the black hole weighed seven times the mass of our sun. The second team, led by doctoral student Casey Lam and Jessica Lu, associate professor of astronomy, both of University of California, Berkeley, arrived at a smaller mass range, between 1.6 and 4.4 times that of the sun. According to this estimate,the object could be a black hole or a neutron star. Neutron stars are the incredibly dense remnants of exploded stars. “Whatever it is, the object is the first dark stellar remnant discovered wandering through the galaxy, unaccompanied by another star,” Lam said in a statement. The black hole passed in front of a background star located 19,000 light-years away from Earth toward the center of the galaxy, amplifying its starlight for 270 days. Astronomers had a difficult time determining their measurement because there is another bright star very close to the one they observed brightening behind the black hole. “It’s like trying to measure the tiny motion of a firefly next to a bright light bulb,” Sahu said in a statement. 
“We had to meticulously subtract the light of the nearby bright star to precisely measure the deflection of the faint source.” Sahu’s team thinks the object may be traveling as quickly as 99,419 miles per hour (160,000 kilometers per hour), which is quicker than most stars in that part of the galaxy, while Lu and Lam’s team arrived at an estimate of 67,108 miles per hour (108,000 kilometers per hour). More data and observations from Hubble and further analysis could settle the argument over the identity of the object. Astronomers continue the needle-in-a-haystack search for more of these invisible oddities, which could help them better understand how stars evolve and die. “With microlensing, we’re able to probe these lonely, compact objects and weigh them. I think we have opened a new window onto these dark objects, which can’t be seen any other way,” Lu said.
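As a rough consistency check of the numbers quoted in the article (a roughly seven-solar-mass lens 5,000 light-years away, a source 19,000 light-years away, and a months-long brightening), here is a minimal Python sketch of the Einstein-radius geometry behind microlensing; it is a back-of-envelope estimate, not either team's published analysis.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
M_sun, ly = 1.989e30, 9.461e15      # kg, metres per light-year

M   = 7 * M_sun                      # lens mass, using the ~7-solar-mass estimate above
D_l = 5_000 * ly                     # lens distance quoted in the article
D_s = 19_000 * ly                    # background-star distance quoted in the article

# Angular Einstein radius of the lens (radians), then in milliarcseconds
theta_E = np.sqrt(4 * G * M / c**2 * (D_s - D_l) / (D_s * D_l))
mas = np.degrees(theta_E) * 3.6e6

# Rough event duration: time to cross one Einstein radius at ~160,000 km/h
v = 160_000 / 3.6                    # transverse speed in m/s
t_E = theta_E * D_l / v
print(f"Einstein radius ~ {mas:.1f} mas, crossing time ~ {t_E/86400:.0f} days")
```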
Cosmology & The Universe
This is where Perseverance found more organic matter than ever on Mars Sign up for CNN’s Wonder Theory science newsletter. Explore the universe with news on fascinating discoveries, scientific advancements and more. CNN  —  Investigating the site of an ancient river delta, the Perseverance rover has collected some of the most important samples yet on its mission to determine if life ever existed on Mars, according to NASA scientists. A few of the recently collected samples include organic matter, indicating that Jezero Crater, which likely once held a lake and the delta that emptied into it, had potentially habitable environments 3.5 billion years ago. “The rocks that we have been investigating on the delta have the highest concentration of organic matter that we have yet found on the mission,” said Ken Farley, Perseverance project scientist at the California Institute of Technology in Pasadena. The rover’s mission, which began on the red planet 18 months ago, includes looking for signs of ancient microbial life. Perseverance is collecting rock samples that could have preserved these telltale biosignatures. Currently, the rover contains 12 rock samples. A series of missions called Mars Sample Return will eventually take the collection back to Earth in the 2030s. The site of the delta makes Jezero Crater, which spans 28 miles (45 kilometers), of particularly high interest to NASA scientists. The fan-shaped geological feature, once present where a river converged with a lake, preserves layers of Martian history in sedimentary rock, which formed when particles fused together in this formerly water-filled environment. The rover investigated the crater floor and found evidence of igneous, or volcanic, rock. During its second campaign to study the delta over the past five months, Perseverance has found rich sedimentary rock layers that add more to the story of Mars’ ancient climate and environment. “The delta, with its diverse sedimentary rocks, contrasts beautifully with the igneous rocks – formed from crystallization of magma – discovered on the crater floor,” Farley said. “This juxtaposition provides us with a rich understanding of the geologic history after the crater formed and a diverse sample suite. For example, we found a sandstone that carries grains and rock fragments created far from Jezero Crater.” The mission team nicknamed one of the rocks that Perseverance sampled as Wildcat Ridge. The rock likely formed when mud and sand settled in a saltwater lake as it evaporated billions of years ago. The rover scraped away at the surface of the rock and analyzed it with an instrument known as the Scanning Habitable Environments with Raman & Luminescence for Organics & Chemicals, or SHERLOC. This rock-zapping laser functions as a fancy black light to uncover chemicals, minerals and organic matter, said Sunanda Sharma, SHERLOC scientist at NASA’s Jet Propulsion Laboratory in Pasadena. The instrument’s analysis revealed that the organic minerals are likely aromatics, or stable molecules of carbon and hydrogen, which are connected to sulfates. Sulfate minerals, often found sandwiched within the layers of sedimentary rocks, preserve information about the watery environments they formed in. Organic molecules are of interest on Mars because they represent the building blocks of life, such as carbon, hydrogen and oxygen, as well as nitrogen, phosphorous and sulfur. Not all organic molecules require life to form because some can be created through chemical processes. 
“While the detection of this class of organics alone does not mean that life was definitively there, this set of observations does start to look like some things that we’ve seen here on Earth,” Sharma said. “To put it simply, if this is a treasure hunt for potential signs of life on another planet, organic matter is a clue. And we’re getting stronger and stronger clues as we’re moving through our delta campaign.” Perseverance as well as the Curiosity rover has found organic matter before on Mars. But this time, the detection occurred in an area where life may have once existed. “In the distant past, the sand, mud, and salts that now make up the Wildcat Ridge sample were deposited under conditions where life could potentially have thrived,” Farley said. “The fact the organic matter was found in such a sedimentary rock – known for preserving fossils of ancient life here on Earth – is important. However, as capable as our instruments aboard Perseverance are, further conclusions regarding what is contained in the Wildcat Ridge sample will have to wait until it’s returned to Earth for in-depth study as part of the agency’s Mars Sample Return campaign.” The samples collected so far represent such a wealth of diversity from key areas within the crater and delta that the Perseverance team is interested in depositing some of the collection tubes at a designated site on Mars in about two months, Farley said. Once the rover drops off the samples at this cache depot, it will continue exploring the delta. Future missions can collect these samples and return them to Earth for analysis using some of the most sensitive and advanced instruments on the planet. It’s unlikely that Perseverance will find undisputed evidence of life on Mars because the burden of proof for establishing it on another planet is so high, Farley said. “I’ve studied Martian habitability and geology for much of my career and know first-hand the incredible scientific value of returning a carefully collected set of Mars rocks to Earth,” said Laurie Leshin, director of NASA’s Jet Propulsion Laboratory, in a statement. “That we are weeks from deploying Perseverance’s fascinating samples and mere years from bringing them to Earth so scientists can study them in exquisite detail is truly phenomenal. We will learn so much.” Some of the diverse rocks in the delta were about 65.6 feet (20 meters) apart, and they each tell different stories. One piece of sandstone, called Skinner Ridge, is evidence of rocky material that was likely transported into the crater from hundreds of miles away, representing material that the rover won’t be able to travel to during its mission. Wildcat Ridge, on the other hand, preserves evidence of clays and sulfates that layered together and formed into rock. Once the samples are in labs on Earth, they could reveal insights about potentially habitable Martian environments, such as chemistry, temperature and when the material was deposited in the lake. “I think it’s safe to say that these are two of the most important samples that we will collect on this mission,” said David Shuster, Perseverance return sample scientist at the University of California, Berkeley.
Cosmology & The Universe
Scientists are using atomic clocks to investigate some of the universe's greatest mysteries, including the nature of dark matter, in a laboratory. In the process, they say they're bringing cosmology and astrophysics "down to Earth." The project, which is a collaboration between the University of Sussex and the National Physical Laboratory (NPL) in the U.K., uses the ticks of these incredibly precise clocks to hunt for hitherto unknown ultra-light particles. These particles could be connected to dark matter, the mysterious substance that makes up an estimated 85% of all matter in the universe but remains effectively invisible to us because it does not interact with light or, more precisely, electromagnetic radiation. Scientists believe most galaxies are enveloped by a cloud of dark matter, but its presence can only be inferred by the effect it has on gravity. "Our universe, as we know it, is governed by laws of physics, so gravity is governed by general relativity and particle physics by the Standard Model of particle physics," Xavier Calmet, project leader and a professor of physics at the University of Sussex, told Space.com. "We call deviations from these laws 'breakdown in physics' — basically, that is a synonym for new physics beyond our current understanding of the universe." "One of the biggest mysteries is the nature of dark matter. We know that it is out there, we see its impact in our universe, but we don't have a valid explanation within the Standard Model of particle physics," Calmet continued. "There must be new physics, but we do not know how to describe these new particles and how they couple to regular matter." How can 'new physics' be spotted with atomic clocks? According to established laws of physics, clocks should tick at a constant rate, but physics beyond the Standard Model's scope would result in tiny changes in atomic energy levels. This should affect the rate at which clocks tick, but the variation would be so small it could only be spotted with an incredibly precise clock — and that's where atomic clocks come in. "Atomic clocks bring cosmology and astrophysics down to Earth, enabling searches for ultra-light particles that could explain dark matter in a laboratory," Calmet said. Atomic clocks measure time using atoms with two potential energy states. When atoms absorb energy, they go to a higher energy state. Then, they eventually release this energy and drop back down to their lower ground state. In atomic clocks, groups of atoms are prepared by placing them in a higher energy state using microwave energy, and the characteristic and consistent rates at which they vibrate between states — their resonance frequencies — are used to precisely measure time. So, for example, all atoms of cesium resonate at the same frequency, meaning the standard measure of a second can be defined as 9,192,631,770 cycles of cesium. Because this cycling per second occurs with far less variation than, say, the swinging of a pendulum, this makes atomic clocks incredibly precise. "It has been recently realized that dark matter could be made of ultra-light particles that interact extremely weakly with regular matter," Calmet explained. "If that is the case, dark matter would essentially behave as a classical wave that interacts with electrons and protons. This dark matter wave would give some small kicks to these particles."
Calmet added that these ultra-light dark matter particle kicks to the building blocks of the atom would lead to a time variation in fundamental constants of the universe, such as the fine-structure constant or "alpha" — a measure of how strong particles couple via the electromagnetic force — and the mass of the proton. "Because atomic clocks are amazingly precise devices, they would be able to detect these kicks and thus discover ultra-light dark matter," he continued. "By comparing two clocks, one sensitive to changes in alpha and the other one less sensitive to changes in alpha, we can obtain a limit on the time variation of this fundamental constant and thus set constraints on ultra-light particles." Calmet thinks the technique could potentially also be used to investigate another problematic aspect of the universe for physicists: Dark energy, the unknown force that is driving the accelerating expansion of space. While Calmet acknowledges that dark energy is more likely explained by the cosmological constant, a form of energy that acts almost in opposition to gravity to stretch the fabric of space and push apart galaxies, there is a small chance it could be connected to an ultra-light particle. In this vein, future clocks could also be sensitive to that particle and its associated wave. "While the clocks have not discovered new physics at this stage, we were able to develop a new theoretical framework to probe generic new physics with clocks and were able to derive the first model-independent limits on physics beyond the standard model within this approach," Calmet concluded. "We are creating a new field at the interface of atomic, molecular, and optical physics and traditional particle physics. "These are exciting results!" The results are set to be published in a future edition of the New Journal of Physics. Live Science newsletter Stay up to date on the latest science news by signing up for our Essentials newsletter. Robert Lea is a science journalist in the U.K. who specializes in science, space, physics, astronomy, astrophysics, cosmology, quantum mechanics and technology. Rob's articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.’s Open University
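The comparison Calmet describes, one clock that is sensitive to alpha against one that is not, can be written down in a few lines. The Python sketch below turns a hypothetical drift in the frequency ratio into a bound on the fractional variation of alpha; the sensitivity coefficients and the drift value are illustrative stand-ins, not results from the Sussex-NPL work.

```python
# Toy version of the clock-comparison argument (all numbers are made up).
# Each transition frequency scales roughly as nu ~ alpha^K for some sensitivity
# coefficient K, so the ratio of two clocks obeys
#     d ln(nu1/nu2)/dt = (K1 - K2) * d ln(alpha)/dt
K1 = 0.01     # a transition only weakly sensitive to alpha (illustrative)
K2 = -6.0     # a highly sensitive transition (illustrative)

ratio_drift_per_day = 5e-18    # hypothetical measured fractional drift of nu1/nu2

dlnalpha_per_day = ratio_drift_per_day / (K1 - K2)
dlnalpha_per_year = dlnalpha_per_day * 365.25
print(f"implied d(ln alpha)/dt ~ {dlnalpha_per_year:.1e} per year")
```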
Cosmology & The Universe
The Webb Space Telescope caught Mars in two different wavelengths using the Near-Infrared Camera. Graphic: NASA, ESA, CSA, STScI, Mars JWST/GTO team. The Webb Space Telescope has amazed us with pictures of distant galaxies and glittering nebulae, but now it’s returned its first images of our solar system neighbor Mars. The newly released data comes from the telescope’s infrared instruments and provides scientists with information about Mars’ surface and atmospheric composition. The European Space Agency announced today that the Webb Space Telescope caught its first glimpse of Mars on September 5 using the Near-Infrared Camera and Near-Infrared Spectrograph. This first view of Mars from the telescope is actually made up of two images taken at two infrared wavelengths. It shows the sun-soaked Eastern Hemisphere of the planet, which Webb observed from its vantage point nearly 932,000 miles (1.5 million kilometers) away at Sun-Earth Lagrange Point 2. Graphic: NASA/ESA/CSA/STScI and Mars JWST/GTO team. The first image (top right in the above graphic) shows a view of Mars in 2.1 microns, while the second (bottom right) was taken in 4.3 microns—both wavelengths that correspond to the near infrared spectrum, which is not visible to the human eye. Webb also collected some spectroscopic data on the Martian atmosphere using the Near-Infrared Spectrograph. The spectrograph reveals some of the molecules that make up Mars’ incredibly thin atmosphere, including carbon dioxide, water, and carbon monoxide, which correspond to the highlighted dips in the graph. While this isn’t a groundbreaking conclusion, it does show Webb’s ability to accurately characterize the atmospheric composition of different planets. Graphic: NASA/ESA/CSA/STScI and Mars JWST/GTO team. By pointing Webb’s gaze at a well-studied planet like Mars, scientists can establish how reliable the high-tech telescope is at studying very distant celestial bodies. As Webb continues to set its sights on objects clear across the universe, we’re still excited to get new views of our cosmic backyard, like the recent view of a glowing Jupiter and its auroras.
Cosmology & The Universe
NASA released the first full-color images from the James Webb Space Telescope on Monday, images the space agency says are the deepest and highest resolution ever taken of the universe.The images capture dying stars, nebulas, colliding galaxies and more. Besides being breathtaking, what do these images mean?The James Webb Space Telescope, NASA’s biggest, most expensive and powerful telescope to date, was launched in December 2021 with the intention of spending five to 10 years studying a plethora of things, including the formation of the universe’s earliest galaxies and the developments of our own solar system.NASA's James Webb Space Telescope reveals Stephan's Quintet in a new light. A visual grouping of five galaxies, is best known for being prominently featured in the holiday classic film, "It’s a Wonderful Life."Space Telescope Science Institute/NASANadia Drake writes about space for National Geographic. She spoke with ABC News’ “Start Here” podcast on Wednesday about what these images mean and what they say about human development closer to home.START HERE: Nadia, what kind of words would you use to describe these photos?DRAKE: Oh, my gosh. Well, so we saw the first of these images on Monday night, and that was the deepest ever image in the infrared taken of our universe. We're looking so far back in time. Some of the light in that image has traveled for 13.1 billion years before it reached the camera.And that to me is just magnificent, right? That's one of the words I would use is “magnificent, awe-inspiring.”They make me contemplate what we know about reality, what we understand about it, because it's not often that you get to see the universe in a new light through a new set of eyes.That doesn't happen very often and so each time you get that fresh glimpse of the cosmos where an instrument is revealing truths that you didn't even know to ask about. I feel like it's just transcendent.START HERE: And what are we looking at? What are these pictures that we saw yesterday?DRAKE: There were four other data products that were released.One is a spectrum of an exoplanet atmosphere. And so in that spectrum, which is basically telling us what some of the atmospheric composition of that planet is, we learned that there's a lot of water vapor and that there is a surprising amount of clouds and haze. And this is a planet that orbits its star in about three or so days. It's about half the mass of Jupiter. So it's not the type of world that we see in our own solar system.And then there are three more images.One is of a nebula that's blown by a dying star. So this one looks kind of like an eyeball. It looks like a ring that's surrounding a star that has reached the end of its life. And so for me, it's kind of poignant because as the star was dying, it was actually exhaling all of this matter into the cosmos. This dust and gas and these elements that are going to seed planet formation or star formation, it's kind of nourishing the cosmos as it's dying.And then there's another nebula which looks at the opposite side of the stellar life cycle, which is Star Birth. And this is the Carina Nebula. And it's this region of space that looks so much like a landscape. It's almost mind blowing. 
You look at it and it looks like you're seeing terrestrial formations: cliffs, gullies, things that you would imagine seeing on Earth. But they're actually not. It's space; these are cosmic ingredients. And instead of seeing things like trees dappling the landscape, you're seeing new stars that are in the process of being born.

And so it's this stellar nursery that's just so chaotic and energetic and beautiful.

So over the next year of planned observations, the telescope will be looking at additional exoplanets, which are worlds that are orbiting faraway stars, also looking at their atmospheric composition, trying to figure out what's in those atmospheres.

And maybe, maybe it's a long shot, but maybe some of those molecules could be signs of life.

START HERE: But wait, is there life out there? Have you gotten a sense from all of this whether there is a high likelihood that there's got to be life beyond?

DRAKE: So I have to preface my answer to this question with: I'm fairly biased.

My dad is the scientist who came up with the Drake Equation, which is a thought experiment that lets you calculate the number of detectable, intelligent civilizations in the Milky Way galaxy. So I've spent most of my life thinking about the fact that we are not alone on this planet.

Like, math says it's got to be out there somewhere. Math says it's got to be out there.

And as I've been in this job, it just makes more and more sense to me that we are not alone.

Image: Technicians lift the mirror of the James Webb Space Telescope using a crane at the Goddard Space Flight Center in Greenbelt, Md., on April 13, 2017, in this photo provided by NASA. (Laura Betz/AP, FILE)

I think there is definitely other life out there, certainly microbial. We can argue about technology and civilizations, but it just seems improbable for us to be alone.

The telescope will also be looking at some of the earliest galaxies that formed. It will be looking at regions of star formation like we saw in the Carina Nebula, but also regions that are fueled by colliding galaxies.

START HERE: Colliding galaxies.

DRAKE: Colliding galaxies.

START HERE: So dramatic, space.

DRAKE: So dramatic. And so, actually, the last image that was unveiled yesterday was of a quintet of galaxies, two of which are in the act of colliding.

And as [the galaxies] are doing so, they're actually fueling star formation.

It's all just very circular. You think about life and death in the cosmos. And one of the most beautiful comments I heard from a scientist was that "the story of us, the story of humans, is actually written in the stars."

You can't really understand life on Earth without understanding the origin of stars and galaxies and the chemical elements that create life as we know it.

And so I think that's what this telescope is going to help us do: connect those dots. How do you get from those very first stars to a planet Earth where you have us?

START HERE: Fascinating images, fascinating story behind it. Nadia Drake from Nat Geo, thank you so much.
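For reference, the Drake Equation mentioned in the interview is conventionally written as a product of seven factors (this is the standard textbook form, not something quoted in the interview):

N = R* · fp · ne · fl · fi · fc · L

where R* is the rate of star formation in the galaxy, fp is the fraction of stars with planets, ne is the number of potentially habitable planets per such system, fl, fi and fc are the fractions of those on which life, intelligence and detectable technology arise, and L is the average time a civilization remains detectable. Every factor after R* is highly uncertain, which is why it works better as a thought experiment than as a calculator.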
Cosmology & The Universe
The universe may have started with a dark Big Bang

The Big Bang may not have been alone. The appearance of all the particles and radiation in the universe may have been joined by another Big Bang that flooded our universe with dark matter particles. And we may be able to detect it.

In the standard cosmological picture, the early universe was a very exotic place. Perhaps the most momentous thing to happen in our cosmos was the event of inflation, which at very early times after the Big Bang sent our universe into a period of extremely rapid expansion. When inflation ended, the exotic quantum fields that drove that event decayed, transforming themselves into the flood of particles and radiation that remain today. When our universe was less than 20 minutes old, protons and neutrons began to assemble into the first light nuclei during what we call Big Bang nucleosynthesis. Big Bang nucleosynthesis is a pillar of modern cosmology, as the calculations behind it accurately predict the amount of hydrogen and helium in the cosmos.

However, despite the success of our picture of the early universe, we still do not understand dark matter, the mysterious and invisible form of matter that takes up the vast majority of mass in the cosmos. The standard assumption in Big Bang models is that whatever process generated particles and radiation also created the dark matter, and that after that the dark matter just hung around, ignoring everybody else.

But a team of researchers has proposed a new idea: our inflation and Big Bang nucleosynthesis eras were not alone. Dark matter may have evolved along a completely separate trajectory. In this scenario, when inflation ended it still flooded the universe with particles and radiation, but not dark matter. Instead, some quantum field remained that did not decay away. As the universe expanded and cooled, that extra quantum field did eventually transform itself, triggering the formation of dark matter.

The advantage of this approach is that it decouples the evolution of dark matter from normal matter, so that Big Bang nucleosynthesis can proceed as we currently understand it while the dark matter evolves along a separate track. It also opens up a rich variety of theoretical models of dark matter to explore, because a separate evolutionary track is easier to follow in the calculations and compare against observations. For example, the team behind the paper was able to determine that if there was a so-called dark Big Bang, it had to happen when our universe was less than one month old.

The research also found that a dark Big Bang would have released a very distinctive signature of strong gravitational waves that would persist into the present-day universe. Ongoing experiments like pulsar timing arrays should be able to detect these gravitational waves, if they exist. We do not yet know if a dark Big Bang happened, but this work gives a clear pathway to testing the idea.

The study is published on the arXiv preprint server.

More information: Katherine Freese et al, Dark Matter and Gravity Waves from a Dark Big Bang, arXiv (2023). DOI: 10.48550/arxiv.2302.11579

Journal information: arXiv

Provided by Universe Today
Cosmology & The Universe
By Layne Cameron - Michigan State University

High-speed lasers are helping to shine a spotlight on the unusual chemistry of the molecule that made the universe: trihydrogen, or H3+.

Image Credit: WikiImages via Pixabay, edited by Universal-Sci

H3+ is prevalent in the universe, the Milky Way, gas giants, and the Earth's ionosphere. The lab of Marcos Dantus, a professor in chemistry and physics at Michigan State University, is also creating and studying it. The researchers are using ultrafast lasers—and technology Dantus invented—to begin to understand the chemistry of this iconic molecule. The new research appears in Nature Communications and the Journal of Chemical Physics.

"Observing how roaming H2 molecules evolve to H3+ is nothing short of astounding," Dantus says. "We first documented this process using methanol; now we've been able to expand and duplicate this process in a number of molecules and identified a number of new pathways."

The Small Picture

Astrochemists see the big picture, observing H3+ and defining it through an interstellar perspective. It's created so fast—in less time than it takes a bullet to cross an atom—that it's extremely difficult to figure out how three chemical bonds break and three new ones form in such a short timescale.

That's when chemists using femtosecond lasers come into play. Rather than study the stars using a telescope, Dantus' team literally looks at the small picture. The researchers view the entire procedure at the molecular level and measure it in femtoseconds—one femtosecond is a millionth of a billionth of a second. The process the team views takes between 100 and 240 femtoseconds. Dantus knows this because the clock starts when he fires the first laser pulse. The second laser pulse then "sees" what's happening.

Image Credit: NASA/ESA/The Hubble Heritage Team (STScI/AURA)

The two-laser technique revealed the hydrogen transfer, as well as the hydrogen-roaming chemistry, that's responsible for H3+ formation. Roaming mechanisms briefly generate a neutral molecule (H2) that stays in the vicinity and extracts a third hydrogen atom to form H3+. And it turns out there's more than one way it can happen. In one experiment involving ethanol, the team revealed six potential pathways, confirming four of them.

Since laser pulses are comparable to sound waves, Dantus' team discovered a "tune" that enhances H3+ formation and one that discourages formation. When converting these "shaped" pulses to a slide whistle, successful formation happens when the note starts flat, rises slightly and finishes with a downward, deeper dive. The song is music to the ears of chemists, who can envision many potential applications for this breakthrough.

The Chemistry of Life

"These chemical reactions are the building blocks of life in the universe," Dantus says. "The prevalence of roaming hydrogen molecules in high-energy chemical reactions involving organic molecules and organic ions is relevant not only for materials irradiated with lasers, but also materials and tissues irradiated with x-rays, high-energy electrons, positrons, and more."

This study reveals chemistry that is relevant to the universe's formation of water and organic molecules. The secrets it could unlock, from astrochemical to medical, are endless, he adds.

Researchers from Kansas State University also contributed to the Nature Communications study. The Department of Energy and the National Science Foundation funded the work.
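The "bullet crossing an atom" comparison above survives a back-of-the-envelope check. A quick sketch in Python (the bullet speed and atom size are generic illustrative values, not numbers from the study):

# How long does a bullet take to cross a single atom?
bullet_speed = 1000.0   # m/s, a typical rifle bullet (illustrative assumption)
atom_width = 1e-10      # m, roughly one angstrom

crossing_time_s = atom_width / bullet_speed   # 1e-13 seconds
crossing_time_fs = crossing_time_s / 1e-15    # 1 femtosecond = 1e-15 s

print(crossing_time_fs)  # 100.0 fs -- right at the 100-240 fs window measured for H3+ formation

So the measured formation window really is on the order of a bullet's transit time across one atom.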
Source: Michigan State University via Futurity. Original study DOI: 10.1038/s41467-018-07577-0
Cosmology & The Universe
By Paul Kyberd - Senior Lecturer in Particle Physics Informatics, Brunel University London

The epoch of the leptons existed for nine seconds after the Big Bang - Image Credit: Yinweichen/Simon Villeneuve via Wikimedia Commons

It is often claimed that the Ancient Greeks were the first to identify objects that have no size, yet are able to build up the world around us through their interactions. And as we are able to observe the world in tinier and tinier detail through microscopes of increasing power, it is natural to wonder what these objects are made of.

We believe we have found some of these objects: subatomic particles, or fundamental particles, which, having no size, can have no substructure. We are now seeking to explain the properties of these particles and working to show how they can be used to explain the contents of the universe.

There are two types of fundamental particles: matter particles, some of which combine to produce the world about us, and force particles – one of which, the photon, is responsible for electromagnetic radiation. These are classified in the standard model of particle physics, which theorises how the basic building blocks of matter interact, governed by fundamental forces. Matter particles are fermions, while force particles are bosons.

Matter particles: quarks and leptons

Matter particles are split into two groups: quarks and leptons – there are six of each, and they come in pairs.

Leptons are divided into three pairs. Each pair has an elementary particle with a charge and one with no charge – one that is much lighter and extremely difficult to detect. The lightest of these pairs is the electron and electron-neutrino. The charged electron is responsible for electric currents. Its uncharged partner, known as the electron-neutrino, is produced copiously in the sun, and these interact so weakly with their surroundings that they pass unhindered through the Earth. Tens of billions of them pass through every square centimetre of your body every second, day and night.

Electron-neutrinos are produced in unimaginable numbers during supernova explosions, and it is these particles that disperse elements produced by nuclear burning into the universe. These elements include the carbon from which we are made, the oxygen we breathe, and almost everything else on Earth. Therefore, in spite of the reluctance of neutrinos to interact with other fundamental particles, they are vital for our existence. The other two neutrino pairs (called muon and muon-neutrino, tau and tau-neutrino) appear to be just heavier versions of the electron pair.

Since normal matter does not contain these particles, it may seem that they are an unnecessary complication. However, during the first one to ten seconds of the universe following the Big Bang, they had a crucial role to play in establishing the structure of the universe in which we live – known as the Lepton Epoch.

Image Credit: Triff via Shutterstock - HDR tune by Universal-Sci

The six quarks are also split into three pairs with whimsical names: "up" with "down", "charm" with "strange", and "top" with "bottom" (previously called "truth" and "beauty", though regrettably changed). The up and down quarks stick together to form the protons and neutrons which lie at the heart of every atom.
Again, only the lightest pair of quarks is found in normal matter. The charm/strange and top/bottom pairs seem to play no role in the universe as it now exists but, like the heavier leptons, they played a role in the early moments of the universe and helped to create one that is amenable to our existence.

Force particles

There are six force particles in the standard model, which create the interactions between matter particles. They correspond to four fundamental forces: gravitational, electromagnetic, strong and weak.

A photon is a particle of light and is responsible for electric and magnetic fields, which are created by the exchange of photons from one charged object to another.

The gluon produces the force responsible for holding quarks together to form protons and neutrons, and for holding those protons and neutrons together to form heavier nuclei.

Three particles named the "W plus", the "W minus" and the "Z zero" – referred to as intermediate vector bosons – are responsible for the process of radioactive decay and for the processes in the sun which cause it to shine. A sixth force particle, the graviton, is believed to be responsible for gravitation, but has not yet been observed.

Anti-matter: the science fiction reality

We also know of the existence of anti-matter. This is a concept much beloved by science fiction writers, but it really does exist. Anti-matter particles have been frequently observed. For example, the positron (the anti-particle of the electron) is used in medicine to map our internal organs using positron emission tomography (PET). Famously, when a particle meets its anti-particle, they both annihilate each other and a burst of energy is produced. A PET scanner is used to detect this.

Each of the matter particles above has a partner particle which has the same mass but opposite electric charge, so we can double the number of matter particles (six quarks and six leptons) to arrive at a final number of 24.

We give matter quarks a number of +1 and anti-matter quarks a value of -1. If we add the number of matter quarks to the number of anti-matter quarks, we get the net number of quarks in the universe, and this never varies. If we have enough energy we can create any of the matter quarks, as long as we create an anti-matter quark at the same time. In the early moments of the universe these particles were being created continuously – now they are only created in the collisions of cosmic rays with the atmospheres of planets and stars.

The famous Higgs boson

There is a final particle which completes the roll call of particles in what is referred to as the standard model of particle physics. It is the Higgs, predicted by Peter Higgs 50 years ago, and whose discovery at CERN in 2012 led to a Nobel Prize for Higgs and Francois Englert.

The Higgs boson is an odd particle: it is the second heaviest of the standard model particles and it resists a simple explanation. It is often said to be the origin of mass, which is true but misleading. It gives mass to the quarks, and quarks make up the protons and neutrons, but only 2% of the mass of protons and neutrons is provided by the quarks; the rest comes from the energy in the gluons.

At this point we have accounted for all the particles required by the standard model: six force particles, 24 matter particles and one Higgs particle – a total of 31 fundamental particles.
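That tally is easy to make explicit. A minimal sketch following the article's own counting convention (note that it includes the as-yet-unobserved graviton among the force particles):

# Tally of fundamental particles, counted as in the article.
quarks = ["up", "down", "charm", "strange", "top", "bottom"]
leptons = ["electron", "electron-neutrino",
           "muon", "muon-neutrino",
           "tau", "tau-neutrino"]

matter = len(quarks) + len(leptons)     # 12
matter_total = 2 * matter               # 24: each matter particle has an antiparticle

# Force carriers as counted here (the graviton is hypothetical and unobserved).
force = ["photon", "gluon", "W plus", "W minus", "Z zero", "graviton"]

total = matter_total + len(force) + 1   # +1 for the Higgs boson
print(total)                            # 31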
Despite what we know about them, their properties have not been measured well enough to allow us to say definitively that these particles are all that is needed to build the universe we see around us, and we certainly don't have all the answers. The next run of the Large Hadron Collider will allow us to refine our measurements of some of these properties – but there is something else.

The Large Hadron Collider - Image Credit: D-VISIONS via Shutterstock - HDR tune by Universal-Sci

Yet the theory is still wrong

The beautiful theory, the standard model, has been tested and re-tested over two decades and more, and we have not yet made a measurement that contradicts our predictions. But we know that the standard model must be wrong. When we collide two fundamental particles together, a number of outcomes are possible. Our theory allows us to calculate the probability that any particular outcome can occur, but at energies beyond those we have so far achieved, it predicts that some of these outcomes occur with a probability of greater than 100% – clearly nonsense.

Theoretical physicists have spent much effort in trying to construct a theory which gives sensible answers at all energies, while giving the same answer as the standard model in every circumstance in which the standard model has been tested.

The most common modification implies that there are very heavy undiscovered particles. The fact that they are heavy means lots of energy will be needed to produce them. The properties of these extra particles can be chosen to make sure the resulting theory gives sensible answers at all energies, while having no effect on the measurements that agree so well with the standard model.

The number of these undiscovered and as-yet-unseen particles depends on which theory you choose to believe. The most popular class of these theories are called supersymmetric theories, and they imply that all the particles we have seen have a much heavier counterpart. However, if those counterparts are too heavy, the theory's problems will reappear at energies we can reach before the particles themselves are found. The energies that will be reached in the next run of the LHC are high enough that an absence of new particles will be a blow to all supersymmetric theories.

Source: The Conversation
Cosmology & The Universe
Artist rendering of Uranus - Image Credit: buradaki via Shutterstock

If you've got really good eyesight and can find a place where the light pollution is non-existent, you might be able to see Uranus without a telescope. It's only possible with the right conditions, and if you know exactly where to look. And for thousands of years, scholars and astronomers were doing just that. But given that it was just a tiny pinprick of light, they believed Uranus was a star.

It was not until the late 18th century that the first recorded observation recognizing Uranus as a planet took place. This occurred on March 13th, 1781, when British astronomer Sir William Herschel observed the planet using a telescope of his own creation. From this point onwards, Uranus would be recognized as the seventh planet and the third gas giant of the Solar System.

Observations pre-18th Century:

The first recorded instance of Uranus being spotted in the night sky is believed to date back to Classical Antiquity. During the 2nd century BCE, Hipparchos – the Greek astronomer, mathematician and founder of trigonometry – apparently recorded the planet as a star in his star catalogue (completed in 129 BCE).

William Herschel's telescope, through which the planet Uranus was first observed. - Image Credit: Wikimedia Commons

This catalog was later incorporated into Ptolemy's Almagest, which became the definitive source for Islamic astronomers and for scholars in Medieval Europe for over one thousand years. During the 17th and 18th centuries, multiple recorded sightings were made by astronomers who also catalogued it as a star.

These included English astronomer John Flamsteed, who in 1690 observed the star on six occasions and catalogued it as a star in the Taurus constellation (34 Tauri). During the mid-18th century, French astronomer Pierre Lemonnier made twelve recorded sightings, also recording it as a star. It was not until March 13th, 1781, when William Herschel observed it from his garden house in Bath, that Uranus' true nature began to be revealed.

Herschel's Discovery:

On the evening in question – March 13th, 1781 – William Herschel was surveying the sky with his telescope, looking for binary stars. His first report on the object was recorded on April 26th, 1781. Initially, he described it as being a "Nebulous star or perhaps a comet", but later settled on it being a comet, since it appeared to have changed its position in the sky.

Portrait of Sir William Herschel, by Lemuel Francis Abbott (1784). - Image Credit: Wikimedia Commons/National Portrait Gallery

When he presented his discovery to the Royal Society, he maintained this theory, but also likened it to a planet. As was recorded in the Journal of the Royal Society and Royal Astronomical Society on the occasion of his presentation:

"The power I had on when I first saw the comet was 227. From experience I know that the diameters of the fixed stars are not proportionally magnified with higher powers, as planets are; therefore I now put the powers at 460 and 932, and found that the diameter of the comet increased in proportion to the power, as it ought to be, on the supposition of its not being a fixed star, while the diameters of the stars to which I compared it were not increased in the same ratio. Moreover, the comet being magnified much beyond what its light would admit of, appeared hazy and ill-defined with these great powers, while the stars preserved that lustre and distinctness which from many thousand observations I knew they would retain.
The sequel has shown that my surmises were well-founded, this proving to be the Comet we have lately observed."

While Herschel would continue to maintain that what he observed was a comet, his "discovery" stimulated debate in the astronomical community about what Uranus was. In time, astronomers like Johann Elert Bode would conclude that it was a planet, based on its nearly-circular orbit. By 1783, Herschel himself acknowledged to the Royal Society that it was a planet.

Naming:

As he lived in England, Herschel originally wanted to name Uranus after his patron, King George III. Specifically, he wanted to call it Georgium Sidus (Latin for "George's Star"), or the Georgian Planet. Although this was a popular name in Britain, the international astronomy community didn't think much of it, and wanted to follow the historical precedent of naming the planets after ancient Greek and Roman gods.

Large floor mosaic from a Roman villa in Sassoferrato, Italy (ca. 200–250 CE). Aion (Uranus), the god of eternity, stands above Tellus (Gaia) and her four children (the seasons). - Image Credit: Wikimedia Commons/Bibi Saint-Pol

Consistent with this, Bode proposed the name Uranus in a 1782 treatise. The Latin form of Ouranos, Uranus was the grandfather of Zeus (Jupiter in the Roman pantheon), the father of Cronos (Saturn), and the king of the Titans in Greek mythology. As it was discovered beyond the orbits of Jupiter and Saturn, the name seemed highly appropriate.

In the following century, Neptune would be discovered – the last of the eight official planets currently recognized by the IAU. And by the 20th century, astronomers would discover Pluto and other minor planets within the Kuiper Belt. The process of discovery has been ongoing, and will likely continue for some time to come.

We have written many articles about planetary discovery here at Universe Today. Here's Who Discovered Mercury?, Who Discovered Venus?, Who Discovered Earth?, Who Discovered Mars?, Who Discovered Jupiter?, Who Discovered Saturn?, Who Discovered Neptune?, and Who Discovered Pluto?

Here's an article from the Hubble educational site about the discovery of Uranus, and here's the NASA Solar System Exploration page on Uranus.

We have recorded an episode of Astronomy Cast just about Uranus. You can access it here: Episode 62: Uranus.

Sources:
Universe Today
NASA: Solar System Exploration – Uranus
Windows to the Universe – Uranus
Space Facts – Uranus
Wikipedia – Uranus
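Herschel's magnification test, quoted above, is quantitative at heart: through an eyepiece, a genuinely extended disc grows in apparent size in proportion to the magnifying power, while a point-like star stays stuck at roughly the blur size set by the optics and the atmosphere. A toy illustration in Python (all numbers are invented for the example; they are not Herschel's measurements):

# Toy model of Herschel's test: does apparent size grow with magnifying power?
def apparent_size(true_diameter, power, blur=2.0):
    # Perceived size: the magnified disc plus a fixed blur contribution
    # (a deliberate simplification of diffraction and atmospheric seeing).
    return true_diameter * power + blur

star = 0.0      # a point source: effectively zero true diameter
planet = 0.06   # a small disc, in arbitrary angular units (illustrative)

for power in (227, 460, 932):   # the three powers Herschel quotes
    print(power, apparent_size(star, power), apparent_size(planet, power))
# The star column stays near 2.0 at every power; the planet column
# roughly doubles each time the power doubles, as Herschel observed.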
Cosmology & The Universe
We've now seen farther, deeper and more clearly into space than ever before. The first image from the James Webb Space Telescope, released in a White House briefing on July 11, shows thousands of distant galaxies. The galaxies captured here lie behind a cluster of galaxies about 4.6 billion light-years away. The mass of that cluster distorts spacetime in such a way that objects behind it are magnified, giving astronomers a way to peer about 13 billion years into the early universe.

Even with that celestial assist, other existing telescopes could never see so far. But the James Webb Space Telescope, also known as JWST, is incredibly large — at 6.5 meters across, its mirror is nearly three times wider than that of the Hubble Space Telescope. It also sees in the infrared wavelengths of light where distant galaxies appear. Those features give it an edge over previous observatories.

"The James Webb Space Telescope allows us to see deeper into space than ever before, and in stunning clarity," said Vice President Kamala Harris in the July 11 briefing. "It will enhance what we know about the origins of our universe, our solar system, and possibly life itself."

Although this first image represents the deepest view of the cosmos to date, "this is not a record that will stand for very long," astronomer Klaus Pontoppidan of the Space Telescope Science Institute in Baltimore said in a June 29 news briefing. "Scientists will very quickly beat that record and go even deeper."

And this image is just the first. On July 12, astronomers plan to release first images of a stellar birthplace, a nebula surrounding a dying star, and a group of closely interacting galaxies, plus the first spectrum of an exoplanet's light, a clue to its composition. All these images are a glimpse of what JWST will continue to reveal over its decade-plus planned mission.

This first image has been a very long time coming. The telescope that would become JWST was first dreamed up in the 1980s, and the planning and construction suffered years of budget issues and delays (SN: 10/6/21). The telescope finally launched on December 25. It then had to unfold and assemble itself in space, travel to a gravitationally stable spot about 1.5 million kilometers from Earth, align its insectlike primary mirror made of 18 hexagonal segments and calibrate its science instruments (SN: 1/24/22). There were hundreds of possible points of failure in that process, but the telescope unfurled successfully and got to work.

The James Webb Space Telescope (illustrated) spent months unfolding and calibrating its instruments after it launched on December 25. - Image Credit: Adriana Manrique Gutierrez/CIL/GSFC/NASA

In the months following, the telescope team released teasers of imagery from calibration, which already showed hundreds of distant, never-before-seen galaxies. But the images now being released are the first full-color pictures made from the data scientists will use to start unraveling mysteries of the universe.

For the telescope team, the relief in finally seeing the first images was palpable. "It was like, 'Oh my god, we made it!'" says image processor Alyssa Pagan, also of the Space Telescope Science Institute. "It seems impossible. It's like the impossible happened."

In anticipation of the excitement surrounding the first batch of images, the imaging team was sworn to secrecy.
"I couldn't even share it with my wife," says Pontoppidan, leader of the team that produced the first color science images.

"You're looking at the deepest image of the universe yet, and you're the only one who's seen that," he says. "It's profoundly lonely."

Soon, though, the team of about 30 scientists, image processors and science writers was seeing something new every day for weeks as the telescope downloaded the first images. "It's a crazy experience," Pontoppidan says. "Once in a lifetime."

For Pagan, the timing is perfect. "It's a very unifying thing," she says. "The world is so polarized right now. I think it could use something that's a little bit more universal and connecting. It's a good perspective, to be reminded that we're part of something so much greater and beautiful."

This story will be updated as more images are released.
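The mirror comparison earlier in the piece is easy to check. Hubble's primary mirror is 2.4 meters across (a standard figure, not stated in the article), so:

# JWST vs. Hubble mirror: diameter ratio and light-collecting area.
jwst_m = 6.5     # meters, from the article
hubble_m = 2.4   # meters, Hubble's primary mirror (assumed standard value)

print(round(jwst_m / hubble_m, 2))         # 2.71 -> "nearly three times wider"
print(round((jwst_m / hubble_m) ** 2, 1))  # 7.3  -> about 7x the collecting area

Collecting area scales with the square of the diameter, which is why a "nearly three times wider" mirror gathers roughly seven times more light.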
Cosmology & The Universe
The James Webb space telescope has turned its gaze away from the deep universe towards our home solar system, capturing an image of a luminous Neptune and its delicate, dusty rings in detail not seen in decades.

The last time astronomers had such a clear view of the farthest planet from the sun was when Nasa's Voyager 2 became the first and only space probe to fly past the ice giant, for just a few hours, in 1989.

Now Webb's unprecedented infrared imaging capability has provided a new glimpse into Neptune's atmosphere, said Mark McCaughrean, a senior adviser for science and exploration at the European Space Agency.

The telescope "takes all that glare and background away" so that "we can start to tease out the atmospheric composition" of the planet, said McCaughrean, who has worked on the Webb project for more than 20 years.

Neptune appears deep blue in previous images taken by the Hubble space telescope due to methane in its atmosphere.

Side-by-side photos of Neptune taken by Voyager 2 in 1989, Hubble in 2021 and Webb in 2022. Photograph: AP

However, the near-infrared wavelengths captured by Webb's primary imager NIRCam show the planet as greyish white, with icy clouds streaking the surface.

"The rings are more reflective in the infrared," McCaughrean said, "so they're much easier to see."

The image also shows an "intriguing brightness" near the top of Neptune, Nasa said in a statement. Because the planet is tilted away from Earth and takes 164 years to orbit the sun, astronomers have not yet had a good look at its north pole.

Webb also spotted seven of Neptune's 14 known moons. Looming over Neptune in a zoomed-out image is what appears to be a very bright spiky star, but is in fact Triton, Neptune's strange, huge moon, haloed with Webb's famed diffraction spikes.

Neptune and seven of its 14 known satellites, including Triton (top left). Photograph: Space Telescope Science Institute/ESA/Webb/AFP/Getty Images

Triton, which is larger than the dwarf planet Pluto, appears brighter than Neptune because it is covered in ice, which reflects light. Neptune meanwhile "absorbs most of the light falling on it", McCaughrean said.

Because Triton orbits the wrong way around Neptune, it is believed to have once been an object from the nearby Kuiper belt that was captured into the planet's orbit. "So it's pretty cool to go and have a look at," said McCaughrean.

As astronomers sweep the universe searching for other planets like our own, they have found that ice giants such as Neptune and Uranus are the most common in the Milky Way. "By being able to look at these ones in great detail, we can key into our observations of other ice giants," McCaughrean said.

Operational since July, Webb is the most powerful space telescope ever built, and has already unleashed copious unprecedented data. Scientists are hopeful it will herald a new era of discovery.

Research based on Webb's observations of Neptune and Triton is expected in the next year.

"The kind of astronomy we're seeing now was unimaginable five years ago," McCaughrean said. "Of course, we knew that it would do this, we built it to do this, it is exactly the machine we designed. But to suddenly start seeing things in these longer wavelengths, which were impossible before … it's just absolutely remarkable."
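The 164-year orbital period quoted above follows directly from Kepler's third law, given Neptune's distance from the sun of about 30.1 astronomical units (the distance is a standard value, not taken from this article). A one-line check in Python:

# Kepler's third law in solar units: P^2 = a^3, with P in years and a in AU.
a_neptune = 30.07              # semi-major axis in AU (assumed standard value)
period = a_neptune ** 1.5      # orbital period in years

print(round(period, 1))        # ~164.9 years, matching the article's 164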
Cosmology & The Universe
A team of astrophysicists has revealed an unusual discovery that they say appears to challenge our current understanding of gravity as described by Newton's law of universal gravitation, according to a newly published paper. The controversial claim, published in the Monthly Notices of the Royal Astronomical Society, appears to be consistent with alternative interpretations of one of physics' most mysterious fundamental interactions.

In their new study, the international team says they came upon the discovery while investigating open star clusters. These formations arise when thousands of stars are born within a relatively short time inside a collapsing gas cloud; the leftover gas is ejected as the young stars ignite, leaving loosely bound groups of anywhere from several dozen to several thousand stars.

Gravity's role in this process is that the weak gravitational attraction among the stars essentially serves as the glue that holds these clusters together.

Able to survive for hundreds of millions of years, these clusters do eventually begin to lose stars over time, resulting in the formation of a pair of "tidal tails" – one dragged behind the open star cluster as it moves through space, while the other protrudes ahead of the formation.

Based on Newton's law of universal gravitation, we would expect the stars in the cluster to be allocated to either of these tidal tails at random. However, that wasn't the case, according to the team behind the recent study, who found that one of the two tails consistently outperformed its star-grabbing counterpart.

"In the clusters we studied, the front tail always contains significantly more stars nearby to the cluster than the rear tail," according to Dr. Jan Pflamm-Altenburg of the Helmholtz Institute of Radiation and Nuclear Physics. "The asymmetry between the number of stars in the leading and trailing tails tests gravitational theory," the authors wrote in their paper.

Dr. Tereza Jerabkova, one of the paper's co-authors, says the research team was the first to develop a method for counting the stars that end up in each of a cluster's tidal tails. "When we analyzed all the data, we encountered [a] contradiction with the current theory," Jerabkova said in a statement, adding that the precision of the survey data from the ESA's pioneering Gaia mission had been "indispensable" in making their observations.

If not the traditionally accepted Newtonian concept of gravity, then what does this new, contradictory data seem to point to regarding the weakest of the four fundamental forces? The research team believes that a theory of gravity involving what is appropriately known as Modified Newtonian Dynamics (MOND) may offer the answer.

MOND proponents argue that observations of galaxies and their properties point to the need for modifications to Newton's law of universal gravitation. Significantly, such ideas could potentially resolve problems like the question of dark matter by offering alternative models to account for the behavior of galaxies, which in many instances do not appear to obey the laws of physics as we currently understand them.
"Put simply, according to MOND, stars can leave a cluster through two different doors," says Pavel Kroupa, the study's lead author, explaining that one "door" leads to the forward-facing tidal tail and the other to the one behind the cluster. However, as Kroupa notes, "the first is much narrower than the second – so it's less likely that a star will leave the cluster through it."

"Newton's theory of gravity, on the other hand, predicts that both doors should be the same width," Kroupa says.

Although team members note that the tools currently available for analyzing the modifications MOND would require of Newtonian dynamics are limited, calculations based on simulations nonetheless appear able to accurately predict the lifespans of open star clusters. These, according to the research team, are much shorter than Newton's laws would seem to allow, which for Kroupa and his team might even explain the mystery of why star clusters in nearby galaxies appear to vanish more quickly than astronomers expect.

Naturally, theories that require significant changes to our existing models of how the universe works are generally slow to win favor among scientists. Modifications to Newton's theory of gravity, while useful in helping resolve observations such as those in the team's recent study, would also have wider implications that could extend into virtually all areas of physics. But for Kroupa and his team, accepting and incorporating such ideas into our knowledge of the universe would be more helpful overall than anything else.

"[I]t solves many of the problems that cosmology faces today," Kroupa says.

The team's paper, "Asymmetrical tidal tails of open star clusters: stars crossing their cluster's práh challenge Newtonian gravitation," was published in Monthly Notices of the Royal Astronomical Society.

Micah Hanks is Editor-in-Chief and Co-Founder of The Debrief. Follow his work at micahhanks.com and on Twitter: @MicahHanks.
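For context, MOND's core quantitative claim (not spelled out in this article) is that Newtonian dynamics breaks down below a tiny acceleration scale, a0 of about 1.2 × 10^-10 m/s^2. With the commonly used "simple" interpolating function mu(x) = x/(1 + x), the effective acceleration g solves g · mu(g/a0) = gN, which tends to sqrt(gN · a0) when gN is far below a0. A minimal numerical sketch under those standard assumptions (this is generic MOND, not the paper's actual computation):

import math

A0 = 1.2e-10  # m/s^2, the MOND acceleration scale (standard literature value)

def mond_acceleration(g_newton):
    # Solve g * mu(g/A0) = g_newton with the 'simple' interpolating
    # function mu(x) = x / (1 + x); the resulting quadratic gives:
    return g_newton / 2 + math.sqrt(g_newton**2 / 4 + g_newton * A0)

# Accelerations in a cluster's outskirts can dip below A0:
for g_n in (1e-8, 1e-10, 1e-12):
    print(f"{g_n:.0e} -> {mond_acceleration(g_n):.2e}")
# Well below A0 the result approaches sqrt(g_n * A0), i.e. gravity is
# effectively stronger than Newton predicts -- the low-acceleration
# regime relevant to stars escaping a cluster.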
Cosmology & The Universe